
Scientists Building Robots an Internet of Their Own.


Reported by Kyle VanHemert in Gizmodo, Feb 9, 2011.

At the Swiss Federal Institute of Technology in Zurich, scientists are building RoboEarth, a sort of Wikipedia for robots that will let them independently share instructions for tasks they’ve mastered. Needless to say, Oh shit!

Dr Markus Waibel, a RoboEarth researcher, explains that a lack of standardization is keeping robots isolated and largely ineffective at actually helping humans in day to day life. RoboEarth would be a communication system and database for robots to upload, exchange, and download knowledge on a variety of topics. It could teach them how to clean up, say, or how to set the table. The RoboEarth site outlines the scope of the project:

RoboEarth will include everything needed to close the loop from robot to RoboEarth to robot. The RoboEarth World-Wide-Web style database will be implemented on a Server with Internet and Intranet functionality. It stores information required for object recognition (e.g., images, object models), navigation (e.g., maps, world models), tasks (e.g., action recipes, manipulation strategies) and hosts intelligent services (e.g., image annotation, offline learning).
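Neither the article nor the quoted description specifies a data format, but to make the idea concrete, here is a rough, purely illustrative Python sketch of the kind of record such a database might hold: an “action recipe” plus references to object models and a map. All field names are invented for illustration and are not RoboEarth’s actual schema.

    # Hypothetical sketch of a RoboEarth-style knowledge record (invented fields,
    # not the project's real schema). The database the article describes stores
    # object models for recognition, maps for navigation, and "action recipes"
    # describing how to carry out a task.

    action_recipe = {
        "task": "set_the_table",
        "required_objects": ["plate", "fork", "knife", "glass"],
        "object_models": {
            # references to stored recognition data (e.g. images or 3D models)
            "plate": "models/plate_v3.obj",
            "fork": "models/fork_v1.obj",
        },
        "environment_map": "maps/kitchen_floorplan_002",
        "steps": [
            {"action": "locate", "target": "plate"},
            {"action": "grasp", "target": "plate"},
            {"action": "navigate", "target": "dining_table"},
            {"action": "place", "target": "plate", "location": "setting_1"},
        ],
    }

    def can_execute(recipe, robot_capabilities):
        """Return True if a robot advertises every action the recipe needs."""
        needed = {step["action"] for step in recipe["steps"]}
        return needed <= set(robot_capabilities)

    print(can_execute(action_recipe, ["locate", "grasp", "navigate", "place"]))  # True

A robot would download such a record, check that it can perform each step, and fetch whatever object models or maps it is missing.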

“The key,” Dr. Waibel told the BBC, “is allowing robots to share knowledge. That’s really new.”

RoboEarth Diagram

The four-year project is funded by the European Union and currently employs 35 researchers. So far, they’ve succeeded in getting robots to upload updated maps of locations and to download a handful of task descriptions and execute them.

Now, Kyle VanHemert admits, getting all alarmist about a robot uprising is getting a little old at this point, but this does seem a bit worrisome! Surely all sorts of protocols will be in place to prevent, you know, undesirable tasks from getting disseminated (from the “mess up my human owner’s apartment!” robo-prank to the “tear my human owner limb from limb!” potentiality), but he is still not sure he likes the idea of his Roomba being able to download the knowledge of the robot hivemind over his Wi-Fi connection. He has torrents to download. [BBC via RoboEarth]


Hidden Fractals Suggest Answer to Ancient Math Problem.


Reported by Dave Mosher, in Wired Science, January 28, 2011.

Video: Zooming in on the Mandelbrot set, a famous fractal that illustrates repeating patterns in an infinite series. (YouTube/gooozz)

Researchers have found a fractal pattern underlying everyday math. In the process, they’ve discovered a way to calculate partition numbers, a challenge that’s stymied mathematicians for centuries.

Partition numbers track the different ways an integer can be divvied up. The number 3, for example, has three unique partitions: 3, 2 + 1, and 1 + 1 + 1. Partition numbers grow so fast that mathematicians have a hard time predicting them.

“The number 10 has 42 partitions, but with 100 you have 190,569,292 partitions. They get impossibly huge to add up,” said mathematician Ken Ono of Emory University.
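Those figures are easy to check by machine. Here is a minimal Python sketch that counts partitions with the standard dynamic-programming recurrence (for each allowed part size, either use it or don’t); it reproduces p(10) = 42 and p(100) = 190,569,292:

    def partition_count(n):
        """Count the partitions of n: p[m] accumulates, part size by part size,
        the number of ways to write m as a sum of parts no larger than k."""
        p = [0] * (n + 1)
        p[0] = 1  # the empty partition of 0
        for k in range(1, n + 1):          # allow parts of size k
            for m in range(k, n + 1):
                p[m] += p[m - k]           # extend a partition of m - k by one part k
        return p[n]

    print(partition_count(3))    # 3  ->  3, 2 + 1, 1 + 1 + 1
    print(partition_count(10))   # 42
    print(partition_count(100))  # 190569292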

Since the 18th century, generations of mathematicians have tried to find a way of predicting large partition numbers. Srinivasa Ramanujan, a self-taught prodigy from a remote Indian village, found a way to approximate partition numbers in 1919. Yet before he could expand on the work, and convert it to a clean equation, he died in 1920 at the age of 32. Mathematicians ever since have puzzled over Ramanujan’s manuscripts, which tie the primes 5, 7 and 11 to partition numbers.
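The ties to 5, 7 and 11 are the congruences commonly attributed to Ramanujan; stated here in modern notation for context (not quoted from the article), they say that for every non-negative integer n:

    p(5n + 4) ≡ 0 (mod 5)
    p(7n + 5) ≡ 0 (mod 7)
    p(11n + 6) ≡ 0 (mod 11)

For example, p(4) = 5 and p(9) = 30, both divisible by 5.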

Ken Ono and Zachary Kent. (Image: Emory University)

Inspired by Ramanujan’s work and that of the late mathematician A.O.L. Atkin, Emory mathematicians Amanda Folsom and Zachary Kent joined Ono to discover an infinite, fractal-like pattern to the series. It is described in a paper hosted by the American Institute of Mathematics.

“It was like living in a darkened home for years, and then finally someone turned on the lights. When Zach and I realized the structure, we knew we were right,” Ono wrote in an e-mail to Wired.com. “We see the same mathematical structures over and over and over again, similar to how you see repeating elements in the Mandelbrot set as you fly through it. That’s why we say they’re fractal,” he said.

In a separate paper, Ono and Jan Hendrik Bruinier of Germany’s Darmstadt Technical University describe a function, deemed “P,” that can churn out any integer’s partition number.

The combined research doesn’t quite reveal a mathematical representation of the universe’s structure, Ono said, but it does kill partition numbers as a way to encrypt computer data.

“Nobody’s ever going to do that now, since we now know partition numbers aren’t random,” Ono said. “They’re completely predictable and we should no longer pretend they’re mysterious.”

The discoveries should help solve similar problems in number theory, but Ono said he’s most excited about closing an exceptionally “frustrating but romantic” chapter in mathematical history.

Via: eScienceCommons


Engineering Solutions to Biomedical Problems.


Reported by Nancy Volkers in Science Careers on December 17, 2010.

Biomedical engineering plays a crucial role in translational research, and degree programs in the discipline are now offered at universities around the world. (See, for example, the June Science Careers article “Designing a Career in Biomedical Engineering.”) At the same time, many classically trained engineers are working at the interface of engineering and medicine. Science Careers spoke to five such scientists whose work involves engineering solutions to human health problems.

Applied fluid dynamics

Some engineers use computer models to test new designs of airplane wings or engines. Alison Marsden applies them to children’s hearts.

Normally, the heart’s right ventricle pumps blood to the lungs and the left ventricle pumps oxygenated blood throughout the body. In children with single ventricle defects, only one of these pumps is functioning. Marsden’s team has developed a variation of the common surgical treatment for this defect, called Fontan reconstruction, that they believe will improve patient outcomes. Their research uses MRIs from patients to create computer models that help surgeons decide between the traditional surgery and their variation. The models also help surgeons determine where to disconnect and reconnect blood vessels and the optimal angles of connection.

Marsden, an assistant professor of mechanical and aerospace engineering at the University of California (UC), San Diego, is creating similar models for other heart conditions and procedures, including Kawasaki disease and coronary artery bypass grafting. The models would allow cardiologists and surgeons to tailor surgery for each patient based on individual anatomy, blood-flow patterns, and other factors. “We want to be able to customize surgery like you would customize dental implants or an orthopedic device,” she says.

The application of fluid dynamics — or any other engineering discipline — to medicine wasn’t on Marsden’s radar until late in her education. Her father, a mathematician, gave her math workbooks that he had devised. Her mother, an avid hiker and photographer, signed her up for science workshops. She gravitated toward mechanical engineering, earning an undergraduate degree at Princeton University and a master’s degree and Ph.D. at Stanford University.

“After my Ph.D., I was trying to decide what to do next,” she says. “I enjoyed the work I had done but was looking for something more people-related.” A conversation with Stanford’s Charles A. Taylor led to a postdoctoral stint in his lab, the Cardiovascular Biomechanics Laboratory in the Department of Bioengineering. “We realized I could apply tools I had to some of the projects ongoing in his lab.”

Although she had no formal training in biology or medicine, Marsden, now 34, found it easy to pick up medical terminology and concepts. “If I had known earlier on that I was going to get into biomedical engineering, I may have taken some formal courses,” she says. “But I found it was easier to pick up the medical terminology than it would have been the other way around — to suddenly have to understand partial differential equations, for example.”

The interdisciplinary nature of her work requires good communication skills and good organizational skills. “You need to communicate with people from many different areas,” she says. “You have to explain your work to someone outside your field, such as a cardiologist. Sometimes I have to walk into a room and talk to a patient’s family about joining a research study.”

Marsden sees biomedical engineering and other interdisciplinary sciences as hot spots for young women. “I think women sometimes get scared away from traditional, focused engineering fields, but they often have skills that are very applicable to fields like bioengineering,” she says. “You may end up managing a team of people, all with different backgrounds, and you have to relate to all of them in some way. I think women can really shine at that.”

From airplane structure to cell mechanics

Stephanie Pulford, 31, can pinpoint the moment biomedical engineering entered her life. While working as an aircraft structural engineer for US Airways, she began reading about bones and how mechanical stress affects their form and function. “You can see that bones form load-bearing structures that line up with the direction of maximum stress,” she says. She became fascinated with the similarities between these structures and structures that humans had engineered for their own use.

That interest led her to the graduate program in mechanical and aerospace engineering at UC Davis, where she is a fourth-year doctoral student. Pulford studies the mechanics of how cells move, in the lab of Alex Mogilner. “I used to take cell movement for granted, but it’s very complicated,” she says. “A cell grabs and pulls and drags and adjusts itself to its surroundings. It’s very much a structural process. There’s a lot that engineering has to offer.”

Pulford treats the cell as a black box, gathering observations from the outside. What forces does a cell put on its environment? What shape does it take? “Then I take an educated guess about what the cell might be doing internally and I write a computer model to simulate it,” she says. “I can test a lot of hypotheses, knock out the ones that don’t work, and see if my models can predict movement.”

Because she came to UC Davis with an undergraduate degree that was “straight-up engineering,” Pulford took classes and attended seminars to strengthen her background in biology. In her second year, Pulford was one of seven scholars in UC Davis’ Integrating Medicine into Basic Science program. The Howard Hughes Medical Institute (HHMI) sponsors 23 of these so-called Med into Grad programs nationwide, designed to introduce biomedical engineering graduate students to clinical medicine.

The program showed her the importance of collaboration and partnership in biology, medicine, and engineering, she says. “I was constantly exposed to new modes of thought, and new problems for engineering to solve. I’d certainly been interested in interdisciplinary work before the HHMI program, but the program gave me a great opportunity to see just how well that kind of scientific cross-pollination works.”

Making it work, Pulford says, requires communication skills. “Within engineering there’s a lexicon, a way of talking about things, and within medical fields you speak a language, too. You need to be able not only to get information but also describe your research in a useful way to people in other fields. You have to be a translator as well as a researcher.”

Putting the “bio” in biomechanics

As an undergraduate student in mechanical engineering at the University of Florida, Marc Levenston was intrigued by biomedical engineering but had only a basic biology background. His Ph.D. research at Stanford focused on computer simulations of how mechanical stresses affect bone growth and adaptive changes around implants. He was doing biomechanics but “with a little ‘b’ and a big ‘M’,” he says. Wanting to pursue more experimental research, he headed to the Massachusetts Institute of Technology for a postdoc to join a lab of engineers who focused on cell and tissue culture experiments.

At Stanford University’s Mechanical Engineering Department, where he is now an associate professor, Levenston, 44, emphasizes the bio side of biomechanics. One of his project areas involves cartilage in the human knee, elucidating the series of events that lead to osteoarthritis. Using tissue culture models of cartilage and the meniscus, Levenston’s team stresses the cells and tissues and studies what happens to them structurally and biochemically — for example, whether different genes are expressed, whether metabolism changes, or whether different types of cartilage cells respond differently to the same stress.

One aim of this work is to find ways to detect the disease at its earliest stages, even before symptoms start. “People study osteoarthritis as a disease of cartilage,” Levenston says. “But does it start in the cartilage or elsewhere in the joint?”

Another of Levenston’s projects examines how cells’ physical and mechanical environment drives the fate of certain types of adult stem cells. “We put them in a 3D environment where we can compress or stretch them, simulating the regional mechanics of what happens in the body,” Levenston explains. “Then we see how that interacts with biochemical cues. … Can the mechanical environment influence what the cell becomes?”

If mechanical stresses can influence cell fate — and Levenston’s research is showing that they can — then one day it may be possible to use these biomechanical manipulations to engineer cartilage, ligament, or muscle. This type of research could also help tissues heal themselves. “Sometimes healing doesn’t happen the way we want it to,” he says. “This research has applications for how to encourage normal healing.”

Despite his firm presence in the world of biology, Levenston still defines himself as an engineer. “I’m not a biologist, but I’m using tools of biologists to probe a mechanical system,” he says. “Engineers are not going to replace biologists, nor should we try to.”

For classically trained engineers, Levenston’s advice is to be open to possibilities outside your field. “You should be willing to consider areas of research you’ve never thought of, which would require skills you’d never thought of,” he says. “The value of having engineers in medicine is that we’re trained in a different way and approach problems in a different way.”

The immune system: “A very complex chemical plant”

Yale University associate professor Tarek Fahmy gave up a Sunday afternoon last month to talk about his research to a room full of reporters. Fahmy, who worked as a chemical engineer at DuPont for 5 years before doing a Ph.D. in immunopathology, had just explained the complex relationship between the lymphatic system and the circulatory system when a reporter interrupted him.

“How much is chemical engineering a part of what you do?” the reporter asked. “It’s all we do,” Fahmy replied. The immune system “is all plumbing. It’s all pumps and pipes and check valves. It’s a very complex chemical plant.”

Fahmy, 38, designs nanosystems that can interact with that chemical plant — the immune system — in different ways. For example, he’s designing nanoparticles that bait specific immune system cells by displaying certain features on their surface; inside, the nanoparticles contain an antigen of a particular microbe. The nanoparticles provide a novel way to vaccinate people against a particular pathogen, such as, say, West Nile virus, without using the pathogen itself. For another project, his lab is creating nanosensors designed to detect whether the immune system is responding to a particular therapy.

Long before moving to Yale, Fahmy earned a bachelor’s degree in chemical engineering at the University of Delaware. He landed a job at DuPont’s Experimental Station doing research on polymers such as polytetrafluoroethylene (better known as Teflon). After a few years, DuPont transferred Fahmy to a manufacturing plant in Parkersburg, West Virginia. He was less interested in the work there and felt geographically isolated.

Around that time, Fahmy’s father was diagnosed with lymphoma. Fahmy’s training as an engineer left him ill-prepared to understand his father’s disease: “I felt helpless really.” Already interested in a job change, Fahmy began to look for graduate programs close to his family that would take on an engineer interested in immunology. He ended up in the molecular biophysics department at Johns Hopkins University in Baltimore, Maryland.

When the time came to start his Ph.D. research, he joined the lab of physician-scientist Jonathan Schneck, professor of pathology, medicine, and oncology at Johns Hopkins School of Medicine. “He was a clinician and knew quite a bit about a lot of things, and he was dealing with this engineer who thought differently and didn’t see things the same way,” Fahmy says.

They were able to find a common interest in studying the sensitivity of T cells, one of the key enforcers of the immune system. Activated T cells, which have been exposed to a particular antigen, are more sensitive and generate an immune response to that antigen more quickly and aggressively than naive T cells. Using modeling techniques he learned as an engineer, Fahmy worked out why: “It turns out that activated cells actually have clustered receptors and naive cells don’t,” he explains. “It’s not a difference in the number of receptors, it’s spatial orientation.”

After finishing his Ph.D., Fahmy moved to Yale for a postdoc with W. Mark Saltzman in the then-new Department of Biomedical Engineering. But he didn’t leave immunology: “The guys who wrote the textbooks were here. So I [thought], I know a little bit of immunology, I know a little bit of engineering, and I really want to talk about developing systems that can modulate antigen presentation or expand T cell populations in a specific direction. And I looked up and saw these people. It was a perfect fit.”

Working in biomedical engineering requires calm and a propensity for planning, Fahmy says. “You’ve taken your field and multiplied it by two. Any kind of garbage can come out, anything that’s good can get diluted,” he says. “It’s very important that we approach this interdisciplinary area with a lot of excitement, but I think it needs to be a kind of wise excitement and directed enthusiasm.”

Kate Travis

Studying the brain’s electrical system

There used to be a widespread assumption that electroencephalography (EEG) could not be used to reliably record complex brain activity, such as that used in movement or thought, from outside the skull. Usually, physicians place electrodes directly on or inside the brain to record these subtle, complex brain waves. José “Pepe” Contreras-Vidal, an associate professor of kinesiology at the University of Maryland, College Park, questioned that assumption.

As he worked on ways to extract complex neural signals from EEG readings, Contreras-Vidal discovered why it had never been done before. “People couldn’t use EEG to reconstruct movement because they were looking for the information in the wrong place. They thought very high frequencies were needed, whereas we now know that changes in the amplitude of the EEG signals in the very low frequencies are the essential components.”

His team found that the relationship between what an electrode can pick up inside the brain and what an EEG records outside the brain is fairly straightforward. They use formulas already well known to engineers to decode the external signal and identify what’s going on inside.
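The article doesn’t name the formulas, but decoding of this kind is commonly set up as a linear estimation problem. Below is a minimal sketch, under assumed data shapes and parameters and not Contreras-Vidal’s actual pipeline: keep only the very low-frequency EEG amplitudes, build lagged features, and fit an ordinary least-squares map to a recorded movement signal.

    import numpy as np
    from scipy.signal import butter, filtfilt

    # Hypothetical data: eeg is (n_samples, n_channels) of scalp recordings,
    # velocity is (n_samples,) of, say, hand velocity measured during training.
    rng = np.random.default_rng(0)
    fs = 100                                   # sampling rate in Hz (assumed)
    eeg = rng.standard_normal((2000, 32))      # stand-in for real recordings
    velocity = rng.standard_normal(2000)

    # 1. Keep only the very low frequencies (< 2 Hz), where the article says the
    #    movement-related information lives, rather than the high frequencies
    #    people used to look for.
    b, a = butter(4, 2 / (fs / 2), btype="low")
    eeg_low = filtfilt(b, a, eeg, axis=0)

    # 2. Build lagged features: each channel's low-frequency amplitude over the
    #    most recent few samples.
    lags = 10
    X = np.array([eeg_low[t - lags:t].ravel() for t in range(lags, len(eeg_low))])
    y = velocity[lags:]

    # 3. Fit an ordinary least-squares linear decoder, the sort of formula
    #    "already well known to engineers" that the article alludes to.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    predicted_velocity = X @ w     # decoded movement estimate from scalp EEG alone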

These advances in external EEG have enabled Contreras-Vidal and his team to develop brain-computer interfaces for people with spinal cord injuries and degenerative nerve diseases. These interfaces pick up signals in the user’s brain, bypass the damaged nerves, and allow the user to literally think his or her way through writing a letter on a computer screen or controlling a prosthetic hand.

Contreras-Vidal trained as an electrical engineer at the Instituto Tecnológico de Monterrey, but his interest in the brain began at a young age in his native Mexico. “My mom died from a brain aneurysm when I was 21,” he says. That left him curious about the human brain, an interest that took hold when he took a course on neural networks as a master’s degree student at the University of Colorado, Boulder. “I thought, this” — studying the complexity and capability of the brain — “might be a place to use what I knew in engineering to open new avenues.”

As a Ph.D. candidate in cognitive and neural systems at Boston University, he began developing large-scale models of the brain that could be used to understand neural mechanisms. A stint as a postdoc at Arizona State University sharpened his focus on movement disorders, particularly Parkinson’s disease. “By then I was combining experimental work with engineering,” he said. “It was a very productive environment.”

Combining tools and concepts from several fields is a priority for Contreras-Vidal, now 46. He was drawn to the University of Maryland partly because its neuroscience program is a collaboration involving 11 departments. “Important things happen at the intersection of knowledge,” he says. “I tell my students that if you specialize in an area, pay attention to related areas and see how best to apply them to your specialty.”

This approach helped Contreras-Vidal upset the assumptions about EEG’s capabilities. Applying engineering principles to biological questions is the essence of biomedical engineering, and Contreras-Vidal believes that classic engineering training is the best preparation. “There are principles in engineering that do not change, but neuroscience changes every day.”

Contreras-Vidal says it comes back to asking why. “Question, push boundaries, come at the problem from a different perspective. I think engineering is a good field to encourage that.”

Reported by Nancy Volkers, a science writer in Vermont. (doi:10.1126/science.caredit.a1000121)


Can electrical jolts to the brain produce Eureka moments?


Reported by Ed Yong in Discover, 2 February 2011.

Image by Polpolux.

Finding those Eureka moments that allow us to solve difficult problems can be an electrifying experience, but rarely like this. Richard Chi and Allan Snyder managed to trigger moments of insight in volunteers, by using focused electric pulses to block the activity in a small part of their brains. After the pulses, people were better at solving a tricky puzzle by thinking outside the box.

This is the latest episode in Snyder’s quest to induce extraordinary mental skills in ordinary people. A relentless eccentric, Snyder has a long-lasting fascination with savants – people like Dustin Hoffman’s character in Rain Man, who are remarkably gifted at tasks like counting objects, drawing in fine detail, or memorising vast sequences of information.

Snyder thinks that everyone has these skills but they’re typically blocked by a layer of conscious thought. By stripping away that layer, using electric pulses or magnetic fields, we could theoretically release the hidden savant in all of us. Snyder has been doggedly pursuing this idea for many years, with the goal of producing a literal “thinking cap”. He has had some success across several studies, but typically involving small numbers of people.

His latest publication continues this theme. He used a “matchstick maths” challenge, in which sticks are arranged to form Roman numerals and mathematical symbols making up a false equation. The player has to move just one stick so that the equation makes sense; for instance, a false statement like IV = III + III can be fixed by moving the lone I to give VI = III + III. There are three categories of such puzzles, and each requires a very different kind of solution.

These problems are challenging because our experiences can blind us to new ways of thinking. Once we learn how to solve one matchstick puzzle, we try to apply the same method to the others. We find it harder to come up with answers that require different lines of thought.

Chi and Snyder got around this problem by literally giving people a jolt to the brain. They asked 60 volunteers to solve the matchstick problems while running a weak electric current across their scalp, targeting an area called the anterior temporal lobe (ATL). In one group, they used the current to increase the activity on the left ATL while reducing the activity of the right half. In the second, they swapped sides. In the third, they turned the current up slightly but rapidly brought it back to zero. In all the cases, they carefully controlled the current so that the volunteers couldn’t feel any noticeable tingling sensations.

After doing 27 variants of the first matchstick problem, where they had to change an X into a V, the volunteers had to solve a problem from the second category. And they did much better with this new problem if Chi and Snyder had enhanced their right ATL while blocking their left. After six minutes, around 60% of them had solved the puzzle. That’s three times the proportion of the other two groups, where only 20% could solve the problem. They got similar results when they tested the volunteers on puzzles from the third category.

These are intriguing experiments, but they can be easily misinterpreted. Chi and Snyder have shown that by stimulating the brain with electricity, they can successfully free the mind from mental blocks or fixed ways of thinking. Snyder quotes the economist John Maynard Keynes who said, “The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify…into every corner of our mind.”

But does this equate to “insight” or “creativity”? Andrea Kuszewski, a neuroscientist who studies creativity, says, “They aren’t actually measuring creativity. They are artificially inducing a ‘clear your head and start over’ type of strategy. But just because you are open to new ideas doesn’t mean you’ll actually get one.”

Nor does this mean that the ATL is the source of Eureka moments. The sort of electrical stimulation that Chi and Snyder used isn’t a precise technique and it’s unlikely that the current only affected the ATL. Arne Dietrich, who studies the neuroscience of creativity at the American University of Beirut, says, “Creativity and insight do not depend on one specific brain area (the light bulb theory, as I call it).”

However, he adds, “It’s important that the duo targeted the ATL. Most other researchers have focused on a different part of the brain called the prefrontal cortex.” Indeed, other scientists have found that people with damage to the prefrontal cortex do better with the varied matchstick problems than those with everything intact. Chi and Snyder want to see if they could get even stronger effects by targeting both areas at the same time.

And what of the fact that Snyder only boosted insight by deactivating the left ATL? He writes that the right half of the brain is linked to insight and novelty. It’s involved in updating old ideas, while the left half is involved in maintaining them. Knock out the left and you let the right do its thing – it can find new ideas because it’s unrestrained by old ones.

But this veers dangerously close to the popular myth that the right brain is creative and artistic while the left is logical and deductive. In truth, virtually every complex thing we do depends on both halves of the brain, working together and complementing one another.

Kuszewski says, “For creative thinking to take place, there needs to be recruitment from both sides, not just the right. Stimulation of the right side (and inhibiting the left) is sort of like a kick in the pants, so your brain stops being so inflexible. That’s really all it does, and it’s temporary. No lasting creative effects.” Indeed, in Chi and Snyder’s experiment, the volunteers also did better at solving the third group of matchstick problems no matter which half of their ATL was shut down.

Dietrich agrees. “There are too many other studies showing the exact opposite, which we have painstakingly documented. This effect depends mostly on the type of insight problem one uses. For verbal tasks, as was the case here, it makes sense that inhibition to the left does the trick. But that can’t be generalized, at all, to insight as a whole.”

All in all, it’s an interesting study especially because it produced such a large improvement. But even Chi and Snyder admit that the results are difficult to interpret. That thinking cap is still a long way off. Dietrich says, “Non-invasive ways of facilitating insightful problem solving, if technologically refined, can be a game changer in many realms of society – think military, business, art or scientific discovery. But this is a long, long way away. The technique is messy, to say the least. So, it is best to stay grounded on this one for now.”

So for now, shooting yourself in the head with a taser isn’t going to turn you into the next Leonardo da Vinci. It might turn you into the next Justin Bieber though…

Read more in Chi, R., & Snyder, A. (2011). Facilitate Insight by Non-Invasive Brain Stimulation. PLoS ONE, 6(2): e16655. DOI: 10.1371/journal.pone.0016655


Dragonfly wings inspire micro wind turbine design.


Reported by Winifred Bird in New Scientist, 02 February 2011.

The future of energy production (Image: Luis Louro/Alamy).

The way a dragonfly remains stable in flight is being mimicked to develop micro wind turbines that can withstand gale-force winds.

Micro wind turbines have to work well in light winds but must avoid spinning too fast when a storm hits, otherwise their generator is overwhelmed. To get round this problem, large turbines use either specially designed blades that stall at high speeds or computerised systems that sense wind speed and adjust the angle of the blade in response. This technology is too expensive for use with micro-scale turbines, though, because they don’t produce enough electricity to offset the cost. That’s where dragonflies come in.
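One way to see why overspeed is such a problem: the power carried by the wind grows with the cube of wind speed, so a gust that triples the speed delivers roughly 27 times the power to the same rotor. A back-of-the-envelope Python sketch with assumed numbers (the rotor radius, air density and power coefficient are illustrative, not taken from the article):

    import math

    def wind_power(v, radius=0.25, rho=1.225, cp=0.3):
        """Mechanical power (watts) a rotor of the given radius might extract from
        wind at speed v (m/s); rho is air density (kg/m^3) and cp a power
        coefficient assumed well below the Betz limit of ~0.59."""
        swept_area = math.pi * radius ** 2
        return 0.5 * rho * swept_area * cp * v ** 3

    # The trial quoted in the article spans roughly 24 to 145 km/h.
    for kmh in (24, 145):
        v = kmh / 3.6                              # convert to m/s
        print(f"{kmh:>3} km/h -> {wind_power(v):7.1f} W available")

    # Because power scales with v**3, the storm-speed wind carries about
    # (145 / 24) ** 3 ~= 220 times more power than the light wind, which is why
    # the blades must stall or deform (here, folding into a cone) rather than
    # simply spinning ever faster into the generator.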

As air flows past a dragonfly’s thin wings, tiny peaks on their surface create a series of swirling vortices. To find out how these vortices affect the dragonfly’s aerodynamics, aerospace engineer Akira Obata of Nippon Bunri University in Oita, Japan, filmed a model dragonfly wing as it moved through a large tank of water laced with aluminium powder. He noticed that the water flowed smoothly around the vortices like a belt running over spinning wheels, with little drag at low speeds.

Obata found that the flow of water around the dragonfly wing is the same at varying low current speeds, but, unlike an aircraft wing, its aerodynamic performance falls drastically as either water speed or the wing’s size increases. As air flow behaves in the same way as water, this would explain the insect’s stability at low speeds, Obata says.

Obata and his colleagues have used this finding to develop a low-cost model of a micro wind turbine whose 25-centimetre-long paper blades incorporate bumps like a dragonfly’s wing. In trials in which the wind speed over the blades rose from 24 to 145 kilometres per hour, the flexible blades bent into a cone instead of spinning faster. The prototype generates less than 10 watts of electricity, which would be enough to recharge cellphones or light LEDs, the researchers say.

“It’s a clever leap,” says David Alexander, a biomechanics specialist at the University of Kansas. “In some ways it’s more appropriate than using an animal wing model for an airplane. A wind turbine blade is just a wing, only it’s designed to go in tight circles.”

But Wei Shyy of the Hong Kong University of Science and Technology believes that while the dragonfly-inspired design may be more stable, it will also experience more energy loss in terms of drag.


By Swapping Just a Few Key Particles, Researchers Atomically Engineer Magnets For Custom Purposes.


Reported by Clay Dillow, in Popular Science, 02.02.2011.

Better Magnets Through Manipulation. (Image: Omegatron via Wikimedia)

In a process much like the materials science equivalent of bioengineering, researchers at the Department of Energy’s Ames Lab have figured out how to replace individual atoms in a solid magnetic compound, much as biologists tweak and replace individual genes to alter organisms. The result is magnets with markedly different properties, all from swapping in a few atoms here and there.

While a few atoms doesn’t sound like much in the grand scheme of things, swapping them precisely into and out of materials is a massive chore. Metallic solids are generally highly symmetrical, with their atoms organizing themselves into a tight and highly ordered crystal lattice. The individual atoms making up any lattice influence its properties, but the researchers found that atoms at certain locations in the lattice have a disproportionate influence over such characteristics as melting point, strength, and even magnetism.

As such, the researchers hypothesized that swapping atoms at these key locations in the lattice might allow them to drastically change its material properties. They chose to work with gadolinium-germanium, a highly symmetrical alloy in which they determined that certain sites occupied by gadolinium atoms were much more active than others in bringing ferromagnetic order to the whole lattice.

Using a method called density functional theory, they were able to determine precisely which atoms in the alloy were good targets for experimentation. By swapping just a few gadolinium atoms in an alloy with lutetium they were able to more or less demagnetize the alloy. Substituting an equal number of lanthanum atoms into those spaces showed no significant effect, though they speculate that substituting even more lanthanum into more places would begin to impact the alloy’s magnetism as well.

Unfortunately, the researchers figured out how to weaken, rather than strengthen, an existing alloy by playing with its atomic structure. But they may have peeked behind the curtain of an important branch of materials science. As supplies of rare earth elements grow scarcer, researchers are struggling to find ways to increase magnetism in various materials to make up for potentially dwindling supplies of rare earth magnets. If we can atomically engineer magnetic solids to shed their magnetism, we may also be able to engineer new alloys with greater magnetism, greater strength, or other favorable properties without relying on rare or environmentally harmful materials and processes.


Special Session on Cellular Automata Algorithms & Architectures (CAAA 2011).


Special Session on Cellular Automata Algorithms & Architectures (CAAA 2011)

Call for Papers

As part of

The 2011 International Conference on High Performance Computing & Simulation
(HPCS 2011)

July 4 – 8, 2011
Istanbul, Turkey

(Submission Deadline:  February 14, 2011)

SCOPE AND OBJECTIVES
This second special session on Cellular Automata Algorithms & Architectures, planned to be held in the framework of the Conference on High Performance Computing & Simulation, provides an international forum for reporting on the state of the art in computational problems, techniques and solutions related to the implementation of cellular automata models and algorithms on fine-grain and coarse-grain architectures.

This CAAA session aims to gather contributions on all the themes addressed in the main parts of the HPCS conference, provided they lie at the intersection of the two areas. In other words, topics of interest include all aspects of cellular automata, but in the context of high performance computing and simulation.

The Special Session topics include (but are not limited to) the following:

  • Cellular Automata Models & Algorithms
  • Local and Global Properties in Cellular Automata Networks
  • Local and Global Communications at Application Level
  • Local Neighborhood and Topology Awareness
  • Routing and Mobile Agents in Cellular Automata
  • Spatio-Temporal and Collision-based Computing Models
  • Synchronization and Related Problems
  • Fine-Grained Parallel Architectures and FPGA
  • Efficient Architectures and Implementations
  • Computational Problems in Large-Scale Models
  • Parallel Environments and Accelerators for Cellular Automata in HPC
  • Regular Graphs for Massive Interconnection Networks


PAPER SUBMISSIONS

You are invited to submit original and unpublished research papers on the above and other topics related to Cellular Automata Algorithms & Architectures.  Submitted papers must not have been published or be simultaneously submitted elsewhere.  Submissions should include a cover page with the authors’ names, affiliation addresses, fax numbers, phone numbers, and email addresses.  Please clearly indicate the corresponding author and include up to 6 keywords from the above list of topics and an abstract of no more than 450 words.  The full manuscript should be at most 7 pages in the two-column IEEE format.  Additional pages will be charged an additional fee.  Please include page numbers on all submissions to make it easier for reviewers to provide helpful comments.  Submit a PDF copy of your full manuscript via email to the special session organizers:

Dominique Désérable:  Dominique.Deserable@insa-rennes.fr
Rolf Hoffmann:  hoffmann@informatik.tu-darmstadt.de
Genaro J. Martínez: genarojm@gmail.com

Only PDF files will be accepted.  Each paper will receive a minimum of three reviews.  Papers will be selected based on their originality, relevance, technical clarity and presentation.  Authors of accepted papers must guarantee that their papers will be registered and presented at the workshop.  Accepted papers will be published in the conference proceedings which will be made available at the time of the meeting.

If you have any questions about paper submission or the special session, please contact the organizers.

A selection of papers from this special session is planned for publication in a special issue of the Journal of Cellular Automata.

Important Dates:

Full Paper Submission Deadline: February 14, 2011
Notification of Acceptance: March 14, 2011
Registration & Camera-Ready Manuscripts Due: April 14, 2011
Conference Dates: July 4 – 8, 2011


WORKSHOP ORGANIZERS

Dominique Désérable

LGCGM – Laboratoire de Génie Civil & Génie Mécanique
INSA – Institut National des Sciences Appliquées
20 Avenue des Buttes de Coësmes
F – 35043 Rennes, France
Phone: +33 (0) 2 23 23 82 00
Fax:     +33 (0) 2 23 23 82 96
Email: Dominique.Deserable@insa-rennes.fr

Rolf Hoffmann

Computer Architecture Group – Department of Computer Science
Technische Universität Darmstadt
Hochschulstraße 10
D – 64289 Darmstadt, Germany
Phone: +49-(0)6151-16-3611/3606
Fax:     +49-(0)6151-16-5410
Email: hoffmann@informatik.tu-darmstadt.de

Genaro Juárez Martínez

International Center of Unconventional Computing
Bristol Institute of Technology, University of the West of England
BS16 1QY, Bristol
United Kingdom
Phone: +44-(0) 117 32 82662
Fax:  +44 (0) 117 32 83680
Email: genarojm@gmail.com