Aug 30

Reported by PhysOrg, 29 Aug. 2011.

Is it time for a communications paradigm shift? Scientists calculate that encoding and sending information via electron spin, instead of voltage changes, may mean tiny chips could transmit more information and consume less power.

Sending information by varying the properties of electromagnetic waves has served humanity well for more than a century, but as our devices steadily shrink, the signals they carry can bleed across wires and interfere with each other, presenting a barrier to further size reductions. A possible solution could be to encode ones and zeros not with voltage but with electron spin, and researchers have now quantified some of the benefits this fresh approach might yield.

In a paper in the AIP’s journal Applied Physics Letters, a team from the University of Rochester and the University at Buffalo has proposed a new communications scheme that would use silicon wires carrying a constant current to drive electrons from a transmitter to a receiver. By changing its magnetization, a contact would inject electron spin (either up or down) into the current at the transmitter end.

Over at the receiver end, a magnet would separate the current based on the spin, and a logic device would register either a one or a zero. The researchers chose silicon because its electrons hold onto their spin for longer than those in other semiconductors. The team calculated the bandwidth and power consumption of a model spin-communication circuit, and found it would transmit more information and use less power than circuits using existing techniques.
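
To make the scheme concrete, here is a cartoon of it in code (not the authors' model, just a toy): bits are injected as bursts of spin-polarized electrons, each spin flips with some small probability in transit (standing in for spin relaxation in the silicon channel), and the receiver's spin filter takes a majority vote. All numbers are invented for illustration.

import random

def transmit_spin(bits, electrons_per_bit=25, p_flip=0.05, seed=1):
    """Toy spin-communication link (illustrative parameters only).

    Each bit is injected as a burst of spin-polarized electrons
    (spin up = 1, spin down = 0). In transit, every spin flips with
    probability p_flip, a stand-in for spin relaxation in the silicon
    channel. The receiver's magnetic filter takes a majority vote.
    """
    rng = random.Random(seed)
    out = []
    for bit in bits:
        spins = [bit if rng.random() > p_flip else 1 - bit
                 for _ in range(electrons_per_bit)]
        out.append(1 if 2 * sum(spins) >= electrons_per_bit else 0)
    return out

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(transmit_spin(message))                 # usually recovers the message
print(transmit_spin(message, p_flip=0.45))    # spins relax too soon: near-random output

Pushing the flip probability toward 0.5 wipes out the signal, which is why silicon's long spin lifetimes matter for the scheme.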

The researchers did find that the latency, or the time it takes information to travel from transmitter to receiver, was longer for the spin-communication circuit, but its other benefits mean the new scheme may one day shape the design of many emerging technologies.

More information: “Silicon spin communication” by Hanan Dery et al. is published in Applied Physics Letters and is also available on arXiv.org.

Aug 29

Reported by Savvas Chatzichristofis, 25 Aug. 2011.

Authors: Savvas A. Chatzichristofis, Yiannis S. Boutalis

This book covers the state of the art in image indexing and retrieval techniques, paying particular attention to recent trends and applications. It presents the basic notions and tools of content-based image description and retrieval, covering all significant aspects of image preprocessing, feature extraction, similarity matching and evaluation methods. Particular emphasis is given to recent computational intelligence techniques for producing compact content-based descriptors comprising color, texture and spatial distribution information. Early and late fusion techniques are also used to improve retrieval results from large, possibly distributed, inhomogeneous databases. The book reports on the basic international standards and provides an updated presentation of current retrieval systems. Numerous utilities and techniques are implemented in software, which is provided as supplementary material under an open-source license agreement. The book is particularly useful for postgraduate students and researchers in the field of image retrieval who want to easily elaborate and test state-of-the-art techniques and possibly incorporate them into their own work.

ISBN-13: 978-3-639-37391-2
ISBN-10: 363937391X
EAN: 9783639373912
Book language: English
Publishing house: VDM Verlag Dr. Müller
Website: http://www.vdm-verlag.de/
Number of pages: 216
Published at: 2011-08-24
Category: Informatics, IT
Price: 79.00 €
More info: More Books or Amazon
Aug 28

Reported by Zoë Corbyn, in Nature News, 19 Aug. 2011.

The creator of a marketplace for scientific research explains how it could transform the enterprise.

Elizabeth Iorns.

Last week, Science Exchange in Palo Alto, California, launched a website allowing scientists to outsource their research to ‘providers’ — other researchers and institutions that have the facilities and equipment to meet requesting scientists’ needs. Nature asked the company’s co-founder, researcher-turned-entrepreneur Elizabeth Iorns, how the website works, and what an online marketplace for experiments could mean for the future of research.

What is Science Exchange?

It is an online marketplace for scientific experiments. Imagine eBay, but for scientific knowledge. You post an experiment that you want to outsource, and scientific service providers submit bids to do the work. The goal is to make scientific research more efficient by making it easy for researchers to access experimental expertise from core facilities with underutilized capacity.

Where did the idea come from?

It was through my work as a breast-cancer biologist at the University of Miami in Florida. I wanted to conduct some experiments outside my field, and realized that I needed an external provider. What followed was an entirely frustrating process, and when I found the provider it was difficult to pay them because they were outside my university’s purchasing system. When I talked to other scientists, it became clear that this was a really big problem, but also one that could be solved with a marketplace. Development of the website started around a kitchen table in Miami in April.

Why would researchers want to participate?

So they can access technologies that their university doesn’t offer; if their own institutional facilities are too busy; if they just generally want to speed up the research process; or if they want a good deal. Prices can vary dramatically: for example, through our platform I have seen bids to perform a microRNA study ranging from US$3,500 to $9,000. Those who do the work can also build reputations independent of their publications by gaining feedback from those they work with.

Why might universities want their facilities to participate?

There are huge budget incentives. It allows institutions to make the most of their existing facilities, which means that they don’t have to subsidize them as much. Also, if researchers can use Science Exchange to access the latest equipment, institutions can be more flexible about when they buy new instruments.

How are you intending to make a profit?

We take a small commission if we match a researcher with a provider and they use us to do the transaction — 5% for projects under $5,000, which is tiny in comparison with what researchers can save by examining prices from multiple providers. For projects costing more than $5,000, it is a lower commission and a sliding scale: we aren’t going to charge $50,000 on a $1-million experiment.

How are you funded?

By Y Combinator, a start-up accelerator programme in Mountain View, California, and angel investors. We have raised $320,000 so far and are looking to raise another $1 million. We have big plans to expand.

What has the response been like?

We launched after a short beta period and the growth is crazy. We now have close to 1,000 scientists using our site and 50–100 signing up every day. More than 70 institutions have providers registered with us, including Stanford University in California, Harvard University in Cambridge, Massachusetts, and, of course, Miami.

Is the service limited to particular regions of the world?

Anyone from anywhere can use it. We had initially thought our focus would be in the United States but we have had a lot of interest from overseas researchers, particularly interest in the facilities that are available here.

Are any similar services available?

We are the first. Directories of core facilities do exist, but they aren’t exhaustive, and they require the researcher to contact the providers themselves. There are other outsourcing services — such as Amazon’s website Mechanical Turk — but they are for unspecialized tasks. Science Exchange providers must have access to specialized equipment and be highly trained.

Where do you hope to be five years from now?

Having created this global marketplace, we want to see an even broader range of experiments outsourced beyond just core facilities. Scientists should be spending their grant money in a way that allows them to specialize and make use of each other’s specializations. Imagine getting paid to do other people’s experiments one day a week, and then using that money to bootstrap your own research.

How disruptive could this be to the way science is done?

Totally disruptive. It could transform the current very inefficient use of funds and dramatically change the way in which scientists do research.

Aug 27

Reported by Danielle Venton, in Wired Science, 26 Aug. 2011

Some worker ants are more equal than others. s.alt/Flickr.

As with other social insects, it was once thought that workers were essentially equivalent in ant colony hierarchies. But it appears that a few well-informed individuals shape group decisions by leading nestmates to new homes.

The findings could add a new dimension to ant-derived models of self-organization.

“Although self-organized systems appear very effective under the assumption that all individuals follow the same simple set of rules, the presence of key, well-informed individuals altering their behavior according to their prior experience might generally enhance performance even further,” wrote biologists from the University of Bristol and the University of Toulouse in an Aug. 24 Journal of Experimental Biology paper.

To study nest-hunting, Nathalie Stroeymeyt and colleagues Nigel Franks and Martin Giurfa collected “house-hunting” ants, or Temnothorax albipennis, from the southern coast of the United Kingdom. These small, light-brown ants make simple sand-enclosed nests in the cracks of rocks.

The 'house hunting' ant, Temnothorax albipennis. Image: Wikimedia.org.

Moving the ants into the lab, Stroeymeyt gave them a well-supplied artificial nest. She then placed an identical empty nest site at the opposite end of the ants’ territory. Each ant’s back was painted with individually identifiable colored spots. Webcams and motion-detection software allowed the researchers to keep track of the movements of specific ants.

One week later, Stroeymeyt placed a second unfamiliar nest site in the territory and destroyed the colony’s original home. Though some ants began to run around randomly in all directions, a few ants that had already explored the alternate nest site headed directly to it.

Those ants then quickly returned to the destroyed nest to recruit followers. They repeated the process until enough had gathered at the new nest site to relocate the entire colony.

Most studies of how ants find new nest sites use colonies unfamiliar with a new territory, and assume that all workers follow the same rules. But that’s not realistic, and as a model for self-organization and distributed decision-making — ants have inspired various forms of traffic coordination, from cars to data — it might not be optimally efficient.

“This begins to change how we think about self-organization,” said Nicola Plowes, a behavioral ecologist and ant specialist at Arizona State University, who was not involved in the research. “Informed individuals making those decisions actually result in a process that is more efficient than a simple homogeneous self-organized system.”

The findings will be exciting for technologists and mathematicians who use insect-based algorithms, Plowes believes.

“Sky Harbor International Airport, for example, uses ant-based algorithms for its baggage carriers,” she said. “Knowing that we can incorporate informed individuals, you can actually make it work better and faster.”
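
Plowes' point is easy to see in a toy simulation (my own construction, not the published model): ants leave a destroyed nest for a new one a fixed distance away, uninformed ants random-walk until they stumble on it, informed scouts head straight there, and every arrival recruits one naive nestmate, a cartoon of tandem running. A handful of informed scouts shortens the emigration dramatically.

import random

def emigration_time(n_ants=50, n_informed=0, distance=20, seed=0):
    """Toy colony emigration on a 1-D track.

    Uninformed ants random-walk from the old nest (position 0) until
    they stumble on the new nest at `distance`. Informed ants already
    know the way and walk straight there; each arriving ant recruits
    one naive nestmate, who then also walks straight to the goal.
    Returns the number of time steps until half the colony has arrived.
    Purely illustrative; not the published model.
    """
    rng = random.Random(seed)
    pos = [0] * n_ants
    informed = [i < n_informed for i in range(n_ants)]
    arrived = [False] * n_ants
    t = 0
    while sum(arrived) < n_ants // 2:
        t += 1
        for i in range(n_ants):
            if arrived[i]:
                continue
            step = 1 if informed[i] else rng.choice([-1, 1])
            pos[i] = max(0, pos[i] + step)
            if pos[i] >= distance:
                arrived[i] = True
                naive = [j for j in range(n_ants) if not informed[j] and not arrived[j]]
                if naive:
                    informed[rng.choice(naive)] = True   # recruit a nestmate
    return t

print("no informed scouts:  ", emigration_time(n_informed=0), "steps")
print("five informed scouts:", emigration_time(n_informed=5), "steps")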

Aug 26

Reported by Bill Hathaway, Yale University, 25 Aug. 2011.

Yale University researchers have successfully re-engineered the protein-making machinery in bacteria, a technical tour de force that promises to revolutionize the study and treatment of a variety of diseases.

“Essentially, we have expanded the genetic code of E. coli, which allows us to synthesize special forms of proteins that can mimic natural or disease states,” said Jesse Rinehart of the Department of Cellular and Molecular Physiology and co-corresponding author of the research published in the August 26 issue of the journal Science.

Since the structure of DNA was revealed in the 1950s, scientists have been working hard to understand the nature of the genetic code. Decades of research and recent advances in the field of synthetic biology have given researchers the tools to modify the natural genetic code within organisms and even rewrite the universal recipe for life.

“What we have done is taken synthetic biology and turned it around to give us real biology that has been synthesized,” Rinehart explained.

The Yale team — under the direction of Dieter Söll, Sterling Professor of Molecular Biophysics and Biochemistry, professor of chemistry and corresponding author of the paper — developed a new way to influence the behavior of proteins, which carry out almost all of life’s functions. Rather than creating something foreign to nature, the researchers harnessed phosphorylation, a fundamental process that occurs in all forms of life and can dramatically change a protein’s function. The rules for protein phosphorylation are not directly coded in the DNA but are instead applied after the protein is made. The Yale researchers fundamentally rewrote these rules by expanding the E. coli genetic code to include phosphoserine, and for the first time directed protein phosphorylation via DNA.
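
In computational terms, "expanding the genetic code" means adding an entry to the codon table. The recoded codon is reported elsewhere to be UAG, the amber stop codon, reassigned to phosphoserine (Sep); the toy translation below shows only what that reassignment means (the real system relies on an engineered tRNA, synthetase and elongation factor, none of which is modeled here).

# Minimal codon-table subset; real cells use all 64 codons.
STANDARD = {"AUG": "Met", "UUU": "Phe", "UCU": "Ser", "GAA": "Glu",
            "AAA": "Lys", "UAG": "STOP", "UAA": "STOP", "UGA": "STOP"}

EXPANDED = dict(STANDARD, UAG="Sep")   # amber codon reassigned to phosphoserine

def translate(mrna, table):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = table[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

gene = "AUGUUUUAGGAAAAAUAA"             # contains an in-frame UAG
print(translate(gene, STANDARD))        # Met-Phe (terminates at UAG)
print(translate(gene, EXPANDED))        # Met-Phe-Sep-Glu-Lys

With the expanded table, a UAG placed anywhere in a gene becomes an instruction to install phosphoserine at that exact position, which is what "directing phosphorylation via DNA" amounts to.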

This new technology now enables the production of human proteins with their naturally occurring phosphorylation sites, a state crucial to understanding disease processes. Previously, scientists lacked the ability to study proteins in their phosphorylated or active state. This has hindered research in diseases such as cancer, which is marked by damagingly high levels of protein activation.

“What we are doing is playing with biological switches — turning proteins on or off — which will give us a completely new way to study disease states and hopefully guide the discovery of new drugs,” Rinehart said.

“We had to give some very ancient proteins a few modern upgrades,” Söll said.

Söll and Rinehart now are attempting to create proteins in states known to be linked to cancer, type 2 diabetes, and hypertension. Both men, however, stressed the technique can be done for any type of protein.

“Dr. Söll and his colleagues have provided researchers with a powerful new tool to use in uncovering how cells regulate a broad range of processes, including cell division, differentiation and metabolism,” said Michael Bender, who oversees protein synthesis grants at the National Institute of General Medical Sciences of the National Institutes of Health.

Other authors from Yale are lead authors Hee-Sung Park and Michael J. Hohn, along with Takuya Umehara and Li-Tao Guo. They collaborated with Edith M. Osborne, Jack Benner, and Christopher J. Noren of New England Biolabs.

The work was funded by grants from the National Science Foundation and the National Institutes of Health via the National Institute of General Medical Sciences and the National Institute of Diabetes and Digestive and Kidney Diseases.

Aug 25

Reported by Brandon Keim, in Wired Science, 22 Aug. 2011.

Reconstructed cross-section image of a foil ball approximately 4 inches in diameter. (Menon & Cambou/PNAS)

Take a piece of paper. Crumple it. Before you sink a three-pointer in the corner wastebasket, consider that you’ve just created an object of extraordinary mathematical and structural complexity, filled with mysteries that physicists are just starting to unfold.

“Crush a piece of typing paper into the size of a golf ball, and suddenly it becomes a very stiff object. The thing to realize is that it’s 90 percent air, and it’s not that you designed architectural motifs to make it stiff. It did it itself,” said physicist Narayan Menon of the University of Massachusetts Amherst. “It became a rigid object. This is what we are trying to figure out: What is the architecture inside that creates this stiffness?”

Menon’s expedition into the shadowy heart of a crumpled sheet — of aluminum foil, to be precise — was undertaken with fellow Amherst physicist Anne Dominique Cambou and published in an August 23 Proceedings of the National Academy of Sciences article. The pair think they’ve mapped the mathematical underpinnings of its rigidity.

The geometry of a conically distorted sheet of paper, painted and viewed through cross-polarized lenses that reveal subtle variations in wavelengths of reflected light. Cerda et al./Nature.

Of course, it may seem surprising that a balled-up sheet of paper or foil should contort itself beyond knowledge. But Menon noted that when physicists finally described the precise dynamics of conical crumpling, which you can achieve by laying a sheet of paper over a coffee cup and poking down with one finger, it was regarded as a mathematical tour-de-force.

A crumpled cone is a far simpler example of the tendencies that produce a crumpled ball: when a flat plane is subjected to distortional stress but only permitted to bend, not stretch, it transforms suddenly and unpredictably into a landscape of folds and facets, each representing an entirely new surface. It’s what researchers call a “far from equilibrium” process, guided by strange rules and non-linear effects. The mechanics of an individual crease are understood, but when physicists try to predict where that crease will appear or how it will influence the next, understanding goes dim.

Trying to peer inside a crumpled ball by simulating the process in three dimensions is “mathematically nasty,” a problem that quickly pushes lab-grade computers to their limits, said Menon. And trying to reverse-engineer structure from patterns revealed upon unfolding just isn’t possible. What happens in a crumpled ball stays in a crumpled ball.

“If you’re not talking about simulation, but mathematical understanding of these things, that’s one step harder,” said Menon. “We understand the underlying equations of the mechanics of a thin sheet very well. Those have been around for a century. But solving those equations, to produce a physical understanding, is difficult even in simple cases. If you’re talking about a structure that owes its properties to 1,000 or more of these structures, interacting in complicated ways, that’s asking more than we can do now.”
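
For reference, the century-old thin-sheet equations Menon alludes to are presumably the Föppl–von Kármán plate equations (the article does not name them). In a standard form, with w the out-of-plane deflection and χ the Airy stress function of the in-plane stresses:

\[
D\,\nabla^4 w = h\left(
\frac{\partial^2 \chi}{\partial y^2}\,\frac{\partial^2 w}{\partial x^2}
+ \frac{\partial^2 \chi}{\partial x^2}\,\frac{\partial^2 w}{\partial y^2}
- 2\,\frac{\partial^2 \chi}{\partial x \partial y}\,\frac{\partial^2 w}{\partial x \partial y}
\right) + P,
\qquad
\nabla^4 \chi = -E\left(
\frac{\partial^2 w}{\partial x^2}\,\frac{\partial^2 w}{\partial y^2}
- \left(\frac{\partial^2 w}{\partial x \partial y}\right)^2
\right).
\]

Here D = Eh^3 / 12(1 − ν^2) is the bending stiffness of a sheet of thickness h with Young's modulus E and Poisson ratio ν, and P is the external load. The first equation couples bending to in-plane stress; the second ties that stress to the sheet's Gaussian curvature. The nonlinear terms are what make even a single developing crease hard to solve, let alone a thousand interacting ones.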

To look into crumpled balls, Menon and Cambou used X-ray microtomography, an imaging technique that, like a medical CT scan, assembles three-dimensional images from thousands of two-dimensional, cross-section snapshots. They imaged dozens of balls of different sizes, searching for statistical patterns in their internal geometries.

Internal snapshot of a simulated crumpled plastic sheet. Tallinen et al./Nature

They found that a crumpled ball is most dense in its outer regions, and least dense in its core. Once inside its folds, there’s no way of knowing from their shape which direction is out and which is in (as, for example, one can determine from an onion, which has layers of skin arranged in curves parallel to its outer surface). “If I was a creature that lived inside this ball, could I make my way out by looking at the way things are arranged? The answer is no,” said Menon.

When he and Cambou studied arrangements of creases and folds, they found a distinctive pattern. Planes often lie flat against other planes. “It’s a fairly uniform object, though you’ve created it by a random, not-so-uniform process,” said Menon. “That’s the most surprising thing. There is no real geometrical reason why things should stack and layer in that way.” But if the researchers don’t know why this happens, they can speculate as to its effect: strength.

Multiple layers of a thin sheet soon become walls. Per the lack-of-orientation observation, these walls are aligned in thousands of random directions. Press from any angle and you’re pressing against the ends of columns. “It can resist being crushed in all different directions,” said Menon.

To explore why this happens, he and Cambou are now using transparent plastic sheets to make three-dimensional movies of crumpling. The implications extend far beyond Menon’s lab. “You’ve heard of crumple zones,” he said. “I’m just as interested in understanding leaves, or thin membranes of animal tissue, or the conformation of the Earth’s crust when it’s folded into mountains. I love it that these simple-looking problems are so nasty sometimes.”

Aug 23

Reported by ScienceDaily, 16 Aug. 2011.

By combining two frontier technologies, spintronics and straintronics, a team of researchers from Virginia Commonwealth University has devised perhaps the world’s most miserly integrated circuit. Their proposed design runs on so little energy that batteries are not even necessary; it could run merely by tapping the ambient energy from the environment. Rather than the traditional charge-based electronic switches that encode the basic 0s and 1s of computer lingo, spintronics harnesses the natural spin — either up or down — of electrons to store bits of data.

Spin one way and you get a 0; switch the spin the other way — typically by applying a magnetic field or by a spin-polarized current pulse — and you get a 1. During switching, spintronics uses considerably less energy than charge-based electronics. However, when ramped up to usable processing speeds, much of that energy savings is lost in the mechanism through which the energy from the outside world is transferred to the magnet.

The solution, as proposed in the AIP’s journal Applied Physics Letters, is to use a special class of composite structure called multiferroics. These composite structures consist of a layer of piezoelectric material in intimate contact with a magnetostrictive nanomagnet (one that changes shape in response to strain). When a tiny voltage is applied across the structure, it generates strain in the piezoelectric layer, which is then transferred to the magnetostrictive layer. This strain rotates the direction of magnetization, achieving the flip. With the proper choice of materials, the energy dissipated can be as low as 0.4 attojoules, or about a billionth of a billionth of a joule. This proposed design would create an extremely low-power, yet high-density, non-volatile magnetic logic and memory system.
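
To put 0.4 attojoules in perspective, a quick back-of-the-envelope comparison (mine, not the article's) against the Landauer limit, the thermodynamic minimum energy to erase one bit at room temperature:

import math

k_B = 1.380649e-23                    # Boltzmann constant, J/K
T = 300.0                             # room temperature, K
landauer = k_B * T * math.log(2)      # ~2.9e-21 J per bit erased

switch = 0.4e-18                      # 0.4 attojoule per flip, as reported
print(f"Landauer limit:  {landauer:.2e} J")
print(f"Reported switch: {switch:.2e} J (~{switch / landauer:.0f}x the thermodynamic floor)")

That puts the proposed switch only about two orders of magnitude above the absolute physical floor.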

The processors would be well suited for implantable medical devices and could run on energy harvested from the patient’s body motion. They also could be incorporated into buoy-mounted computers that would harvest energy from sea waves, among other intriguing possibilities.

Reference: Kuntal Roy, Supriyo Bandyopadhyay, Jayasimha Atulasimha. Hybrid spintronics and straintronics: A magnetic technology for ultra low energy computing and signal processing. Applied Physics Letters, 2011; 99 (6): 063108 DOI: 10.1063/1.3624900

Aug 22

Reported by Liz Ahlberg, Physical Sciences Editor, in University of Illinois News Bureau, 11 Aug. 2011.

An ultrathin electronic patch with the mechanics of skin, applied to the wrist for EMG and other measurements. Photo courtesy John Rogers.

Engineers have developed a device platform that combines electronic components for sensing, medical diagnostics, communications and human-machine interfaces, all on an ultrathin skin-like patch that mounts directly onto the skin with the ease, flexibility and comfort of a temporary tattoo.

Led by John A. Rogers, the Lee J. Flory-Founder professor of engineering at the University of Illinois, the researchers described their novel skin-mounted electronics in the Aug. 12 issue of the journal Science.

The circuit bends, wrinkles and stretches with the mechanical properties of skin. The researchers demonstrated their concept through a diverse array of electronic components mounted on a thin, rubbery substrate, including sensors, LEDs, transistors, radio frequency capacitors, wireless antennas, and conductive coils and solar cells for power.

“We threw everything in our bag of tricks onto that platform, and then added a few other new ideas on top of those, to show that we could make it work,” said Rogers, a professor of materials science and engineering, of chemistry, of mechanical science and engineering, of bioengineering and of electrical and computer engineering. He also is affiliated with the Beckman Institute for Advanced Science and Technology, and with the Frederick Seitz Materials Research Laboratory at U. of I.

The circuits’ filamentary serpentine shape allows them to bend, twist, scrunch and stretch while maintaining functionality. Photo courtesy John Rogers

The patches are initially mounted on a thin sheet of water-soluble plastic, then laminated to the skin with water – just like applying a temporary tattoo. Alternatively, the electronic components can be applied directly to a temporary tattoo itself, providing concealment for the electronics.

“We think this could be an important conceptual advance in wearable electronics, to achieve something that is almost unnoticeable to the wearer,” said U. of I. electrical and computer engineering professor Todd Coleman, who co-led the multi-disciplinary team. “The technology can connect you to the physical world and the cyberworld in a very natural way that feels very comfortable.”

Skin-mounted electronics have many biomedical applications, including EEG and EMG sensors to monitor nerve and muscle activity. One major advantage of skin-like circuits is that they don’t require conductive gel, tape, skin-penetrating pins or bulky wires, which can be uncomfortable for the user and limit coupling efficiency. They are much more comfortable and less cumbersome than traditional electrodes and give the wearers complete freedom of movement.

“If we want to understand brain function in a natural environment, that’s completely incompatible with EEG studies in a laboratory,” said Coleman, now a professor at the University of California at San Diego. “The best way to do this is to record neural signals in natural settings, with devices that are invisible to the user.”

Monitoring in a natural environment during normal activity is especially beneficial for continuous monitoring of health and wellness, cognitive state or behavioral patterns during sleep.

In addition to gathering data, skin-mounted electronics could provide the wearers with added capabilities. For example, patients with muscular or neurological disorders, such as ALS, could use them to communicate or to interface with computers. The researchers found that, when applied to the skin of the throat, the sensors could distinguish muscle movement for simple speech. The researchers have even used the electronic patches to control a video game, demonstrating the potential for human-computer interfacing.

Rogers’ group is well known for its innovative stretchable, flexible devices, but creating devices that could comfortably contort with the skin required a new fabrication paradigm.

“Our previous stretchable electronic devices are not well-matched to the mechanophysiology of the skin,” Rogers said. “In particular, the skin is extremely soft, by comparison, and its surface can be rough, with significant microscopic texture. These features demanded different kinds of approaches and design principles.”

Rogers collaborated with Northwestern University engineering professor Yonggang Huang and his group to tackle the difficult mechanics and materials questions. The team developed a device geometry they call filamentary serpentine, in which the circuits for the various devices are fabricated as tiny, squiggled wires. When mounted on thin, soft rubber sheets, the wavy, snakelike shape allows them to bend, twist, scrunch and stretch while maintaining functionality.

“The blurring of electronics and biology is really the key point here,” Huang said. “All established forms of electronics are hard, rigid. Biology is soft, elastic. It’s two different worlds. This is a way to truly integrate them.”

The researchers used simple adaptations of techniques used in the semiconductor industry, so the patches are easily scalable and manufacturable. The device company mc10, which Rogers co-founded, already is working to commercialize certain versions of the technology.

Next, the researchers are working to integrate the various devices mounted on the platform so that they work together as a system, rather than individually functioning devices, and to add Wi-Fi capability.

“The vision is to exploit these concepts in systems that have self-contained, integrated functionality, perhaps ultimately working in a therapeutic fashion with closed feedback control based on integrated sensors, in a coordinated manner with the body itself,” Rogers said.

The National Science Foundation and the Air Force Research Laboratory supported this work.

Aug 21

Reported by Veronique Greenwood, in Discover Magazine, 18 Aug. 2011.

Pills... by psyberartist / flickr

What’s the News: For all the testing we do, drugs are still mysterious things—they can activate pathways we never connected with them or twiddle the dials in some far-off part of the body. To see if drugs already FDA-approved for certain diseases could be used to treat other conditions, scientists lined up two online databases and discovered two drugs that, when tested in mice, worked against diseases they’d never been meant for. The result suggests that mining such information could be a fertile strategy for finding new treatments.

How the Heck:

What’s the Context:

  • Searching FDA-approved drug databases for effects that can be brought to bear on other illnesses isn’t that unusual in chemistry. Many scientists begin studies this way.
  • But what’s nice about this study is that one of the databases, the Gene Expression Omnibus, is crowdsourced: researchers have been adding information to it, bit by bit, for over a decade, and it’s available for free. Generally, free databases that have accreted over time aren’t considered the most reliable datasets, but as this study shows, they can get the job done.
  • Having the two databases pull from each other is a nice touch as well—most studies are just looking to work on a single, specific disease, but here, any combination of drug and disease is up for investigation.
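
In the Sirota et al. study cited below, the matching comes down, in essence, to scoring whether a drug's gene-expression signature runs opposite to a disease's. A deliberately crude sketch of that anti-correlation idea, with invented gene names and fold-changes:

# Crude illustration of signature-reversal scoring: a drug is a candidate
# for a disease if genes the disease pushes up, the drug pushes down, and
# vice versa. Gene names and numbers are invented; the real analyses use
# ranked genome-wide signatures and permutation statistics.

disease = {"GENE_A": +2.1, "GENE_B": +1.4, "GENE_C": -1.8, "GENE_D": -0.9}
drugs = {
    "drug_X": {"GENE_A": -1.9, "GENE_B": -1.1, "GENE_C": +1.5, "GENE_D": +0.7},
    "drug_Y": {"GENE_A": +1.2, "GENE_B": +0.8, "GENE_C": -1.0, "GENE_D": -0.4},
}

def reversal_score(disease_sig, drug_sig):
    """Negative dot product over shared genes: higher means stronger reversal."""
    shared = set(disease_sig) & set(drug_sig)
    return -sum(disease_sig[g] * drug_sig[g] for g in shared)

for name, sig in drugs.items():
    print(name, round(reversal_score(disease, sig), 2))
# drug_X scores high (it reverses the disease signature); drug_Y does not.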

Not So Fast: These particular drugs would need quite a bit more testing to see if they could be useful for these illnesses in humans. As one computational chemical biologist said to ScienceNOW, “Topiramate hits quite a lot of targets and has complex side effects, while the doses needed for functional effects for cimetidine seemed high,” though he still praised the study’s goals: “This is a really important concept; it is almost like they are looking for an antidote to a disease.”

The Future Holds: Unfortunately, through a quirk of the incentive system in pharmaceuticals, it’s unlikely that companies that first developed these drugs will invest the time and money required to test them for new uses: their patents have expired, so the companies don’t stand to profit from it. But perhaps drugs still under patent, or drugs just beginning to be tested, could be explored this way. With new drugs few and far between these days, re-purposing old ones could be a way for drug companies to fund further research.

Reference: Dudley et al. Computational Repositioning of the Anticonvulsant Topiramate for Inflammatory Bowel Disease. Science Translational Medicine. 17 August 2011: Vol. 3, Issue 96, p. 96ra76 DOI: 10.1126/scitranslmed.3002648

Sirota et al. Discovery and Preclinical Validation of Drug Indications Using Compendia of Public Gene Expression Data. Science Translational Medicine. 17 August 2011: Vol. 3, Issue 96, p. 96ra77. DOI: 10.1126/scitranslmed.3001318

Aug 19

Reported by Dave Mosher in Wired Science, 18 Aug. 2011.

The SyNAPSE cognitive computer chip. The central brown core “is where the action happens,” Modha said. IBM would not release detailed diagrams because the $21 million technology is still in an experimental phase and funded by DARPA.

IBM has unveiled an experimental chip that borrows tricks from brains to power a cognitive computer, a machine able to learn from and adapt to its environment.

Reactions to the computer giant’s press release about SyNAPSE, short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, have ranged from conservative to zany. Some even claim it’s IBM’s attempt to recreate a cat brain from silicon.

“Each neuron in the brain is a processor and memory, and part of a social network, but that’s where the brain analogy ends. We’re not trying to simulate a brain,” said IBM spokeswoman Kelly Sims. “We’re looking to the brain to develop a system that can learn and make sense of environments on the fly.”

The human brain is a vast network of roughly 100 billion neurons sharing 100 trillion connections, called synapses. That complexity makes for more mysteries than answers — how consciousness arises, how memories are stored and why we sleep are all outstanding questions. But researchers have learned a lot about how neurons and their connections underpin the power, efficiency and adaptability of the brain.

To get a better understanding of SyNAPSE and how it borrows from organic neural networks, Wired.com spoke with project leader Dharmendra Modha of IBM Research.

Wired.com: Why do we want computers to learn and work like brains?

Dharmendra Modha: We see an increasing need for computers to be adaptable, to develop functionality today’s computers can’t. Today’s computers can carry out fast calculations. They’re left-brain computers, and are ill-suited for right-brain computation, like recognizing danger, the faces of friends and so on, that our brains do so effortlessly.

The analogy I like to use: You wouldn’t drive a car without half a brain, yet we have been using only one type of computer. It’s like we’re adding another member to the family.

Project leader Dharmendra Modha of IBM Research in front of a “brain wall.”

Wired.com: So, you don’t view SyNAPSE as a replacement for modern computers?

Modha: I see each system as complementary. Modern computers are good at some things — they have been with us since ENIAC, and I think they will be with us for perpetuity — but they aren’t well-suited for learning.

A modern computer, in its elementary form, is a block of memory and a processor separated by a bus, a communication pathway. If you want to create brain-like computation, you need to emulate the states of neurons, synapses, and the interconnections amongst neurons in the memory, the axons. You have to fetch neural states from the memory, send them to the processor across the bus, update them, send them back and store them in the memory. It’s a cycle of store, fetch, update, store … and on and on.
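
That store-fetch-update-store loop is just how one would simulate a network of neurons on a conventional machine. A schematic sketch (mine, not IBM's code) makes the data movement explicit: every time step, the entire neural state crosses the memory-processor bus twice, and all numbers here are arbitrary.

import random

random.seed(0)
N = 200
# "memory": the neural state and synaptic weights live here
voltage = [0.0] * N
weights = [[random.random() * 0.02 for _ in range(N)] for _ in range(N)]

def step(threshold=1.0, leak=0.9):
    v = list(voltage)                      # fetch the state over the bus ...
    spikes = [x >= threshold for x in v]
    for i in range(N):                     # ... update it in the processor ...
        drive = sum(weights[i][j] for j in range(N) if spikes[j])
        ext = random.random() * 0.5        # arbitrary external input
        v[i] = (0.0 if spikes[i] else leak * v[i]) + drive + ext
    voltage[:] = v                         # ... and store it back over the bus.
    return sum(spikes)

print([step() for _ in range(10)])         # spikes per step; real-time use needs this loop to run very fast

The approach Modha describes next removes those round trips by putting each neuron's synaptic memory right next to its processing element.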

To deliver real-time and useful performance, you have to run this cycle very, very fast. And that leads to ever-increasing clock rates. ENIAC’s was about 100 KHz. In 1978 they were 4.7 MHz. Today’s processors are about 5 GHz. If you want faster and faster clock rates, you achieve that by building smaller and smaller devices.

Wired.com: And that’s where we run into trouble, right?

Modha: Exactly. There are two fundamental problems with this trajectory. The first is that, very soon, we will hit hard physical limits. Mother nature will stop us. Memory is the next problem. As you shorten the distance between small elements, you leak current at exponentially higher rates. At some point the system isn’t useful.

So we’re saying, let’s go back a few million years instead of ENIAC. Neurons are about 10 Hz, on average. The brain doesn’t have ever-increasing clock rates. It’s a social network of neurons.

Wired.com: What do you mean by a social network?

Modha: The links between the neurons are synapses, and that’s the important thing — how is your network wired? Who are your friends, and how close are they? You can think of the brain as a massively, massively parallel distributed computation system.

Suppose that you would like to map this computation onto one of today’s computers. They’re ill-suited for this and inefficient, so we’re looking to the brain for a different approach. Let’s build something that looks like that, on a basic level, and see how well that performs. Build a massively, massively, massively parallel distributed substrate. And that means, like in the brain, bringing your memory extremely close to a processor.

It’s like an orange farm in Florida. The trees are the memory, and the oranges are bits. Each of us, we’re the neurons who consume and process them. Now, you could be collecting them and transporting them over long distances, but imagine having your own small, private orange grove. Now you don’t have to move that data over long distances to get it. And your neighbors are nearby with their orange trees. The whole paradigm is a huge sea of synapse-like memory elements. It’s an invisible layer of processing.

Wired.com: In the brain, neural connections are plastic. They change with experience. How can something hard-wired do this?

Modha: The memory holds the synapse-like state, and it can be adapted in real-time to encode correlations, associations and causality or anti-causality. There’s a saying out there, “neurons that fire together, wire together.” The firing of neurons can strengthen or weaken synapses locally. That’s how learning is effected.
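
"Fire together, wire together" has a textbook form, Hebbian learning with decay, sketched below. This is only the classic rule, not the chip's actual on-chip learning circuit, whose details the article does not give.

def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """Classic Hebbian update: strengthen the synapse when the pre- and
    post-synaptic neurons are active in the same step, and let it decay
    slowly otherwise. Illustrative only."""
    return w + lr * pre * post - decay * w

w = 0.2
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]   # (pre, post) spikes per step
for pre, post in activity:
    w = hebbian_step(w, pre, post)
    print(f"pre={pre} post={post} -> w={w:.3f}")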

Wired.com: So let’s suppose we have a scaled-up learning computer. How do you coax it to do something useful for you?

Modha: This is a platform of technology that is adaptable in ubiquitous, changing environments. Like the brain, there is almost a limitless array of applications. The brain can take information from sight, touch, sound, smell and other senses and integrate them into modalities. By modalities I mean events like speech, walking and so on.

Those modalities, the entire computation, go back to neural connections. Their strength, their location, who is and who is not talking to whom. It is possible to reconfigure some parts of this network for different purposes. Some things are universal to all organisms with a brain — the presence of an edge, textures, colors. Even before you’re born, before any learning, you can recognize them. They’re natural.

Knowing your mother’s face, through nurture, comes later. Imagine a hierarchy of programming techniques, a social network of chip neurons that talk and can be adapted and reconfigured to carry out tasks you desire. That’s where we’d like to end up with this.

DARPA Synapse Program Plan.
