Oct 10

Reported by Neha Okhandiar, Queen Mary University of London News, 21 August 2013.

Certain types of video games can help to train the brain to become more agile and improve strategic thinking, according to scientists from Queen Mary University of London and University College London (UCL).

The researchers recruited 72 volunteers and measured their ‘cognitive flexibility’, described as a person’s ability to adapt, switch between tasks, and think about multiple ideas at once to solve problems.

Two groups of volunteers were trained to play different versions of a real-time strategy game called StarCraft, a fast-paced game where players have to construct and organise armies to battle an enemy. A third group played a life simulation video game called The Sims, which does not require much memory or many tactics.

Mar 18

Reported by Catherine Zandonella, in Princeton University News-Stories, 14 March 2012.

Princeton University researchers have used a novel virtual reality and brain imaging system to detect a form of neural activity underlying how the brain forms short-term memories that are used in making decisions.

By following the brain activity of mice as they navigated a virtual reality maze, the researchers found that populations of neurons fire in distinctive sequences when the brain is holding a memory. Previous research centered on the idea that populations of neurons fire together with similar patterns to each other during the memory period.
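The distinction can be made concrete with a toy calculation. In the sketch below (illustrative made-up data, not the Princeton group's analysis), sequential firing shows up as neurons whose activity peaks tile the delay period, while the "fire together with similar patterns" picture puts every peak in the same time bin:

```python
# Illustrative sketch: telling sequential from simultaneous population
# firing by the spread of each neuron's peak-activity time.
# All numbers here are invented for the example.

def peak_times(population):
    """Return the time bin at which each neuron's activity peaks."""
    return [row.index(max(row)) for row in population]

def firing_spread(population):
    """Range of peak times: large for a sequence, zero for co-firing."""
    peaks = peak_times(population)
    return max(peaks) - min(peaks)

# Five neurons, eight time bins; activity values in arbitrary units.
sequential = [
    [9, 2, 1, 0, 0, 0, 0, 0],   # neuron 0 peaks first...
    [1, 9, 2, 1, 0, 0, 0, 0],
    [0, 1, 9, 2, 1, 0, 0, 0],
    [0, 0, 1, 9, 2, 1, 0, 0],
    [0, 0, 0, 1, 9, 2, 1, 0],   # ...neuron 4 peaks last
]
simultaneous = [[1, 2, 9, 2, 1, 0, 0, 0] for _ in range(5)]

print(firing_spread(sequential))    # 4: peaks tile the memory period
print(firing_spread(simultaneous))  # 0: everyone peaks together
```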

The study was performed in the laboratory of David Tank, who is Princeton’s Henry L. Hillman Professor in Molecular Biology and co-director of the Princeton Neuroscience Institute. Both Tank and Christopher Harvey, who was first author on the paper and a postdoctoral researcher at the time of the experiments, said they were surprised to discover the sequential firing of neurons. The study was published online on March 14 in the journal Nature.

Nov 17

Reported by Anne Trafton, MIT News Office, 15 Nov. 2011.

New computer chip models how neurons communicate with each other at synapses.

CAMBRIDGE, Mass. — For decades, scientists have dreamed of building computer systems that could replicate the human brain’s talent for learning new tasks.

MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain’s neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory.

With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. The researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and could also be used in neural prosthetic devices such as artificial retinas, says Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology.
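What "simulating the activity of a synapse" can mean is illustrated by a standard textbook model (a generic exponential-decay synapse, not the MIT chip's actual analog circuit): the post-synaptic current jumps when a spike arrives and then decays with a time constant.

```python
# Generic illustrative model, not the MIT chip's implementation:
# each incoming spike adds an exponentially decaying current bump.
import math

def synaptic_current(spike_times, t, peak=1.0, tau=5.0):
    """Summed post-synaptic current at time t (ms): every past spike
    contributes peak * exp(-(t - s) / tau)."""
    return sum(peak * math.exp(-(t - s) / tau)
               for s in spike_times if s <= t)

spikes = [10.0, 12.0]                            # two spikes, 2 ms apart
print(round(synaptic_current(spikes, 12.0), 2))  # 1.67
```

The second spike arrives while the first bump has only partly decayed, so the currents sum; this temporal summation is one of the synaptic behaviours a plasticity chip has to reproduce.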

Aug 19

Reported by Dave Mosher in Wired Science, 18 Aug. 2011.

The SyNAPSE cognitive computer chip. The central brown core “is where the action happens,” Modha said. IBM would not release detailed diagrams because the $21 million technology is still in an experimental phase and funded by DARPA.

IBM has unveiled an experimental chip that borrows tricks from brains to power a cognitive computer, a machine able to learn from and adapt to its environment.

Reactions to the computer giant’s press release about SyNAPSE, short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, have ranged from conservative to zany. Some even claim it’s IBM’s attempt to recreate a cat brain from silicon.

“Each neuron in the brain is a processor and memory, and part of a social network, but that’s where the brain analogy ends. We’re not trying to simulate a brain,” said IBM spokeswoman Kelly Sims. “We’re looking to the brain to develop a system that can learn and make sense of environments on the fly.”

The human brain is a vast network of roughly 100 billion neurons sharing 100 trillion connections, called synapses. That complexity makes for more mysteries than answers — how consciousness arises, how memories are stored and why we sleep are all outstanding questions. But researchers have learned a lot about how neurons and their connections underpin the power, efficiency and adaptability of the brain.

To get a better understanding of SyNAPSE and how it borrows from organic neural networks, Wired.com spoke with project leader Dharmendra Modha of IBM Research.

Wired.com: Why do we want computers to learn and work like brains?

Dharmendra Modha: We see an increasing need for computers to be adaptable, to develop functionality today’s computers can’t. Today’s computers can carry out fast calculations. They’re left-brain computers, and are ill-suited for right-brain computation, like recognizing danger, the faces of friends and so on, that our brains do so effortlessly.

The analogy I like to use: You wouldn’t drive a car without half a brain, yet we have been using only one type of computer. It’s like we’re adding another member to the family.

Project leader Dharmendra Modha of IBM Research in front of a “brain wall.”

Wired.com: So, you don’t view SyNAPSE as a replacement for modern computers?

Modha: I see the two systems as complementary. Modern computers are good at some things — they have been with us since ENIAC, and I think they will be with us in perpetuity — but they aren’t well-suited for learning.

A modern computer, in its elementary form, is a block of memory and a processor separated by a bus, a communication pathway. If you want to create brain-like computation, you need to emulate the states of neurons, synapses, and the interconnections amongst neurons in the memory, the axons. You have to fetch neural states from the memory, send them to the processor across the bus, update them, send them back and store them in the memory. It’s a cycle of store, fetch, update, store … and on and on.
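Modha's store-fetch-update cycle can be caricatured in a few lines. The sketch below is a toy model, not IBM's code; every name and number is invented. A dictionary stands in for the memory block, and each simulated tick shuttles every neuron state across the processor/memory boundary:

```python
# Toy von Neumann-style neuron simulation (illustrative only):
# states live in "memory" and must be fetched, updated, and stored
# back on every tick.

memory = {"n0": 0.0, "n1": 0.5, "n2": 1.0}         # neuron states in memory
weights = {("n0", "n1"): 0.3, ("n1", "n2"): -0.2}  # axon-like interconnections

def step(memory, weights):
    """One simulated tick: fetch every state, update it, store it back."""
    fetched = dict(memory)                  # fetch across the "bus"
    for (src, dst), w in weights.items():
        fetched[dst] += w * memory[src]     # update on the "processor"
    return fetched                          # store back into memory

for _ in range(3):                          # ...and on and on
    memory = step(memory, weights)
print(round(memory["n2"], 2))               # 0.7
```

Every tick touches every state, which is why the cycle must run very fast to keep up, and why clock rates keep climbing, as the next answer explains.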

To deliver real-time and useful performance, you have to run this cycle very, very fast. And that leads to ever-increasing clock rates. ENIAC’s was about 100 KHz. In 1978 they were 4.7 MHz. Today’s processors are about 5 GHz. If you want faster and faster clock rates, you achieve that by building smaller and smaller devices.

Wired.com: And that’s where we run into trouble, right?

Modha: Exactly. There are two fundamental problems with this trajectory. The first is that, very soon, we will hit hard physical limits. Mother Nature will stop us. Memory is the next problem. As you shorten the distance between small elements, you leak current at exponentially higher rates. At some point the system isn’t useful.

So we’re saying, let’s go back a few million years instead of to ENIAC. Neurons are about 10 Hz, on average. The brain doesn’t have ever-increasing clock rates. It’s a social network of neurons.

Wired.com: What do you mean by a social network?

Modha: The links between the neurons are synapses, and that’s the important thing — how is your network wired? Who are your friends, and how close are they? You can think of the brain as a massively, massively parallel distributed computation system.

Suppose that you would like to map this computation onto one of today’s computers. They’re ill-suited for this and inefficient, so we’re looking to the brain for a different approach. Let’s build something that looks like that, on a basic level, and see how well that performs. Build a massively, massively, massively parallel distributed substrate. And that means, like in the brain, bringing your memory extremely close to a processor.

It’s like an orange farm in Florida. The trees are the memory, and the oranges are bits. Each of us, we’re the neurons who consume and process them. Now, you could be collecting them and transporting them over long distances, but imagine having your own small, private orange grove. Now you don’t have to move that data over long distances to get it. And your neighbors are nearby with their orange trees. The whole paradigm is a huge sea of synapse-like memory elements. It’s an invisible layer of processing.

Wired.com: In the brain, neural connections are plastic. They change with experience. How can something hard-wired do this?

Modha: The memory holds the synapse-like state, and it can be adapted in real-time to encode correlations, associations and causality or anti-causality. There’s a saying out there, “neurons that fire together, wire together.” The firing of neurons can strengthen or weaken synapses locally. That’s how learning is effected.
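The "fire together, wire together" rule can be sketched as a minimal Hebbian update. This is an illustration of the principle only, not the SyNAPSE chip's circuit; the rate and decay constants are invented:

```python
# Minimal Hebbian-learning sketch (illustrative, not IBM's design):
# a synaptic weight strengthens when pre- and post-synaptic neurons
# are active together, and decays slightly otherwise.

def hebbian_update(weight, pre, post, rate=0.1, decay=0.01):
    if pre and post:                  # co-activity strengthens the synapse
        return weight + rate
    return max(0.0, weight - decay)   # otherwise it slowly weakens

w = 0.5
spikes = [(1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]  # (pre, post) per tick
for pre, post in spikes:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.78: three co-firings (+0.3) minus two decays (-0.02)
```

The point of the hardware is that this update happens locally at each memory element, instead of through the store-fetch-update cycle of a conventional machine.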

Wired.com: So let’s suppose we have a scaled-up learning computer. How do you coax it to do something useful for you?

Modha: This is a platform of technology that is adaptable in ubiquitous, changing environments. Like the brain, there is almost a limitless array of applications. The brain can take information from sight, touch, sound, smell and other senses and integrate them into modalities. By modalities I mean events like speech, walking and so on.

Those modalities, the entire computation, go back to neural connections. Their strength, their location, who is and who is not talking to whom. It is possible to reconfigure some parts of this network for different purposes. Some things are universal to all organisms with a brain — the presence of an edge, textures, colors. Even before you’re born, you can recognize them. They’re natural.

Knowing your mother’s face, through nurture, comes later. Imagine a hierarchy of programming techniques, a social network of chip neurons that talk and can be adapted and reconfigured to carry out tasks you desire. That’s where we’d like to end up with this.

DARPA Synapse Program Plan.

Feb 03

Reported by Ed Yong in Discover, 2 February 2011.

Image by Polpolux.

Finding those Eureka moments that allow us to solve difficult problems can be an electrifying experience, but rarely like this. Richard Chi and Allan Snyder managed to trigger moments of insight in volunteers, by using focused electric pulses to block the activity in a small part of their brains. After the pulses, people were better at solving a tricky puzzle by thinking outside the box.

This is the latest episode in Snyder’s quest to induce extraordinary mental skills in ordinary people. A relentless eccentric, Snyder has a long-lasting fascination with savants – people like Dustin Hoffman’s character in Rain Man, who are remarkably gifted at tasks like counting objects, drawing in fine detail, or memorising vast sequences of information.

Snyder thinks that everyone has these skills but they’re typically blocked by a layer of conscious thought. By stripping away that layer, using electric pulses or magnetic fields, we could theoretically release the hidden savant in all of us. Snyder has been doggedly pursuing this idea for many years, with the goal of producing a literal “thinking cap”. He has had some success across several studies, but typically involving small numbers of people.

His latest publication continues this theme. He used a “matchstick maths” challenge, where several sticks had been arranged to form Roman numerals and mathematical symbols. The player has to rearrange just one stick so the equation makes sense. There are three such puzzles, and each requires a very different solution.
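The core of such a puzzle is checking whether a Roman-numeral equation is arithmetically true. A minimal sketch of that checking step (hypothetical helper names and example equations; the experiment of course used physical sticks, not code):

```python
# Illustrative checker for "matchstick maths" equations such as
# "VI = VII - I". Helper names and the sample equations are invented.

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100}

def roman_value(numeral):
    """Convert a Roman numeral, honouring subtractive pairs like IV."""
    total = 0
    for ch, nxt in zip(numeral, numeral[1:] + " "):
        v = ROMAN[ch]
        total += -v if nxt != " " and ROMAN[nxt] > v else v
    return total

def equation_holds(eq):
    """True if 'A = B + C' or 'A = B - C' is arithmetically correct."""
    left, right = eq.split(" = ")
    if " + " in right:
        b, c = right.split(" + ")
        return roman_value(left) == roman_value(b) + roman_value(c)
    b, c = right.split(" - ")
    return roman_value(left) == roman_value(b) - roman_value(c)

print(equation_holds("IV = III - I"))  # False: a broken starting position
print(equation_holds("VI = VII - I"))  # True: a valid target equation
```

A full solver would enumerate every legal single-stick move and run this check on each candidate; the human difficulty lies in which moves you even consider, which is exactly the mental block the experiment probes.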

These problems are challenging because our experiences can blind us to new ways of thinking. Once we learn how to solve one matchstick puzzle, we try to apply the same method to the others. We find it harder to come up with answers that require different lines of thought.

Chi and Snyder got around this problem by literally giving people a jolt to the brain. They asked 60 volunteers to solve the matchstick problems while running a weak electric current across their scalp, targeting an area called the anterior temporal lobe (ATL). In one group, they used the current to increase the activity on the left ATL while reducing the activity of the right half. In the second, they swapped sides. In the third, they turned the current up slightly but rapidly brought it back to zero. In all the cases, they carefully controlled the current so that the volunteers couldn’t feel any noticeable tingling sensations.

After doing 27 variants of the first matchstick problem, where they had to change an X into a V, the volunteers had to solve a problem from the second category. And they did much better with this new problem if Chi and Snyder had enhanced their right ATL while blocking their left. After six minutes, around 60% of them had solved the puzzle. That’s three times the proportion of the other two groups, where only 20% could solve the problem. They got similar results when they tested the volunteers on puzzles from the third category.

These are intriguing experiments, but they can be easily misinterpreted. Chi and Snyder have shown that by stimulating the brain with electricity, they can successfully free the mind from mental blocks or fixed ways of thinking. Snyder quotes the economist John Maynard Keynes who said, “The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify…into every corner of our mind.”

But does this equate to “insight” or “creativity”? Andrea Kuszewski, a neuroscientist who studies creativity, says, “They aren’t actually measuring creativity. They are artificially inducing a ‘clear your head and start over’ type of strategy. But just because you are open to new ideas doesn’t mean you’ll actually get one.”

Nor does this mean that the ATL is the source of Eureka moments. The sort of electrical stimulation that Chi and Snyder used isn’t a precise technique and it’s unlikely that the current only affected the ATL. Arne Dietrich, who studies the neuroscience of creativity at the American University of Beirut, says, “Creativity and insight do not depend on one specific brain area (the light bulb theory, as I call it).”

However, he adds, “It’s important that the duo targeted the ATL. Most other researchers have focused on a different part of the brain called the prefrontal cortex.” Indeed, other scientists have found that people with damage to the prefrontal cortex do better with the varied matchstick problems than those with everything intact. Chi and Snyder want to see if they can get even stronger effects by targeting both areas at the same time.

And what of the fact that Snyder only boosted insight by deactivating the left ATL? He writes that the right half of the brain is linked to insight and novelty. It’s involved in updating old ideas, while the left half is involved in maintaining them. Knock out the left and you let the right do its thing – it can find new ideas because it’s unrestrained by old ones.

But this veers dangerously close to the popular myth that the right brain is creative and artistic while the left is logical and deductive. In truth, virtually every complex thing we do depends on both halves of the brain, working together and complementing one another.

Kuszewski says, “For creative thinking to take place, there needs to be recruitment from both sides, not just the right. Stimulation of the right side (and inhibiting the left) is sort of like a kick in the pants, so your brain stops being so inflexible. That’s really all it does, and it’s temporary. No lasting creative effects.” Indeed, in Chi and Snyder’s experiment, the volunteers also did better at solving the third group of matchstick problems no matter which half of their ATL was shut down.

Dietrich agrees. “There are too many other studies showing the exact opposite, which we have painstakingly documented. This effect depends mostly on the type of insight problem one uses. For verbal tasks, as was the case here, it makes sense that inhibition to the left does the trick. But that can’t be generalized, at all, to insight as a whole.”

All in all, it’s an interesting study especially because it produced such a large improvement. But even Chi and Snyder admit that the results are difficult to interpret. That thinking cap is still a long way off. Dietrich says, “Non-invasive ways of facilitating insightful problem solving, if technologically refined, can be a game changer in many realms of society – think military, business, art or scientific discovery. But this is a long, long way away. The technique is messy, to say the least. So, it is best to stay grounded on this one for now.”

So for now, shooting yourself in the head with a taser isn’t going to turn you into the next Leonardo da Vinci. It might turn you into the next Justin Bieber though…

Read more in Chi, R., & Snyder, A. (2011). Facilitate Insight by Non-Invasive Brain Stimulation. PLoS ONE, 6(2): e16655. DOI: 10.1371/journal.pone.0016655
