Jan 31

Reported by Jermey N. A. Matthews in Physics Today, January 2011

Businesses, nonprofit organizations, and the White House are betting on K–12 STEM teachers to forestall the “gathering storm” forecasted by the National Academies.

For US Education Secretary Arne Duncan, the 2009 Program for International Student Assessment (PISA) scores, released last month, were a wakeup call. Once again, US high-school students ranked near the middle of the pack in science, math, and reading, the three areas of focus for the PISA, which tested 15-year-olds in more than 75 countries. In a press release, Duncan said that a slight increase in US students’ science ranking from below average is “not much to celebrate.” For a knowledge economy, “being average in science is a mantle of mediocrity.”

An astronaut exploring Mars is shown in this drawing by a third-grade student at Randolph Elementary School in Arlington, Virginia. The student’s teacher, Matthew Tosiello, is a graduate of the Sally Ride Science Academy, which provides no-cost professional development workshops to elementary and middle-school teachers. (Image courtesy of Randolph Elementary School, VA.)

Duncan also took aim at what he felt distinguished perennial PISA front-runners South Korea and Finland from the US. “[Their practices] show clearly that America has to do much more to elevate the teaching profession, from the recruitment and training of teachers to their evaluation and professional development.”

Duncan’s response reflects the Obama administration’s support for initiatives that aim to recruit and develop K–12 math and science teachers. Last September, President Obama unveiled his administration’s plan to recruit more than 10 000 new science, technology, engineering, and math (STEM) teachers in the next two years. He also announced the launch of Change the Equation, or CTEq, a nonprofit coalition of more than 100 businesses working together on STEM education outreach.

Educate to Innovate

The teacher-recruitment campaign was inspired by a new report on STEM education from the President’s Council of Advisors on Science and Technology (PCAST) and will be coordinated through http://www.teach.gov, a Department of Education website. “Improve STEM teaching” is listed as one of the three “critical” goals of CTEq; the nonprofit coalition supports teacher-development programs such as the Sally Ride Science Academy, which gives elementary- and middle-school teachers tools for encouraging students to pursue math and science careers.

The PCAST report also urges federal support for a national master teachers corps that “recognizes, rewards, and engages the best STEM teachers and elevates the status of the profession.” Among other recommendations is that corps members’ salaries be supplemented by about $15 000 per year. The administration is taking the PCAST recommendations seriously, says physics Nobel laureate Carl Wieman, who was recently appointed associate director for science at the White House Office of Science and Technology Policy. “Of all the recommendations, the idea of developing more and better-trained STEM teachers is one clearly embraced by everyone, from the president on down to me.”

Physics Nobel laureate Carl Wieman, a long-time education activist in science, technology, engineering, and math, now heads the science office at the White House Office of Science and Technology Policy, which is in the process of responding to a report on STEM education from the President’s Council of Advisors on Science and Technology. (Image courtesy of the White House.)

The PCAST recommendations echo those in the 2005 National Academies’ Rising Above the Gathering Storm report, which highlighted the nation’s low international standing in math and science education. In September 2010, the Academies released an update—subtitled “Rapidly Approaching Category 5”—that praises the bipartisan passage in 2007 of the America COMPETES Act for promising more funding for science research and education (see Physics Today, September 2007, page 34). But the Gathering Storm update also exposes an educational system that is still lagging and that has produced, for the first time in US history, a generation less educated than the previous one.

The industry coalition CTEq is one of several initiatives in the Obama administration’s “Educate to Innovate” campaign, which also includes an annual White House Science Fair and National Lab Day, an annual celebration preceded by year-round collaborations of citizens with STEM teachers on classroom projects. Teacher recruitment and development programs are supported by CTEq members through funds or volunteers. For example, Agilent Technologies, Amgen, and Bayer fund the NSTA New Science Teacher Academy, managed by the National Science Teachers Association, to provide new teachers with mentors and other resources. “Our impact per dollar is much greater when we focus on getting science teachers more excited and competent in what they’re doing,” says Lynn Nixon, global education program manager at Agilent Technologies Foundation.

Beyond the officially sanctioned White House efforts, several other teacher-focused initiatives stand as model programs. They include the Center for Nanoscale Systems’ Institute for Physics Teachers (CIPT) at Cornell University, which since 2001 has been offering high-school physics teachers summer workshops on contemporary topics such as nanotechnology, photonics, and optical communication. In addition, the CIPT maintains an online database containing instructions for 40 lab experiments and an equipment lending library that allows the more than 1300 CIPT alumni in the US to borrow power supplies, multimeters, and other materials needed for the experiments.

“Last year we had over 200 requests for hardware,” says CIPT’s director of education programs, Julie Nucci, who adds that the CIPT-developed experiments are being translated into Spanish in collaboration with the University of Puerto Rico. Nucci says the CIPT is pursuing alternative funding to replace its current NSF grant, which expires in September.

Arts major to physics teacher?

Whereas the CIPT is training current high-school physics teachers, the PhysTEC program focuses on nurturing new ones. Working through university physics departments, PhysTEC attracts physics and engineering majors and helps them get certified as high-school physics teachers (see the article by Theodore Hodapp, Jack Hehn, and Warren Hein in Physics Today, February 2009, page 40). The PhysTEC program is managed by the American Physical Society and the American Association of Physics Teachers and supported by the American Institute of Physics (AIP, which publishes Physics Today). At least 17 US physics departments have adopted the program since it began in 1999.

“There’s a need for about 1200 new physics teachers per year to meet the demand of the more than 1 million students now taking high-school physics, and we’re still only producing 400,” says Theodore Hodapp, APS director of education and diversity. Many current high-school physics teachers aren’t trained physicists. According to AIP’s Statistical Research Center, 54% of the roughly 27 000 high-school physics teachers in the 2008–09 school year did not have a physics degree. That percentage doesn’t include those who were subsequently certified to teach high-school physics. “Sometimes it’s easy to look at numbers and to overlook what’s important, which is making sure that the teacher has the ability to actually teach,” says center director Roman Czujko.

“Give us a successful teacher, and we’ll teach him or her physics,” says Robert Goodman, 2006 New Jersey Teacher of the Year and director of the New Jersey Center for Teaching and Learning, which provides curricula, pedagogical tips, and classroom technologies to local high-school science teachers. The industry CEO turned physics teacher says his approach is to take skilled teachers from any discipline, including from the arts and other humanities, who have an interest in becoming science educators and train them using a curriculum he designed that teaches physics before chemistry and biology. “Physics first” has also been advocated by others, including physics Nobel laureate Leon Lederman (see his Reference Frame in Physics Today, September 2001, page 11). In the year since the New Jersey center started its teacher recruitment and training initiative, Goodman says the number of physics teachers in Newark has already tripled.

The American dream in peril

Programs like Goodman’s are preferred by politicians who favor local advancement of education policies. “The federal government shouldn’t legislate [education] reform,” says Representative Roscoe Bartlett (R-MD), a PhD physiologist and former professor. To promote STEM, the federal government can inspire the nation by inviting scientists to the White House and recognizing them in other visible ways, Bartlett says. “Although most of the Nobel Prize winners in science still come from the US, we aren’t producing nearly enough scientists anymore. Now, the bright young kids are going into law and political science. We have enough lawyers and political scientists.”

The federal government can partner with local school districts through grants to help teachers prepare for the higher national standards expected for the future, says University of Maryland physicist James Gates, who cochaired the PCAST STEM education report. But no matter who’s pulling the education policy strings, “there’s a real crisis here,” says Gates. “We’re not talking about STEM just to produce more scientists and more engineers. We’re talking about it in order to give our economy a shot at producing the American dream for future generations. If we don’t get this right, the American dream is going to die.”

Jan 26

Reported by Lisa Zyga, in PhysOrg, January 25, 2011.

Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.

Maxwell’s demon can extract work from a single heat reservoir at a cost of spin angular momentum. In step (a), the demon has no memory and the gas in the heat reservoir is in thermal equilibrium. In step (b), the demon divides the reservoir in two, trapping the fastest moving molecules on the right side, and uses a heat engine operating between the two partitions to extract work. In step (c), the demon's memory is erased using a spin reservoir and the two partitions are allowed to return to equilibrium. Image credit: Joan A. Vaccaro, et al. Fig. 1. ©2011 Royal Society.

In the study, Joan Vaccaro from Griffith University in Queensland, Australia, and Stephen Barnett from the University of Strathclyde in Glasgow, UK, have quantitatively described how information can be erased without any energy cost, and they also explain why the result is not as contentious as it first appears. Their paper is published in a recent issue of the Proceedings of the Royal Society A.

Traditionally, the process of erasing information requires a cost that is calculated in terms of energy – more specifically, heat dissipation. In 1961, Rolf Landauer argued that there was a minimum amount of energy required to erase one bit of information, i.e. to put a bit in the logical zero state. The minimum energy is proportional to the temperature of the system’s thermal reservoir and can be thought of in terms of the system’s thermodynamic entropy. As such, this entropy is considered to be a fundamental cost of erasing a bit of information.
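For reference (the figure itself is not quoted in the article), Landauer's limit for erasing a single bit in contact with a thermal reservoir at temperature T is

E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{at } T \approx 300\,\mathrm{K},

where k_B is Boltzmann's constant; the corresponding entropy change is k_B ln 2 per erased bit.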

However, Vaccaro and Barnett have shown that an energy cost can be fully avoided by using a reservoir based on something other than energy, such as spin angular momentum. Subatomic particles have spin angular momentum, a quantity that, like energy, must be conserved. Basically, instead of heat being exchanged between a qubit and thermal reservoir, discrete quanta of angular momentum are exchanged between a qubit and spin reservoir. The scientists described how repeated logic operations between the qubit’s spin and a secondary spin in the zero state eventually result in both spins reaching the logical zero state. Most importantly, the scientists showed that the cost of erasing the qubit’s memory is given in terms of the quantity defining the logic states, which in this case is spin angular momentum and not energy.

The scientists explained that experimentally realizing this scheme would be very difficult. Nevertheless, their results show that physical laws do not forbid information erasure with a zero energy cost, which is contrary to previous studies. The researchers noted that, in practice, it will be especially difficult to ensure the system’s energy degeneracy (that different spin states of the qubit and reservoir have the exact same energy level). But even if imperfect conditions cause some energy loss, there is no fundamental reason to assume that the cost will be as large as that predicted by Landauer’s formula.

The possibility of erasing information without using energy has implications for a variety of areas. One example is the paradox of Maxwell’s demon, which appears to offer a way of violating the second law of thermodynamics. By opening and closing a door to separate hot and cold molecules, the demon supposedly extracts work from the reservoir, converting all heat into useful mechanical energy. Bennett’s resolution of the paradox in 1982 argues that the demon’s memory has to be erased to complete the cycle, and the cost of erasure is at least as much as the liberated energy. However, Vaccaro and Barnett’s results suggest that the demon’s memory can be erased at no energy cost by using a different kind of reservoir, where the cost would be in terms of spin angular momentum. In this scheme, the demon can extract all the energy from a heat reservoir as useful energy at a cost of another resource.

As the scientists explained, this result doesn’t contradict historical statements of the second law of thermodynamics, which are exclusively within the context of heat and thermal reservoirs and do not allow for a broader class of reservoirs. Moreover, even though the example with Maxwell’s demon suggests that mechanical work can be extracted at zero energy cost, this extraction is associated with an increase in the information-theoretic entropy of the overall system.

“The maximization of entropy subject to a constraint need apply not only to heat reservoirs and the conservation of energy,” Vaccaro explained to PhysOrg.com.

The results could also apply to hypothetical Carnot heat engines, which operate at maximum efficiency. If these engines use angular momentum reservoirs instead of thermal reservoirs, they could generate angular momentum effort instead of mechanical work.

As for demonstrating the concept of erasing information at zero energy cost, the scientists said that it would take more research and time.

“We are currently looking at an idea to perform information erasure in atomic and optical systems, but it needs much more development to see if it would actually work in practice,” Vaccaro said.

She added that the result is of fundamental significance, and it’s not likely to have practical applications for memory devices.

“We don’t see this as having a direct impact in terms of practical applications, because the current energy cost of information erasure is nowhere near Landauer’s theoretical bound,” she said. “It’s more a case of what it says about fundamental concepts. For example, Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.”

Read more in Joan A. Vaccaro and Stephen M. Barnett. “Information erasure without an energy cost.” Proceedings of the Royal Society A. DOI:10.1098/rspa.2010.0577

Jan 25

Reported by Lisa Grossman, in Wired Science, January 21, 2011

Image: flickr/Darren Tunnicliff

In the weird world of quantum physics, two linked particles can share a single fate, even when they’re miles apart.

Now, two physicists have mathematically described how this spooky effect, called entanglement, could also bind particles across time.

If their proposal can be tested, it could help process information in quantum computers and test physicists’ basic understanding of the universe.

“You can send your quantum state into the future without traversing the middle time,” said quantum physicist S. Jay Olson of Australia’s University of Queensland, lead author of the new study.

In ordinary entanglement, two particles (usually electrons or photons) are so intimately bound that they share one quantum state — spin, momentum and a host of other variables — between them. One particle always “knows” what the other is doing. Make a measurement on one member of an entangled pair, and the other changes immediately.

Physicists have figured out how to use entanglement to encrypt messages in uncrackable codes and build ultrafast computers. Entanglement can also help transmit encyclopedias’ worth of information from one place to another using only a few atoms, a protocol called quantum teleportation.

In a new paper posted on the physics preprint website arXiv.org, Olson and Queensland colleague Timothy Ralph perform the math to show how these same tricks can send quantum messages not only from place to place, but from the past to the future.

The equations involved defy simple mathematical explanation, but the idea behind them is intuitive: if it’s impossible to describe one particle without including the other, that interdependence logically extends to time as well as space.

“If you use our timelike entanglement, you find that [a quantum message] moves in time, while skipping over the intermediate points,” Olson said. “There really is no difference mathematically. Whatever you can do with ordinary entanglement, you should be able to do with timelike entanglement.”

Olson explained the idea with a Star Trek analogy. In one episode, “beam me up” teleportation expert Scotty is stranded on a distant planet with a limited air supply. To survive, Scotty freezes himself in the transporter, awaiting rescue. When the Enterprise arrives decades later, Scotty steps out of the machine without having aged a day.

“It’s not time travel as you would ordinarily think of it, where it’s like, poof! You’re in the future,” Olson said. “But you get to skip the intervening time.”

According to quantum physicist Ivette Fuentes of the University of Nottingham, who saw Olson and Ralph present the work at a conference, it’s “one of the most interesting results” published in the last year.

“It stimulated our imaginations,” said Fuentes. “We know entanglement is a resource and we can do very interesting things with it, like quantum teleportation and quantum cryptography. We might be able to exploit this new entanglement to do interesting things.”

One such interesting thing could involve storing information in black holes, said physicist Jorma Louko, also of the University of Nottingham.

“They show that you can use the vacuum, that no-particle state, to store a lot of information in just a couple of atoms, and recover that info from other atoms later on,” Louko said. “The details of that have not been worked out, but I can foresee that the ideas that these authors use could be adapted to the black hole context.”

Entanglement in time could also be used to investigate as-yet-untested fundamentals of particle physics. In the 1970s, physicist Bill Unruh predicted that, if a spaceship accelerates through the empty space of a vacuum, particles should appear to pop out of the void. Particles carry energy, so they would be, in effect, a warm bath. Wave a thermometer outside, and it would record a positive temperature.
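For context (the formula is not given in the article), the standard expression for the Unruh temperature registered by a detector with proper acceleration a is

T_U = \frac{\hbar a}{2\pi c k_B} \approx \left(4 \times 10^{-21}\,\mathrm{K}\right) \times \frac{a}{1\,\mathrm{m/s^2}},

so producing even a one-kelvin thermal bath requires an acceleration of order 10^20 m/s².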

Called the Unruh effect, this is a solid prediction of quantum field theory. It’s never been observed, however, as a spaceship would have to accelerate at as-yet-unachievable rates to generate an effect large enough to be testable. But because timelike entanglement also involves particles emerging from the vacuum, it could be used to conduct more convenient searches, relying on time rather than space.

Finding the Unruh effect would provide support for quantum field theory. But it might be even more exciting not to see the effect, Olson said.

“It would be more of a shocking result,” Olson said. “If you didn’t see it, something would be very wrong with our understanding.”

Jan 21

Reported in ScienceDaily, Jan. 20, 2011.

Some amoebae do what many people do. Before they travel, they pack a lunch. In results of a study reported January 19 in the journal Nature, evolutionary biologists Joan Strassmann and David Queller of Rice University show that the long-studied social amoebae Dictyostelium discoideum (commonly known as slime molds) increase their odds of survival through a rudimentary form of agriculture.

This is an alternate view of amoebae fruiting bodies, with spores and bacteria. (Credit: Owen Gilbert)

Research by lead author Debra Brock, a graduate student at Rice, found that some amoebae sequester their food–particular strains of bacteria–for later use.

“We now know that primitively social slime molds have genetic variation in their ability to farm beneficial bacteria as a food source,” says George Gilchrist, program director in the National Science Foundation’s Division of Environmental Biology, which funded the research. “But the catch is that with the benefits of a portable food source, comes the cost of harboring harmful bacteria.”

After these “farmer” amoebae aggregate into a slug, they migrate in search of nourishment–and form a fruiting body, or a stalk of dead amoebae topped by a sorus, a structure containing fertile spores. Then they release the bacteria-containing spores into the environment, where the carried bacteria serve as feedstock for continued growth.

The findings run counter to the presumption that all “Dicty” eat everything in sight before they enter the social spore-forming stage.

Non-farmer amoebae do eat everything, but farmers were found to leave food uneaten, and their slugs don’t travel as far.

Perhaps because they don’t have to.

The advantages of going hungry now to ensure a good food supply later are clear, as farmers are able to thrive in environments in which non-farmers find little food.

The researchers found that about a third of wild-collected Dicty are farmers.

Instead of consuming all the bacteria they encounter, these amoebae eat less and incorporate bacteria into their migratory systems.

Brock showed that carrying bacteria is a genetic trait by eliminating all living bacteria from four farmers and four non-farmers–the control group–by treating them with antibiotics.

All amoebae were grown on dead bacteria; tests confirmed that they were free of live bacteria.

When the eight clones were then fed live bacteria, the farmers all regained their abilities to seed bacteria colonies, while the non-farmers did not.

Dicty farmers are always farmers; non-farmers never learn.

Rice graduate student Tracy Douglas co-authored the paper with Brock, Queller and Strassmann. She confirmed that farmers and non-farmers belong to the same species and do not form a distinct evolved group.

Still, mysteries remain.

The researchers want to know what genetic differences separate farmers from non-farmers. They also wonder why farmer clones don’t migrate as far as their counterparts.

It might be a consequence of bacterial interference, they say, or an evolved response, since farmers carry the seeds of their own food supply and don’t need to go as far.

Also, some seemingly useless or even harmful bacteria are not consumed as food, but may serve an as-yet-undetermined function, Brock says.

That has implications for treating disease, says Strassmann: it may, for instance, provide clues to the way tuberculosis bacteria invade cells, infecting the host while resisting attempts to break them down.

The results demonstrate the importance of working in natural environments with wild organisms whose complex ties to their living environment have not been broken.

Read more in Debra A. Brock, Tracy E. Douglas, David C. Queller, Joan E. Strassmann. Primitive agriculture in a social amoeba. Nature, 2011; 469 (7330): 393 DOI: 10.1038/nature09668

Read also in Wired Science the article by Dave Mosher (January 19, 2011), Slime Molds Are Earth’s Smallest, Oldest Farmers.

Jan 21

Reported in ScienceDaily, Jan. 20, 2011.

The eyes of moths, which allow them to see well at night, are also covered with a water-repellent, antireflective coating that makes their eyes among the least reflective surfaces in nature and helps them hide from predators in the dark. Mimicking the moth eye’s microstructure, a team of researchers in Japan has created a new film, suitable for mass-production, for covering solar cells that can cut down on the amount of reflected light and help capture more power from the sun.

In a paper appearing in Energy Express, a bi-monthly supplement to Optics Express, the open-access journal published by the Optical Society (OSA), the team describes how this film improves the performance of photovoltaic modules in laboratory and field experiments, and they calculate how the anti-reflection film would improve the yearly performance of solar cells deployed over large areas in either Tokyo, Japan or Phoenix, Ariz.

“Surface reflections are an essential loss for any type of photovoltaic module, and ultimately low reflections are desired,” says Noboru Yamada, a scientist at Nagaoka University of Technology in Japan, who led the research with colleagues at Mitsubishi Rayon Co. Ltd. and Tokyo Metropolitan University.

The team chose to look at the effect of deploying this antireflective moth-eye film on solar cells in Phoenix and Tokyo because Phoenix is a “sunbelt” city, with a high annual amount of direct sunlight, while Tokyo is well outside the sunbelt region, with a high fraction of diffuse solar radiation.

They estimate that the films would improve the annual efficiency of solar cells by 6 percent in Phoenix and by 5 percent in Tokyo.

“People may think this improvement is very small, but the efficiency of photovoltaics is just like fuel consumption rates of road vehicles,” says Yamada. “Every little bit helps.”

Yamada and his colleagues found the inspiration for this new technology a few years ago after they began looking for a broad-wavelength and omnidirectional antireflective structure in nature. The eyes of the moth were the best they found.

The difficulty in making the film, says Yamada, was designing a seamless, high-throughput roll-to-roll process for nanoimprinting the film. This was ultimately solved by Hideki Masuda, one of the authors on the Energy Express paper, and his colleagues at Mitsubishi Rayon Co. Ltd.

The team is now working on improving the durability of the film and optimizing it for many different types of solar cells. They also believe the film could be applied as an anti-reflection coating to windows and computer displays.

Read more in Noboru Yamada, Toshikazu Ijiro, Eiko Okamoto, Kentaro Hayashi, Hideki Masuda. Characterization of antireflection moth-eye film on crystalline silicon photovoltaic module. Optics Express, 2011; 19 (S2): A118 DOI: 10.1364/OE.19.00A118

Jan 18

Reported by Bob Brown, in Network World, January 11, 2011.

Google Science Fair looks to bring glory to science talent.

Google (NSDQ: GOOG) is urging youths ages 13 to 18 to take part in a worldwide science fair that will be hosted by the search giant online.

Google, in a blog posting titled “Google Science Fair seeks budding Einsteins and Curies,” invokes the story of its founders, onetime computer science students Larry Page and Sergey Brin, to encourage young people to take part in the event.

“Larry and Sergey were fortunate to be able to get their idea in front of lots of people. But how many ideas are lost because people don’t have the right forum for their talents to be discovered? We believe that science can change the world—and one way to encourage that is to celebrate and champion young scientific talent as we do athletes and pop idols,” Google writes.

The Google Science Fair is being conducted in partnership with CERN, The LEGO Group, National Geographic and Scientific American.  Details on how to enter are here, but the basics are that students can enter by themselves or in groups of three by April 4.  Finalists will be invited to participate in a live event at Google headquarters in Silicon Valley. Prizes include everything from a trip to the Galapagos Islands to scholarships, and entrants are free to double dip by submitting projects they are doing for local competitions into the Google Science Fair.

The Google Science Fair isn’t the first time Google has sought to inspire creativity via the contest route. It used to hold an Android Developer Challenge to entice programmers to create apps for Android smartphones. (That effort seems to have worked out pretty well, given the growing popularity of Android devices.)

Jan 18

Reported by Esther, in io9, January 13, 2011.

Wine makes superconductors better at their jobs. And apparently, it makes some scientists better at their jobs too.

Superconductors behave like most metals; they conduct electricity. They do so, however, with a twist. All metals have some resistance to the flow of electricity. But when the temperature drops, superconductors get less and less resistant (and therefore more conductive). When they reach very low temperatures, their resistance drops to zero.

Yoshihiko Takano and other researchers at the National Institute for Materials Science in Japan were in the process of creating a certain kind of superconductor by putting a compound in hot water and soaking it for hours. They also soaked the compound in a mixture of water and ethanol. It appears the process was going well, because the scientists decided to have a little party. The party included sake, whisky, various wines, shochu, and beer. At a certain point, the researchers decided to try soaking the compound in the many, many liquors they had on hand and seeing how they compared to the more conventional soaking liquids.

When they tested the resulting materials for superconductivity, they found that the ones soaked in commercial booze came out ahead. About 15 percent of the material soaked in the water-ethanol mixture became superconducting, and less for the pure water. By comparison, shochu pushed the superconducting fraction to 23 percent, and red wine managed to supercharge over 62 percent of the material. The scientists were pleased, if bemused, with their results.

So, a little sip of something turns out to make potential superconductors much better at their jobs. And, perhaps, scientists better at their jobs as well.

Via Cornell.

Jan 16

Reported in Discovery News, Mon Jan 10, 2011 (Content provided by AFP/Judith Evans)

Scientists have found a way to store, encrypt and retrieve complex data in the DNA of E. coli.

Instead of changing the building blocks of an organism, Hong Kong researchers developed a way to allow extra information to piggyback on the DNA of the cell. Ingram Publishing/Getty Images

A group of students at Hong Kong’s Chinese University are making strides towards storing such vast amounts of information in an unexpected home: the E. coli bacterium better known as a potential source of serious food poisoning.

“This means you will be able to keep large datasets for the long term in a box of bacteria in the refrigerator,” said Aldrin Yim, a student instructor on the university’s biostorage project, a 2010 gold medallist in the Massachusetts Institute of Technology’s prestigious iGEM competition.

Biostorage — the art of storing and encrypting information in living organisms — is a young field, having existed for about a decade.

In 2007, a team at Japan’s Keio University said they had successfully encoded the equation that represents Einstein’s theory of relativity, E=mc², in the DNA of a common soil bacterium.

They pointed out that because bacteria constantly reproduce, a group of the single-celled organisms could store a piece of information for thousands of years.

But the Hong Kong researchers have leaped beyond this early step, developing methods to store more complex data and starting to overcome practical problems which have lent weight to skeptics who see the method as science fiction.

The group has developed a method of compressing data, splitting it into chunks and distributing it between different bacterial cells, which helps to overcome limits on storage capacity. They are also able to “map” the DNA so information can be easily located.

This opens up the way to storing not only text, but images, music, and even video within cells.

As a storage method it is extremely compact — because each cell is minuscule, the group says that one gram of bacteria could store the same amount of information as 450 2,000-gigabyte hard disks.
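By the article's own numbers, 450 disks of 2,000 gigabytes each add up to roughly 900 terabytes per gram of bacteria. The report does not describe the encoding itself; purely as an illustration of the general idea of chunked DNA storage, the Python sketch below packs two bits into each base and splits the sequence into numbered fragments that could, in principle, be distributed across cells. The mapping and chunk size are arbitrary assumptions, not the Hong Kong team's scheme.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases, two bits per base."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def split_into_chunks(dna: str, chunk_size: int = 100):
    """Split the sequence into numbered fragments so they can be
    reassembled in order after being stored in separate cells."""
    return [(i // chunk_size, dna[i:i + chunk_size])
            for i in range(0, len(dna), chunk_size)]

message = "E=mc^2".encode("utf-8")
for index, fragment in split_into_chunks(bytes_to_dna(message), chunk_size=8):
    print(index, fragment)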

They have also developed a three-tier security fence to encode the data, which may come as welcome news to U.S. diplomats, who have seen their thoughts splashed over the Internet thanks to WikiLeaks.

“Bacteria can’t be hacked,” points out Allen Yu, another student instructor.

“All kinds of computers are vulnerable to electrical failures or data theft. But bacteria are immune from cyber attacks. You can safeguard the information.”

The team have even coined a word for this field — biocryptography — and the encoding mechanism contains built-in checks to ensure that mutations in some bacterial cells do not corrupt the data as a whole.

Professor Chan Ting Fung, who supervised the student team, told AFP that practical work in the field — fostered by MIT, who have helped develop standards enabling researchers to collaborate — was in its early stages.

But he said: “What the students did was to try it out and make sure some of the fundamental principles are actually achievable.”

The Hong Kong group’s work may have a more immediate application.

The techniques they use — removing DNA from bacterial cells, manipulating them using enzymes and returning them to a new cell — are similar to those used to create genetically modified foods.

But rather than changing the building blocks of an organism, the Hong Kong group allows extra information to piggyback on the DNA of the cell, after checking their changes against a master database to make sure they do not have accidental toxic effects.

Their work could enable extra information to be added to a genetically modified crop in the form of a “bio barcode”, Chan said.

“For example, a company that makes a GM tomato that grows extra large with a gene that promotes growth — on top of that we can actually encode additional information like safety protocols, things that are not directly related to the biological system.”

Other types of information, like copyright and design history, could help to monitor the spread of GM crops, he said.

“It’s kind of a safety net for synthetic organisms,” said Wong Kit Ying, from the student team.

Beyond this, Chan and the students are evangelical about the future possibilities of synthetic biology.

“The field is getting popular because of the energy crisis, environmental pollution, climate change. They are thinking that a biological system will be a future solution to those — as alternative energy sources, as a remedy for pollution. For these, micro-organisms are the obvious choice,” Chan said.

One type of bacterium, Deinococcus radiodurans, can even survive nuclear radiation.

“Bacteria are everywhere: they can survive on things that are unthinkable to humans. So we can make use of this,” Chan said.

So is it possible that a home computer could one day consist of a dish filled with micro-organisms?

The group dismisses concerns that this could be dangerous, pointing out that despite E. coli‘s poor reputation, they use an altered form that cannot exist outside a rich synthetic medium.

In fact, says Chan, while safety rules are strict, more measures are taken to protect the bacteria from contamination than to protect the researchers from the bacteria.

However, Yim admitted that while the group’s work is a “foundational advance”, a Petri dish PC is not likely to be on the market in the coming years, not least because the method of retrieving the data requires experts in a laboratory.

“It’s possible,” he said, “but there’s a long way to go.”

Read also in Treehugger.

Jan 13

Reported in MIT News by Jessica Holmes (written by Larry Hardesty, MIT News Office), January 3, 2011

A computer chip that performs imprecise calculations could process some types of data thousands of times more efficiently than existing chips.

Ask a computer to add 100 and 100, and its answer will be 200. But what if it sometimes answered 202, and sometimes 199, or any other number within about 1 percent of the correct answer?

Arithmetic circuits that returned such imprecise answers would be much smaller than those in today’s computers. They would consume less power, and many more of them could fit on a single chip, greatly increasing the number of calculations it could perform at once. The question is how useful those imprecise calculations would be.

If early results of a research project at MIT are any indication, the answer is, Surprisingly useful. About a year ago, Joseph Bates, an adjunct professor of computer science at Carnegie Mellon University, was giving a presentation at MIT and found himself talking to Deb Roy, a researcher at MIT’s Media Lab. Three years earlier, before the birth of his son, Roy had outfitted his home with 11 video cameras and 14 microphones, intending to flesh out what he calls the “surprisingly incomplete and biased observational data” about human speech acquisition. Data about a child’s interactions with both its caregivers and its environment could help confirm or refute a number of competing theories in developmental psychology. But combing through more than 100,000 hours of video for, say, every instance in which either a child or its caregivers say “ball,” together with all the child’s interactions with actual balls, is a daunting task for human researchers and artificial-intelligence systems alike. Bates had designed a chip that could perform tens of thousands of simultaneous calculations using sloppy arithmetic and was looking for applications that lent themselves to it.

Roy and Bates knew that algorithms for processing visual data are often fairly error-prone: A system that identifies objects in static images, for instance, is considered good if it’s right about half the time. Increasing a video-processing algorithm’s margin of error ever so slightly, the researchers reasoned, probably wouldn’t compromise its performance too badly. And if the payoff was the ability to do thousands of computations in parallel, Roy and his colleagues might be able to perform analyses of video data that they hadn’t dreamt of before.

So in May 2010, with funding from the U.S. Office of Naval Research, Bates came to MIT as a visiting professor, working with Roy’s group to determine whether video algorithms could be retooled to tolerate sloppy arithmetic. George Shaw, a graduate student in Roy’s group, began by evaluating an algorithm, commonly used in object-recognition systems, that distinguishes foreground and background elements in frames of video.

To simulate the effects of a chip with imprecise arithmetic circuits, Shaw rewrote the algorithm so that the results of all its numerical calculations were either raised or lowered by a randomly generated factor of between 0 and 1 percent. Then he compared its performance to that of the standard implementation of the algorithm. “The difference between the low-precision and the standard arithmetic was trivial,” Shaw says. “It was about 14 pixels out of a million, averaged over many, many frames of video.” “No human could see any of that,” Bates adds.
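A minimal Python sketch of that kind of simulation might look like the following; this is an assumption about the general approach, not Shaw's actual code, and the function names are made up for illustration.

import random

ERROR_BOUND = 0.01  # perturb each result by up to 1 percent, as described above

def sloppy(value: float) -> float:
    """Raise or lower a numerical result by a random factor of between 0 and 1 percent."""
    return value * (1.0 + random.uniform(-ERROR_BOUND, ERROR_BOUND))

def sloppy_add(a: float, b: float) -> float:
    return sloppy(a + b)

def sloppy_mul(a: float, b: float) -> float:
    return sloppy(a * b)

# "Ask a computer to add 100 and 100": an imprecise chip might answer 199 or 202.
print(sloppy_add(100, 100))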

Of course, a really useful algorithm would have to do more than simply separate foregrounds and backgrounds in frames of video, and the researchers are exploring what tasks to tackle next. But Bates’ chip design looks to be particularly compatible with image and video processing. Although he hasn’t had the chip manufactured yet, Bates has used standard design software to verify that it will work as anticipated. Where current commercial computer chips often have four or even eight “cores,” or separate processing units, Bates’ chip has a thousand; since they don’t have to provide perfectly precise results, they’re much smaller than conventional cores.

But the chip has another notable idiosyncrasy. In most commercial chips, and even in many experimental chips with dozens of cores, any core can communicate with any other. But sending data across the breadth of a chip consumes much more time and energy than sending it locally. So in Bates’ chip, each core can communicate only with its immediate neighbors. That makes it much more efficient — a chip with 1,000 cores would really be 1,000 times faster than a conventional chip — but it also limits its use. Any computation that runs on the chip has to be easily divided into subtasks whose results have consequences mainly for small clusters of related subtasks — those running on the adjacent cores.

Fortunately, video processing seems to fit the bill. Digital images are just big blocks of pixels, which can be split into smaller blocks of pixels, each of which is assigned its own core. If the task is to, say, determine whether the image changes from frame to frame, each core need report only on its own block. The core associated with the top left corner of the image doesn’t need to know what’s happening in the bottom right corner.
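A toy version of that block-local computation, written in Python with NumPy for an ordinary computer and assuming grayscale frames (the function and threshold are illustrative, not part of the actual chip or algorithm):

import numpy as np

def block_changes(prev_frame, next_frame, block=32, threshold=5.0):
    """Return a grid of booleans, one per block: True where the mean pixel
    difference inside that block exceeds the threshold. Each block inspects
    only its own pixels, the way a core that talks only to its neighbors would."""
    rows, cols = prev_frame.shape[0] // block, prev_frame.shape[1] // block
    changed = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            window = (slice(r * block, (r + 1) * block),
                      slice(c * block, (c + 1) * block))
            diff = np.abs(next_frame[window].astype(float) -
                          prev_frame[window].astype(float))
            changed[r, c] = diff.mean() > threshold
    return changed

# Example: perturb only the top-left block of a random 256x256 "frame".
f0 = np.random.randint(0, 200, (256, 256))
f1 = f0.copy()
f1[:32, :32] += 50
print(block_changes(f0, f1))  # True only in the top-left entry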

Bates has identified a few other problems that his chip also handles well. One is a standard problem in computer science called “nearest-neighbor search,” in which you have a set of objects that can each be described by hundreds or thousands of criteria, and you want to find the one that best matches some sample. Another is computer analysis of protein folding, in which you need to calculate all the different ways in which the different parts of a long biological molecule could interact with each other.
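For reference, the brute-force version of nearest-neighbor search is only a few lines (a generic sketch in Python, not tied to Bates' chip):

import numpy as np

def nearest_neighbor(sample, objects):
    """Return the index of the row in `objects` closest to `sample`,
    measured by Euclidean distance over all the describing criteria."""
    distances = np.linalg.norm(objects - sample, axis=1)
    return int(np.argmin(distances))

# 10,000 objects, each described by 1,000 numerical criteria.
objects = np.random.rand(10_000, 1_000)
sample = np.random.rand(1_000)
print(nearest_neighbor(sample, objects))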

Bob Colwell, who was the chief architect on several of Intel’s Pentium processors and has been a private consultant since 2000, thinks that the most promising application of Bates’ chip could be in human-computer interactions. “There’s a lot of places where the machine does a lot of work on your behalf just to get information in and out of the machine suitable for a human being,” Colwell says. “If you put your hand on a mouse, and you move it a little bit, it really doesn’t matter where exactly the mouse is, because you’re in the loop. If you don’t like where the cursor goes, you’ll move it a little more. Real accuracy in the input is really not necessary.” A system that can tolerate inaccuracy in the input, Colwell argues, can also tolerate (some) inaccuracy in its calculations. The graphics processors found in most modern computers are another example, Colwell says, since they work furiously hard to produce 3-D images that probably don’t need to be rendered perfectly.

Bates stresses that his chip would work in conjunction with a standard processor, shouldering a few targeted but labor-intensive tasks, and Colwell says that, depending on how Bates’ chip is constructed, there could be some difficulty in integrating it with existing technologies. He doesn’t see any of the technical problems as insurmountable, however. But “there’s going to be a fair amount of people out in the world that as soon as you tell them I’ve got a facility in my new chip that gives sort-of wrong answers, that’s what they’re going to hear no matter how you describe it,” he adds. “That’s kind of a non-technical barrier, but it’s real nonetheless.”

Jan 13

Reported in Wired Science by Wired Science Staff, December 30, 2010.

In a year full of major advances, over-hyped findings and controversial studies, it was tough for the Wired Science staff to choose which breakthroughs were the biggest in 2010. So we’ve collected the ones that stood out the most to us.

From synthetic life and three-parent embryos to the possibility of a new human ancestor and a habitable exoplanet, here are the breakthroughs that made us shout “Science!” the loudest this year.

Dinosaur Colors

For the first time, scientists were able to use direct fossil evidence to make a reasonable interpretation of a dinosaur’s color.

Building on the discovery of preserved traces of pigment structures in cells in fossilized dinosaur feathers (above), paleontologists compared the dinosaur cells with the corresponding cells in living birds. By studying the colors created by different combinations of these melanosomes in bird feathers, the researchers recreated the coloring of a recently discovered feathered dinosaur, Anchiornis huxleyi (right).

The dinosaur probably had bright orange feathers on its head and speckled on its throat, a grey body and white accents on its wings.

The same technique was subsequently used to determine the color of a giant fossil penguin.

Images: 1) Sam Ose/Wikimedia Commons 2) Michael DiGiorgio/Yale University
