Feb 28

Reported by Josh Fischman, in The Chronicle of Higher Education, 10 February 2011.

Scientists are wasting much of the data they create. Worldwide computing capacity grew by 58 percent a year from 1986 to 2007, and people sent almost two quadrillion megabytes of data to one another, according to a study published on Thursday in Science. Yet researchers in a wide range of disciplines say much of that data is being lost.

In 10 new articles, also published in Science, researchers in fields as diverse as paleontology and neuroscience say that the lack of data libraries, insufficient support from federal research agencies, and the absence of academic credit for sharing data sets have created a situation in which money is wasted and information that could reveal better cancer treatments or the causes of climate change falls by the wayside.

“Everyone bears a certain amount of responsibility and blame for this situation,” said Timothy B. Rowe, a professor of geological sciences at the University of Texas at Austin, who wrote one of the articles.

A big problem is the many forms data take and the difficulty of comparing them. In neuroscience, for instance, researchers collect data on time scales ranging from nanoseconds, if they are looking at rates of neuron firing, to years, if they are looking at developmental changes. There are also differences between the kinds of data that come from optical microscopes and those that come from electron microscopes, and between data gathered at the scale of a single cell and data from a whole organism.

“I have struggled to cope with this diversity of data,” said David C. Van Essen, chair of the department of anatomy and neurobiology at the Washington University School of Medicine, in St. Louis. Mr. Van Essen co-authored the Science article on the challenges data present to brain scientists. “For atmospheric scientists, they have one earth. We have billions of individual brains. How do we represent that? It’s precisely this diversity that we want to explore.”

He added that he was limited by how data are published. “When I see a figure in a paper, it’s just the tip of the iceberg to me. I want to see it in a different form in order to do a different kind of analysis.” But the data are not available in a public, searchable format.

Ecologists also struggle with data diversity. “Some measurements, like temperature, can be taken in many places and in many ways,” said O.J. Reichman, a researcher at the National Center for Ecological Analysis and Synthesis, at the University of California at Santa Barbara. “It can be done with a thermometer, and also by how fast an organ grows in a crayfish” because growth is temperature-sensitive, said Mr. Reichman, a co-author of another of the Science articles.

A Big Success Story

The situation criticized in the Science articles contrasts with the big success story in scientific data libraries, GenBank, the gene-sequence repository, said Mr. Reichman and several other scientists. GenBank created a common format for data storage and made it easy for researchers to access it. But Mr. Reichman added that GenBank did not have to deal with the diversity issue.

“GenBank basically had four molecules in different arrangements,” he said. “We have way more than four things in ecology,” he continued, echoing Mr. Van Essen’s lament.

But even gene scientists today say they are struggling with the many permutations of those four molecules. In another Science article, Scott D. Kahn, chief information officer at Illumina, a leading maker of DNA-analysis equipment, notes that output from a single gene-sequencing machine has grown from 10 megabytes to 10 gigabytes per day, and 10 to 20 major labs now use 10 of those machines each. One solution being contemplated, he writes, is to store just one copy of a standard “reference genome” plus mutations that differ from the standard. That amounts to only 0.1 percent of the available data, possibly making it easier for researchers to store the information and analyze it.
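The reference-plus-differences idea is easy to see in miniature. Below is a hypothetical Python sketch (the sequence, function names, and two-variant example are invented for illustration; real pipelines record variants in standardized formats such as VCF): only the positions where an individual differs from the shared reference are stored.

```python
# Hypothetical sketch of reference-plus-variants storage, not Illumina's
# actual pipeline: keep one copy of the reference sequence and, for each
# individual, only the positions where that individual differs from it.

REFERENCE = "ACGTACGTACGTACGT"  # stand-in for a full reference genome

def diff_against_reference(genome, reference=REFERENCE):
    """Return an individual's genome as a sparse list of (position, base) differences."""
    return [(i, base)
            for i, (base, ref) in enumerate(zip(genome, reference))
            if base != ref]

def reconstruct(variants, reference=REFERENCE):
    """Rebuild the full sequence from the reference plus the stored differences."""
    seq = list(reference)
    for pos, base in variants:
        seq[pos] = base
    return "".join(seq)

individual = "ACGTACCTACGTACGA"              # differs from the reference at two positions
variants = diff_against_reference(individual)
print(variants)                              # [(6, 'C'), (15, 'A')]
assert reconstruct(variants) == individual   # lossless round trip
```

Two tuples stand in for a second full-length string; scaled up to whole genomes, the same trick is what shrinks per-individual storage to the small fraction Kahn describes.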

To cope with data diversity, Mr. Reichman said scientists should develop a common language for tagging their data. “If you record data from a particular location, the tags about that location—latitude and longitude, for instance—need to be consistent from researcher to researcher,” he said. Ecology has grown into a relatively idiosyncratic science in which researchers each have their own methods, so a common language will require a culture shift. “It’s become more urgent to do this because of the pressing environmental questions, like the effects of climate change, that we are being called on to answer,” he said. “And the ability to access more than one set of measurements or interactions will make the science better.”
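As a concrete illustration of what such a common tagging language might look like, here is a minimal Python sketch; the tag names are invented for this example and are not an actual ecology metadata standard.

```python
# Invented example of a shared tag vocabulary: every data record must carry
# the same location and measurement tags, whatever instrument produced it.
REQUIRED_TAGS = {"latitude", "longitude", "timestamp", "variable", "units"}

def validate_record(record):
    """Reject records that omit any of the agreed-upon tags."""
    missing = REQUIRED_TAGS - record.keys()
    if missing:
        raise ValueError(f"record is missing required tags: {sorted(missing)}")
    return record

# A thermometer reading and a crayfish growth proxy can then be compared,
# because both arrive with the same location/time/units tags.
validate_record({
    "latitude": 34.41, "longitude": -119.84,
    "timestamp": "2011-02-28T12:00:00Z",
    "variable": "water_temperature", "units": "degC",
    "value": 11.3,
})
```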

Another factor that makes developing shared-data libraries urgent is that many scientists now store their own data. “And when they retire or die, their data goes with them,” said Mr. Rowe. In his field, which uses three-dimensional-imaging machines like CT scanners to analyze fossils, the first people to do that have already left the field, and a tremendous amount of data has been lost with them.

There is a financial cost to this, he added. “It costs money to do a CT scan, and the National Science Foundation pays for that with a grant. But if that scan isn’t curated, and disappears when the scientist retires or forgets about it, then the next scientist asks the NSF for money to do it again. That’s just a waste,” he said.

In all of the papers, scientists cited examples of small libraries of shared data that could be scaled up. Mr. Rowe helped to develop a project called DigiMorph, which contains three-dimensional scans of about 1,000 biological specimens and fossils. Those data sets have been viewed by about 80,000 visitors, he said, and have been used in 100 scientific papers. Sharing the data, he said, brings the cost to researchers, and their grant-giving agencies, way down. Another project, the Neuroscience Information Framework, contains many more data sets and has been used by even more scientists.

Mr. Rowe thinks agencies like the NSF and the National Institutes of Health should get behind efforts like this to a much greater extent than they have done. “Right now they are financing data generation, but not the release of that data, or the ability of other scientists to analyze it. I think, with all respect, that they are really missing the boat.”

Feb 26

Reported by Jocelyn Kaiser, in Science, 23 February 2011.

Want to know whether you have cancer? There may soon be an app for that. Cancer researchers have come up with a small device that—with the aid of a smart phone—could allow physicians to find out within 60 minutes whether a suspicious lump in a patient is cancerous or benign.

Instead of immediately cutting out masses that they suspect are tumors, oncologists often use a thick needle to remove a few cells from a lump for analysis at a pathology lab. But the tests used there, such as examining the shape of cells and staining for various proteins, are sometimes inconclusive. The lab tests also take several days.

As an alternative, physician-scientist Ralph Weissleder’s team at Massachusetts General Hospital (MGH) in Boston developed a miniature version of a nuclear magnetic resonance (NMR) machine—the workhorse tool that allows researchers to identify chemical compounds by the way their nuclei react in magnetic fields. The researchers also found a way to attach magnetic nanoparticles to proteins so that the machine can pick these specific proteins out from a gemisch of chemicals, like those found in a tumor cell sample. A standard chemistry lab’s NMR machine approaches the size of a file cabinet, but the new device is only about as big as a coffee cup.

To see how this might work in the cancer clinic, the MGH researchers used the standard needle procedure to collect suspicious cells from patients’ abdomens. They then labeled the cells with various magnetic nanoparticles designed to attach to known cancer-associated proteins and injected the cells into their miniature NMR machine. The device, whose data can be read with a smart phone application instead of a computer, detected levels of nine protein markers for cancer cells.

By combining results for four of these proteins, the MGH team accurately diagnosed biopsies for 48 of 50 patients (96%) in less than an hour per patient. The micro-NMR diagnosis was correct 100% of the time in another set of 20 patients, the team reports today in Science Translational Medicine. By contrast, standard pathology tests on similar samples were correct only 74% to 84% of the time.

Weissleder hopes the device will allow a doctor to test a needle biopsy sample within minutes of collecting it and tell the patient the results as soon as he or she wakes from the procedure. Right now, patients come in for a biopsy, go home, and wait several days for the results. “Our patients hate that week of not knowing if they have cancer,” he says. The strategy should also cut down on repeat biopsies, which typically cost thousands of dollars, he says.

Eventually, the researchers hope to use their mini-NMR device to track the course of cancer and determine whether patients are responding to drugs by detecting levels of specific proteins in blood samples.

Tumor immunologist John Greenman of the University of Hull in the United Kingdom, who also works on so-called lab-on-a-chip devices, calls the study “extremely interesting” as an early example of this technology. What’s key, he says, is that the MGH group has compared its test with standard tests, which “is essential to gain the support of the medical community.” Such devices might have applications far beyond cancer, such as monitoring the environment and detecting biological weapons, he says.

Feb 24

Reported by Lisa Zyga, in PhysOrg, 22 February 2011.

Scientific concepts such as climate change, nanotechnology, and chaos theory can sometimes spring up and capture the attention of both the scientific and public communities, only to be replaced by new ideas later on. Although many factors influence the emergence and decline of such scientific paradigms, a new model has captured how these ideas spread, providing a better understanding of paradigm shifts and the culture of innovation.

This figure shows 12 consecutive states of a system driven by the model, with one unit of time equaling one update for every agent. In the first picture, a new idea is dominating but small specks of color represent a finite innovation rate. A new state dominates between the third and fourth pictures, and in the fourth, fifth, and sixth pictures, two coherent states coexist. New individual dominant states arise in pictures nine and twelve. Image credit: S. Bornholdt, et al. ©2011 American Physical Society.

The researchers, Stefan Bornholdt from the University of Bremen in Bremen, Germany, and Mogens Høgh Jensen and Kim Sneppen from the Niels Bohr Institute in Copenhagen, Denmark, have published their study, “Emergence and Decline of Scientific Paradigms,” in a recent issue of Physical Review Letters.

“Our model addresses the interplay between a new idea and the difficulty it has in displacing old ideas in a world where alignment of interests is dominating,” Bornholdt told PhysOrg.com.

Several models of opinion formation already exist, but the new model differs from earlier ones in a few important ways. Unlike previous models, it allows for an infinite variety of ideas, although each idea has only a small probability of being initiated. Also, each idea can appear only once, and an agent (or individual) can never return to an idea it has previously held, reflecting scientists’ ongoing hunt for new ideas.

In the model, which is defined on a 2D square lattice, ideas spread in two possible ways. In the first way, an agent adopts a new idea held by its neighbors, with a probability proportional to how many agents already hold this particular idea. In the second way, an agent randomly gets a new idea that does not appear anywhere else in the system with a probability that depends on the “innovation rate.” The first way represents cooperative effects in social systems, while the second way represents innovation.
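Those two update rules are simple enough to simulate directly. The following is a minimal Python sketch under stated assumptions: the lattice size and innovation rate are arbitrary illustration values, and copying a uniformly random neighbour serves as a local stand-in for adoption in proportion to an idea’s support, which may differ in detail from the published model.

```python
# Toy rendering of the two update rules: innovate with probability ALPHA,
# otherwise try to copy a random neighbour, never returning to a past idea.
import random

L = 32            # side length of the 2D square lattice (illustrative)
ALPHA = 1e-4      # innovation rate (illustrative)

grid = [[0] * L for _ in range(L)]                   # everyone starts with idea 0
held = [[{0} for _ in range(L)] for _ in range(L)]   # ideas each agent has ever held
next_idea = 1

def step():
    global next_idea
    i, j = random.randrange(L), random.randrange(L)
    if random.random() < ALPHA:
        grid[i][j] = next_idea                       # a brand-new idea enters the system
        held[i][j].add(next_idea)
        next_idea += 1
    else:
        ni, nj = random.choice([((i + 1) % L, j), ((i - 1) % L, j),
                                (i, (j + 1) % L), (i, (j - 1) % L)])
        idea = grid[ni][nj]
        if idea not in held[i][j]:                   # the never-return rule
            grid[i][j] = idea
            held[i][j].add(idea)

for _ in range(200_000):                             # many updates per agent
    step()
```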

The model shows how a system with one dominating scientific paradigm transitions into a system with small clusters of ideas, some of which continue to grow until one dominates, and the process repeats with new ideas. The dynamics of the rise and fall of scientific paradigms depends on the system’s innovation rate. Systems with high innovation rates tend to contain a high degree of noise, along with many small domains of ideas that are constantly generated and replaced. In contrast, systems with low innovation rates tend to have low noise and a state that remains dominant for a long time until a single event replaces it.

In addition to providing a theoretical understanding of how scientific paradigms rise and fall, the model also provides insight that helps explain some observations in real life. For instance, the model shows how small systems have the potential to be more dynamic than large systems, which explains why large companies sometimes acquire small start-up firms as a source of innovation.

“Our model indicates that social cooperation makes it more difficult for new ideas to nucleate because of social pressure,” Bornholdt said. “Accordingly, our model finds a ‘winner take all’ dynamic, suggesting a fashion-like dynamic for the prevailing focus of contemporary science.

“Even though our model is extremely simplified and does not deal with right and wrong, it explores the effect of herd mentality in the propagation of ideas,” he added. “Our model suggests that herd mentality makes a larger system less innovative than several smaller ones. In short, for innovation it’s better to listen to yourself than to others.”

Overall, the model shows how new paradigms tend to rise quickly to dominance, decline slowly, and then be replaced quickly by other paradigms. When the innovation rate is high, the takeover process is chaotic, with many new ideas competing for dominance. Regardless of the ideas themselves, the model shows that the pattern of paradigm shifts remains fairly consistent over time.

The results could have implications for the philosophy of science and for science policy, as the model suggests that scientific diversity may need special attention. In addition, the researchers are applying the model to the study of the spread of epidemics.

“We are currently studying the ideas of ‘new’ and ‘old’ in epidemics modeling,” Bornholdt said, “where the ‘never-return-policy’ of ideas in the above is associated with immunity of infected hosts: A host that has been infected by a particular virus in the past will be immune to this virus in the future and, thus, will never acquire the same infection twice.”

Read more in: S. Bornholdt, M. H. Jensen, and K. Sneppen. “Emergence and Decline of Scientific Paradigms.” Physical Review Letters 106, 058701 (2011). DOI: 10.1103/PhysRevLett.106.058701

Feb 21

Reported by Nicola Jones, in Nature News, 15 February 2011, updated 17 February 2011.

TV star Watson is a step towards a new kind of search engine.

A superstar supercomputer can carry out powerful searches that may one day be of help to scientists. AP/Press Association Images

IBM’s supercomputer Watson is going up against top players of the US television quiz programme Jeopardy! this week, stirring up excitement in the artificial-intelligence community and prompting computer science departments across the country to gather and watch.

“It is, in my mind, a historic moment,” says Oren Etzioni, director of the Turing Center at the University of Washington, Seattle. “I watched Garry Kasparov playing Deep Blue. This absolutely ranks up there with that.”

Jeopardy! contestants are given clues in the form of answers and must come up with the right questions. Watson, with its 16-terabyte memory, is capable of tackling normal Jeopardy! clues — including all the puns, quips and ambiguities they typically contain. It dissects the clue, compares it against a ream of facts and rules gleaned from reading a raft of books (from encyclopaedias to the complete works of Shakespeare), and assigns probabilities to its answers before coming up with a response.

In the time it takes host Alex Trebek to read the clue to the human contestants, Watson comes up with an answer and decides whether it is confident enough to ring in. The match, which was filmed in advance in January, airs 14–16 February (see the programme’s Watson website).
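IBM has not published DeepQA’s internals, but the flow described above (generate candidate answers, score them, and buzz only when the best confidence clears a threshold) can be caricatured in a few lines of Python. Everything here, from the function names to the stand-in scorer and the threshold value, is invented for illustration.

```python
# Toy caricature of the buzz-or-stay-silent decision, not IBM's DeepQA.

def answer_clue(clue, candidate_scorers, buzz_threshold=0.5):
    """Pool (candidate, confidence) pairs from every scorer; buzz only if confident."""
    scored = {}
    for scorer in candidate_scorers:
        for candidate, score in scorer(clue):
            scored[candidate] = max(scored.get(candidate, 0.0), score)
    if not scored:
        return None                                   # nothing plausible: stay silent
    best, confidence = max(scored.items(), key=lambda kv: kv[1])
    return f"What is {best}?" if confidence >= buzz_threshold else None

# Stand-in evidence source with made-up confidences.
lookup = lambda clue: [("Toronto", 0.14), ("Chicago", 0.71)]
print(answer_clue("Its largest airport is named for a WWII hero", [lookup]))
# -> "What is Chicago?"
```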

Although it might sound like nothing more than a stunt, computer scientists say that Watson is an important advance in artificial intelligence, marking a shift that will create much better search engines and help scientists to keep up in their fields.

What Watson doesn’t do is attempt to mimic the human ability to use common sense, make leaps of logic or imagine the future, notes Patrick Winston of Massachusetts Institute of Technology in Cambridge. As a result, it has failed to capture his interest. “I’m planning to go to bed early. I’ll watch the re-runs,” he says.

Deep thought

The computer system is based on IBM’s DeepQA project, which aims to answer ‘natural-language’ questions in standard English, such as ‘Which nanotechnology companies are hiring on the West Coast?’ The trick is for it to both understand that type of query and provide a meaningful answer. “Good luck getting that from Google,” says Etzioni.

That goal is at the core of many computer-science endeavours, including the long-running artificial-intelligence project Cyc, started in 1984 and now run by Cycorp in Austin, Texas. One trouble with Cyc, says Etzioni, is that its database relies on human beings typing coded facts and knowledge into its system. The alternative is to train computers to learn by reading. There are several big projects in the works on this front — including the Never-Ending Language Learning system (NELL) at Carnegie Mellon University in Pittsburgh, Pennsylvania, and Etzioni’s KnowItAll system — most of which are part-funded by the US Defense Advanced Research Projects Agency.

What IBM has done that’s different, says Etzioni, is to focus on a very specific situation (the game of Jeopardy!), spend a lot of time on how to interpret cunning clues, create a database that is the equivalent of about a million books, and find some way to get the system’s performance to shoot up — it comes up with answers in seconds. IBM hasn’t released all the details of how Watson works, so how they have done this is not clear. But Etzioni guesses that the way it collates facts from its reading is similar to how his KnowItAll system approaches the problem. It’s impressive, notes Etzioni, how DeepQA has managed to basically achieve what Cyc set out to do decades ago, but in just a few years.

“Just like with Deep Blue, it’s really bringing together the state-of-the-art in hardware and software,” says Henry Kautz, a computer scientist at the University of Rochester, New York, and president of the Association for the Advancement of Artificial Intelligence in Menlo Park, California, who is also impressed by Watson.

Etzioni says he expects natural-language software to make a big dent in search applications over the next five years, although at the moment systems such as Watson aren’t ready for ‘prime time’: he notes that Microsoft bought a natural-language processing company called Powerset in 2008 for US$100 million, “but you don’t see Microsoft using it in any visible way”. Kautz agrees that systems as broad and powerful as Watson could be available for general use “surprisingly soon… Let’s say three to four years.”

Crying for help

Etzioni argues that a search engine that can deal with natural-language queries is necessary for scientists trying to keep up with the mass of knowledge now being generated in their field, so they can ask, say, “What are the top ten genes currently being studied in cancer research?”, rather than having to trawl through the literature to find out.

Others disagree. Canadian writer Malcolm Gladwell said in a recent discussion about the future of search technologies that current projects “are solving lots of problems that aren’t really problems… You cannot point to any area of intellectual activity or innovation or what have you that is today being compromised or hamstrung by some failure in their search technology. Can we honestly go to some scientist and say the reason we can’t cure cancer is you don’t have access to information about cancer research? No!”

Etzioni laughs at that. “To me, that’s as short-sighted as the famous statement that there’s only a world market for five computers,” he says — a statement that, ironically, is attributed to the longtime IBM chief Thomas Watson, after whom the supercomputer is named.

“There’s massive production of knowledge, particularly in the biological community, and researchers can’t keep up with it,” says Etzioni. “Applying these tools specifically for medical researchers to keep track of what’s relevant in what they’re interested in is a huge area of my field. It’s true we don’t yet have a killer app, but you talk to anybody and they’re crying out for help.”

Updated:

Watson won against the human contestants. IBM plans to donate its US$1 million winnings to charity. The final scores were: Watson $77,147, Ken Jennings $24,000 and Brad Rutter $21,600.

Read more in Nature | doi:10.1038/news.2011.95 as well as in Designing a computer that can process and understand natural language.

Feb 20

Reported by Chris Mooney, in Discover, 18 February 2011.

From the text of John Holdren’s recent congressional testimony on the science budget (also available here):

All told, this Budget proposes $66.8 billion for civilian research and development, an increase of $4.1 billion or 6.5 percent over the 2010 funding level in this category. But the Administration is committed to reducing the deficit even as we prime the pump of discovery and innovation. Accordingly, our proposed investments in R&D, STEM education, and infrastructure fit within an overall non-security discretionary budget that would be frozen at 2010 levels for the second year in a row. The Budget reflects strategic decisions to focus resources on those areas where the payoff for the American people is likely to be highest.

This is similar to what Chris Mooney argued with Meryl Comer in the Los Angeles Times in December: tough economic times are the times to invest in science, not cut it.

Holdren concludes:

Let me reiterate, in closing, the guiding principle underlying this Budget: America’s strength, prosperity, and global leadership depend directly on the investments we’re willing to make in R&D, in STEM education, and in infrastructure.

Investments in these domains are the ultimate act of hope, the source of the most important legacy we can leave. Only by sustaining them can we assure future generations of Americans a society and place in the world worthy of the history of this great Nation, which has been building its prosperity and global leadership on a foundation of science, technology, and innovation since the days of Jefferson and Franklin.

In hard times, you don’t give up on vision. You knuckle down, sure, but you also look ahead. (Meanwhile, Paul Krugman reminds us today that the budget debate is deeply misaligned because we’re so focused on cutting the smallest part of the budget, when the real issues are healthcare costs and tax revenue.)

Feb 18

Reported by Mark Anderson, in IEEE Spectrum, 14 February 2011.

Researchers upend understanding of olfactory organs with quantum tunneling experiment

Drosophila melanogaster: it doesn't just smell. It quantum tunnels. Image: Mr.checker via Wikimedia.

Flash memory, scanning tunneling microscopes…and a fly’s sense of smell. According to new research, the same strange phenomenon—quantum tunneling—makes all three possible. If confirmed, the discovery could pave the way for a new generation of artificial scents, from perfumes to pheromones—and, perhaps someday, artificial noses.

The conventional theory of smell holds that the nose’s chemical receptors—some 400 different kinds in a human nose—sense the presence of odorant molecules by a lock-and-key process that reads the odorant’s physical shape. That theory has some problems, though. For instance, ethanol (which smells like vodka) and ethanethiol (which smells like rotten eggs) have essentially the same shape, differing from each other by only a single atom. (Ethanol is C2H6O, and ethanethiol is C2H6S.)

Evidence has emerged over the past decade suggesting that at least part of a molecule’s scent comes from chemical receptors in the nose that pump current through the odorant molecule and cause it to vibrate in an identifiable way. Lacking a direct electrical hookup to the odorant, the nose’s receptors would likely transmit electrons via quantum tunneling, a well-studied process that allows electrons to hop across nonconducting regions if those regions are small enough. Tunneling is what allows charge to be stored in flash memory cells. It also forms the image in scanning tunneling microscopes and is a source of wasted power in microchips.
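For context, a standard textbook (WKB) estimate, added here rather than taken from the article, makes that distance sensitivity explicit: the probability that an electron of energy E crosses a barrier of height V0 and width d decays exponentially with the width,

```latex
T \approx e^{-2\kappa d},
\qquad
\kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}
```

where m is the electron mass and ħ the reduced Planck constant. A few tenths of a nanometre of extra barrier width can suppress tunneling by orders of magnitude.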

A group of four scientists from MIT and the Alexander Fleming Biomedical Sciences Research Center, in Vari, Greece, says it has proved the tunneling theory in fruit flies (Drosophila), a favorite lab specimen of geneticists. If the quantum “molecular vibrational” theory of smell is correct, then the flies should be able to smell the difference between molecules that contain regular hydrogen and the same molecules that contain its heavier isotope, deuterium. (The nucleus of regular hydrogen is a proton; the nucleus of deuterium is a proton plus a neutron.)

While regular and “deuterated” molecules have exactly the same shape, when set in motion by the receptors’ tiny tunneling current, the two would vibrate in very different patterns of molecular wobbles. A tunneling-based sense of smell should be able to detect the different vibrations. Deuterated molecules would, in other words, smell different to the flies.

According to the new research—published in this week’s Proceedings of the National Academy of Sciences—the flies appear to be able to smell the difference. The researchers set up a tiny maze with a T-junction. To the right, the molecule acetophenone (which has a sweet and flowery scent to human noses) filled the air. To the left, the air contained a version of acetophenone in which eight of the molecule’s hydrogens were swapped out for deuterium. Nearly 30 percent more flies went toward the regular acetophenone. Meanwhile, flies that had been genetically engineered to lack a sense of smell showed no preference.

The group repeated the experiment with a different molecule, octanol, and its deuterated cousin, and got the same result. In a third experiment, the researchers in Greece trained their flies to avoid deuterated octanol. The vibrations between carbon and deuterium in octanol, says study coauthor Luca Turin, a visiting scientist in biomedical engineering at MIT, are very similar to those between carbon and nitrogen in chemicals called nitriles. So if the vibration theory holds, some deuterated molecules should smell similar to nitriles, even though their shapes are worlds apart.

Indeed, when the trained flies were presented with a whiff of a nitrile, they avoided it. Conversely, flies conditioned to avoid nitriles also avoided the deuterated octanol. “The only thing the deuterium and nitrile have in common is vibration [pattern],” Turin says.

Andrew Horsfield, senior lecturer in the department of materials at Imperial College London, has been working on early applications of the vibrational theory that Turin’s group tested. Horsfield and his colleagues have developed an indium-arsenide nanowire detector that might crudely “smell” itself by tunneling electrons between special structures within the nanowire and reading off the vibrations produced. Horsfield’s group plans to extend this idea to smaller devices that can smell external molecules. So far the group has only been fine-tuning its nanowire setup, and Horsfield says its first “sniff” test might be more than a year away.

Horsfield says that the research by Turin’s group justifies his own group’s continued nanowire studies. “It’s very clear to me that this is a very important paper,” he says. “Time will prove that to be true.”

Feb 16

Reported by Duncan Geere, in Wired UK, 14 February 2011.

Researchers have discovered the first case of a direct transfer of a human DNA fragment to a bacterial genome. The guilty party? Gonorrhea.

A photomicrograph of a T3 colony of N. gonorrhoeae bacteria magnified at 100X. (Dr. Stephen J. Kraus/CDC)

Scientists have long known that genes can transfer between different bacteria, and even between bacteria and yeast cells, but biologists at Northwestern University’s Feinberg School of Medicine discovered that Neisseria gonorrhoeae, the bacterium responsible for gonorrhea, had stolen a sequence of DNA bases (adenine, cytosine, guanine and thymine) from an L1 DNA element found in humans.

Hank Seifert, a senior author of the paper describing the research, due to be published in mBio, said in a press release: “This has evolutionary significance because it shows you can take broad evolutionary steps when you’re able to acquire these pieces of DNA. The bacterium is getting a genetic sequence from the very host it’s infecting. That could have far reaching implications as far as how the bacteria can adapt to the host.”

Seifert also screened the bacterium that causes meningitis, Neisseria meningitidis, which is very similar to the gonorrhea bacterium at the genetic level. There was no sign of the human DNA signature, suggesting that the gene transfer occurred relatively recently.

What isn’t known yet is what the sequence actually does, and therefore whether the theft confers any evolutionary advantage on the bacteria. That’s what Seifert reckons he’ll focus on next, adding in the release: “Human DNA to a bacterium is a very large jump. This bacterium had to overcome several obstacles in order to acquire this DNA sequence. The next step is to figure out what this piece of DNA is doing.”

Feb 15

Reported by Ian Sample, science correspondent, in guardian.co.uk, 30 December 2010.

University refuses to remove from its website a student’s thesis revealing a flaw in the chip-and-pin security system of bank cards.

The thesis describes a flaw in chip-and-pin technology that allows criminals to use stolen bank cards. Photograph: Alex Segre/Alamy

A powerful bankers’ association has failed in its attempt to censor a student thesis after complaining that it revealed a loophole in bank card security.

The UK Cards Association, which represents major UK banks and building societies, asked Cambridge University to remove the thesis from its website, but the request was met with a blunt refusal.

In a letter to university authorities, UKCA chair Melanie Johnson – a former Labour MP who was economic secretary to the Treasury in Tony Blair’s government – demanded that the master’s thesis be “removed from public access immediately”.

The thesis by computer security student Omar Choudary, entitled “The smart card detective: a handheld EMV interceptor”, described a flaw in the chip-and-pin (personal identification number) security system that allows criminals to make fraudulent transactions with a stolen bank card using any pin they care to choose.

“It is the publication of this level of detail which we believe breaches the boundary of responsible disclosure. Essentially, it places in the public domain a blueprint for building a device which purports to exploit a loophole in the security of chip and PIN,” the letter states.

But in a reply to the UKCA, Ross Anderson, professor of security engineering at the university’s Computer Laboratory, refused to take down the thesis and said the loopholes had already been disclosed to bankers.

“You seem to think we might censor a student’s thesis, which is lawful and already in the public domain, simply because a powerful interest finds it inconvenient. This shows a deep misconception of what universities are and how we work. Cambridge is the University of Erasmus, of Newton and of Darwin; censoring writings that offend the powerful is offensive to our deepest values,” Anderson wrote.

Anderson and his colleagues discovered the loophole in chip-and-pin security in October 2009 and told the banks about the flaw later that year. They revealed the loophole publicly on the BBC’s Newsnight programme in February 2010.

In view of the UKCA’s letter, Anderson has authorised Choudary’s thesis to be published as a Computer Laboratory technical report.

“This will make it easier for people to find and cite, and will ensure that its presence on our website is permanent,” his reply to the UKCA states.

“It is outrageous that the banking industry should try to censor a student’s thesis even though it was lawful and already in the public domain,” Anderson told the Guardian.

“It was particularly surprising for its chair, Melanie Johnson, to make this request; as a former MP she must be aware of the Human Rights Act, and as a former Cambridge graduate student she should have a better understanding of this university’s culture.

“Her intervention was completely counterproductive for the banks who employ her: Omar’s thesis will now be read by thousands of people who would otherwise not have heard of it,” he said.

Feb 11

Reported in PhysOrg, 9 February 2011.

In a complex feat of nanoengineering, a team of scientists at Kyoto University and the University of Oxford has succeeded in creating a programmable molecular transport system, the workings of which can be observed in real time. The results, appearing in the latest issue of Nature Nanotechnology, open the door to the development of advanced drug delivery methods and molecular manufacturing systems.

Resembling a monorail train, the system relies on the self-assembly properties of DNA origami and consists of a 100 nm track together with a motor and fuel. Using atomic force microscopy (AFM), the research team was able to observe in real time as the motor traveled the full length of the track at a constant average speed of around 0.1 nm/s.

“The track and motor interact to generate forward motion in the motor,” explained Dr. Masayuki Endo of Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS). “By varying the distance between the rail ‘ties,’ for example, we can adjust the speed of this motion.”

The research team, including lead author Dr. Shelley Wickham at Oxford, anticipates that these results will have broad implications for the future development of programmable molecular assembly lines, leading to the creation of synthetic ribosomes.

“DNA origami techniques allow us to build nano- and meso-sized structures with great precision,” elaborated iCeMS Prof. Hiroshi Sugiyama. “We already envision more complex track geometries of greater length and even including junctions. Autonomous, molecular manufacturing robots are a possible outcome.”

Read More: The article, “Direct observation of stepwise movement of a synthetic molecular transporter” by Shelley F. J. Wickham, Masayuki Endo, Yousuke Katsuda, Kumi Hidaka, Jonathan Bath, Hiroshi Sugiyama, and Andrew J. Turberfield, was published online in the February 6, 2011 issue of Nature Nanotechnology (DOI:10.1038/NNANO.2010.284).

Feb 10

Reported in PhysOrg, 9 February 2011.

Engineers and scientists collaborating at Harvard University and the MITRE Corporation have developed and demonstrated the world’s first programmable nanoprocessor.

This is a false-color scanning electron microscopy image of a programmable nanowire nanoprocessor super-imposed on a schematic nanoprocessor circuit architecture. Credit: Photo courtesy of Charles M. Lieber, Harvard University.

The groundbreaking prototype computer system, described in a paper appearing today in the journal Nature, represents a significant step forward in the complexity of computer circuits that can be assembled from synthesized nanometer-scale components.

It also represents an advance because these ultra-tiny nanocircuits can be programmed electronically to perform a number of basic arithmetic and logical functions.

“This work represents a quantum jump forward in the complexity and function of circuits built from the bottom up, and thus demonstrates that this bottom-up paradigm, which is distinct from the way commercial circuits are built today, can yield nanoprocessors and other integrated systems of the future,” says principal investigator Charles M. Lieber, who holds a joint appointment at Harvard’s Department of Chemistry and Chemical Biology and School of Engineering and Applied Sciences.

The work was enabled by advances in the design and synthesis of nanowire building blocks. These nanowire components now demonstrate the reproducibility needed to build functional circuits, and do so at a size and material complexity difficult to achieve by traditional top-down approaches.

Moreover, the tiled architecture is fully scalable, allowing the assembly of much larger and ever more functional nanoprocessors.

“For the past 10 to 15 years, researchers working with nanowires, carbon nanotubes, and other nanostructures have struggled to build all but the most basic circuits, in large part due to variations in the properties of individual nanostructures,” says Lieber, the Mark Hyman Professor of Chemistry. “We have shown that this limitation can now be overcome and are excited about the prospects of exploiting the bottom-up paradigm of biology in building future electronics.”

An additional feature of the advance is that the circuits in the nanoprocessor operate using very little power, even allowing for their minuscule size, because their component nanowires contain transistor switches that are “nonvolatile.”

This means that unlike transistors in conventional microcomputer circuits, once the nanowire transistors are programmed, they do not require any additional expenditure of electrical power for maintaining memory.

“Because of their very small size and very low power requirements, these new nanoprocessor circuits are building blocks that can control and enable an entirely new class of much smaller, lighter weight electronic sensors and consumer electronics,” says co-author Shamik Das, the lead engineer in MITRE’s Nanosystems Group.

“This new nanoprocessor represents a major milestone toward realizing the vision of a nanocomputer that was first articulated more than 50 years ago by physicist Richard Feynman,” says James Ellenbogen, a chief scientist at MITRE.
