Oct 31

Reported by Alexis Madrigal in Wired Science

The quest by a group of math geeks to create a three-dimensional analogue for the mesmerizing Mandelbrot fractal has ended in success.

They call it the Mandelbulb. The 3-D renderings were generated by applying an iterative algorithm to a sphere. The same calculation is applied over and over to the sphere’s points in three dimensions. In spirit, that’s similar to how the original 2-D Mandelbrot set generates its infinite and self-repeating complexity.
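The article doesn't spell out the iteration, but the renderings it describes are commonly produced with the "power-8" triplex formula popularized by Daniel White and Paul Nylander: convert the point to spherical coordinates, raise the radius to the 8th power while multiplying both angles by 8, then add the starting point back in. A minimal membership test in Python, assuming that formulation:

```python
import math

def mandelbulb_escape(cx, cy, cz, power=8, max_iter=32, bailout=2.0):
    """Iterate the 'triplex' power map z -> z^power + c and report
    whether the starting point appears to stay bounded."""
    x = y = z = 0.0
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return False  # escaped: the point is outside the Mandelbulb
        # spherical coordinates of the current point
        theta = math.acos(z / r) if r > 0 else 0.0
        phi = math.atan2(y, x)
        # raise to the given power in spherical form, then add c
        rn = r ** power
        x = rn * math.sin(theta * power) * math.cos(phi * power) + cx
        y = rn * math.sin(theta * power) * math.sin(phi * power) + cy
        z = rn * math.cos(theta * power) + cz
    return True  # never escaped within max_iter: treat as inside

# The origin stays bounded forever; a distant point escapes immediately.
print(mandelbulb_escape(0, 0, 0), mandelbulb_escape(2, 2, 2))
```

Renderers treat the non-escaping points as the set and colour the rest by how quickly they escape, just as with the 2-D Mandelbrot set.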

If you were ever mesmerized by the Mandelbrot screen saver, the following images are worth a look. Each photo is a zoom on one of these Mandelbulbs.

Also, see the Wired Science gallery of fractals in nature

Read More: http://www.wired.com/wiredscience/2009/12/mandelbulb-gallery/#ixzz13yLLVcxH

Oct 31

Reported by Bruce Schneier in his blog at June 30, 2010

“For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data — data at rest — it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so it’s accessible — so each time I buy something, I don’t have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that — I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
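A toy illustration of that imbalance, using exhaustive key search as the attack model (only the attacker's cost is modelled here):

```python
def attacker_work(bits):
    """Worst-case number of keys to try in a brute-force search
    against a key of the given length."""
    return 2 ** bits

# One extra bit doubles the attacker's work...
assert attacker_work(129) == 2 * attacker_work(128)
# ...while doubling the key length squares it: exponential growth
# for the attacker against roughly linear cost for the defender.
print(attacker_work(256) == attacker_work(128) ** 2)  # True
```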

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.”

Read more in Schneier on Security

Oct 30

Reported by Rowan Hooper, news editor in NewScientist, 12 October 2010

A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time. No big deal, you might think. After all, computers have been beating humans at western chess for years, and when IBM’s Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity.

That hasn’t happened yet, but then western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.

The Mainichi Daily News reports that top women’s shogi player Ichiyo Shimizu took part in a match staged at the University of Tokyo, playing against a computer called Akara 2010. Akara is apparently a Buddhist term meaning 10^224, the newspaper reports, and the system beat Shimizu in six hours, over the course of 86 moves.

Japan’s national broadcaster, NHK, reported that Akara “aggressively pursued Shimizu from the beginning”. It’s the first time a computer has beaten a professional human player.

The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu’s defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.

Perhaps the association doesn’t mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player. Meanwhile, humans will have to face up to more flexible computers, capable of playing more than just one kind of game.

And IBM has now developed Watson, a computer designed to beat humans at the game show Jeopardy. Watson, says IBM, is “designed to rival the human mind’s ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers”. IBM say they have improved artificial intelligence enough that Watson will be able to challenge Jeopardy champions, and they’ll put their boast to the test soon, says The New York Times.

I’ll leave you with these wise and telling words from the defeated Shimizu: “It made no eccentric moves, and from partway through it felt like I was playing against a human,” Shimizu told the Mainichi Daily News. “I hope humans and computers will become stronger in the future through friendly competition.”

Read more in NewScientist blogs

Oct 30

by Laura A. Garrison and David S. Babcock in Complexity, Volume 14, Issue 6, pages 35–44, July/August 2009

Abstract: An agent-based C++ program was developed to model student drinking. Student-agents interact with each other and are randomly subjected to good or bad drinking experiences, to stories of other students’ experiences, and to peer pressure. The program outputs drinking rates as functions of time based on various input parameters. The intent of this project is to simulate alcohol use, eventually adding other drugs, and possibly creating a simulation game for use as an educational tool.

Read more: Article in pdf format

Oct 27

Reported by Larry Hardesty, MIT News Office

The max-flow problem, which is ubiquitous in network analysis, scheduling, and logistics, can now be solved more efficiently than ever.

The maximum-flow problem, or max flow, is one of the most basic problems in computer science: First solved during preparations for the Berlin airlift, it’s a component of many logistical problems and a staple of introductory courses on algorithms. For decades it was a prominent research subject, with new algorithms that solved it more and more efficiently coming out once or twice a year. But as the problem became better understood, the pace of innovation slowed. Now, however, MIT researchers, together with colleagues at Yale and the University of Southern California, have demonstrated the first improvement of the max-flow algorithm in 10 years.

The max-flow problem is, roughly speaking, to calculate the maximum amount of “stuff” that can move from one end of a network to another, given the capacity limitations of the network’s links. The stuff could be data packets traveling over the Internet or boxes of goods traveling over the highways; the links’ limitations could be the bandwidth of Internet connections or the average traffic speeds on congested roads.

More technically, the problem has to do with what mathematicians call graphs. A graph is a collection of vertices and edges, which are generally depicted as circles and the lines connecting them. The standard diagram of a communications network is a graph, as is, say, a family tree. In the max-flow problem, one of the vertices in the graph — one of the circles — is designated the source, where all the stuff comes from; another is designated the drain, where all the stuff is headed. Each of the edges — the lines connecting the circles — has an associated capacity, or how much stuff can pass over it.

Hidden flows

Such graphs model real-world transportation and communication networks in a fairly straightforward way. But their applications are actually much broader, explains Jonathan Kelner, an assistant professor of applied mathematics at MIT, who helped lead the new work. “A very, very large number of optimization problems, if you were to look at the fastest algorithm right now for solving them, they use max flow,” Kelner says. Outside of network analysis, a short list of applications that use max flow might include airline scheduling, circuit analysis, task distribution in supercomputers, digital image processing, and DNA sequence alignment.

Traditionally, Kelner explains, algorithms for calculating max flow would consider one path through the graph at a time. If it had unused capacity, the algorithm would simply send more stuff over it and see what happened. Improvements in the algorithms’ efficiency came from cleverer and cleverer ways of selecting the order in which the paths were explored.
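The path-at-a-time approach described above is the classic augmenting-path family of max-flow algorithms. A minimal sketch of one well-known member, Edmonds-Karp, which repeatedly finds the shortest augmenting path by breadth-first search and pushes flow along it; the toy network at the bottom is an illustrative assumption:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path with BFS
    and push as much flow as the path's bottleneck edge allows."""
    # residual capacities, with reverse edges initialised to 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for any path from source to sink with spare capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: the flow is maximal
        # reconstruct the path, find its bottleneck, update residuals
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network: s -> a -> t and s -> b -> t, each path carrying 3 units.
caps = {"s": {"a": 3, "b": 3}, "a": {"t": 3}, "b": {"t": 3}, "t": {}}
print(max_flow(caps, "s", "t"))  # 6
```

Successive refinements of this idea (cleverer path orderings, blocking flows) drove decades of improvements before the new linear-algebra approach.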

Graphs to grids

But Kelner, CSAIL grad student Aleksander Madry, math undergrad Paul Christiano, and Professors Daniel Spielman and Shanghua Teng of, respectively, Yale and USC, have taken a fundamentally new approach to the problem. They represent the graph as a matrix, which is math-speak for a big grid of numbers. Each node in the graph is assigned one row and one column of the matrix; the number where a row and a column intersect represents the amount of stuff that may be transferred between two nodes.

In the branch of mathematics known as linear algebra, a row of a matrix can also be interpreted as a mathematical equation, and the tools of linear algebra enable the simultaneous solution of all the equations embodied by all of a matrix’s rows. By repeatedly modifying the numbers in the matrix and re-solving the equations, the researchers effectively evaluate the whole graph at once. This approach, which Kelner will describe at a talk at MIT’s Stata Center on Sept. 28, turns out to be more efficient than trying out paths one by one.
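The paper itself models the graph as an electrical network, which is where the linear algebra enters: treat each edge as a resistor, inject current at the source, solve the graph's Laplacian system for node potentials, and read edge flows off as potential differences. The three-node example and the hand-reduced system below are illustrative assumptions, not taken from the paper:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Toy graph on nodes {s, a, t} with unit-resistance edges (s,a), (a,t), (s,t).
# Ground node t (potential 0) and inject one unit of current at s.
# The reduced Laplacian system over the unknown potentials [v_s, v_a] is:
L_reduced = [[2.0, -1.0],
             [-1.0, 2.0]]
current_in = [1.0, 0.0]
v_s, v_a = solve(L_reduced, current_in)
# Edge flows are the potential differences across the unit resistors.
print(round(v_s - v_a, 6))  # 0.333333  (flow s -> a)
print(round(v_a - 0.0, 6))  # 0.333333  (flow a -> t)
print(round(v_s - 0.0, 6))  # 0.666667  (flow s -> t; the two paths sum to 1)
```

The research result is that such Laplacian systems can be solved, and re-solved after updates, fast enough to beat exploring augmenting paths one at a time.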

If N is the number of nodes in a graph, and L is the number of links between them, then the execution time of the fastest previous max-flow algorithm was proportional to (N + L)^(3/2). The execution time of the new algorithm is proportional to (N + L)^(4/3). For a network like the Internet, which has hundreds of billions of nodes, the new algorithm could solve the max-flow problem hundreds of times faster than its predecessor.
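Dividing the two bounds shows where the speedup comes from: (N + L)^(3/2) / (N + L)^(4/3) = (N + L)^(1/6), so the advantage grows with the size of the network. A quick back-of-envelope check:

```python
# Ratio of the old bound to the new one: (N + L)**(3/2 - 4/3) = (N + L)**(1/6).
def speedup(n_plus_l):
    return n_plus_l ** (1.0 / 6.0)

for size in (10 ** 6, 10 ** 9, 10 ** 12):
    print(f"N + L = {size:.0e}: roughly {speedup(size):.0f}x faster")
```

At Internet scale (on the order of 10^12 nodes plus links) the ratio reaches the "hundreds of times faster" the article cites.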

The immediate practicality of the algorithm, however, is not what impresses John Hopcroft, the IBM Professor of Engineering and Applied Mathematics at Cornell and a recipient of the Turing Award, the highest honor in computer science. “My guess is that this particular framework is going to be applicable to a wide range of other problems,” Hopcroft says. “It’s a fundamentally new technique. When there’s a breakthrough of that nature, usually, then, a subdiscipline forms, and in four or five years, a number of results come out.”

Read more:

Paper: “Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs” (PDF)

by Jonathan Kelner

Oct 27

Reported By Stewart Mitchell in PCPro

Mobile phone security could come face-to-face with science fiction following a demonstration of biometric technology by scientists at the University of Manchester.

Mobile phones carry an ever-increasing amount of personal data, but PIN-code log-ins can prove weak – if they are applied at all – leaving those details vulnerable if a handset is stolen.

Facial recognition software could change that picture drastically.

“The idea is to recognise you as the user, and it does that by first taking a video of you, so it has your voice and lots of images for comparison,” said Phil Tresadern, lead researcher on the project.

Before letting anyone access the handset, the software examines an image of the person holding the handset, cross-referencing 22 geographical landmarks on the face with that of the owner.


“Existing mobile face trackers give only an approximate position and scale of the face,” said Tresadern. “Our model runs in real time and accurately tracks a number of landmarks on and around the face such as the eyes, nose, mouth and jaw line.”

The scientists claim their method is unrivalled for speed and accuracy and works on standard smartphones with front-facing cameras.

Facial recognition technology is nothing new, but squeezing something so computationally complicated onto a smartphone is quite an achievement.

“We have a demo that we have shown to potential partners that is running on Linux on a Nokia N900,” said Tresadern, adding that his team had to streamline the software to make it run on a mobile phone.

“We had to change some of the floating point calculations to fixed points with whole numbers to make it more efficient,” he said. “But other than that it ported across quite easily.”
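The article doesn't say which fixed-point representation the team chose; a common scheme is Q16.16, in which a real number is stored as a scaled integer and all arithmetic stays in fast integer operations. A hypothetical sketch of the idea:

```python
# Q16.16 fixed point: store x as round(x * 2**16) and do only integer math.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert a float to its Q16.16 integer representation."""
    return round(x * SCALE)

def fixed_mul(a, b):
    # the product of two Q16.16 numbers carries 32 fractional bits,
    # so shift back down to 16 after multiplying
    return (a * b) >> FRAC_BITS

def to_float(a):
    return a / SCALE

a, b = to_fixed(1.5), to_fixed(2.25)
print(to_float(fixed_mul(a, b)))  # 3.375, computed with integer ops only
```

On a handset, the same trick lets the landmark-fitting arithmetic run on integer shifts and multiplies instead of slower floating-point hardware.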

The Manchester team said the technology has already attracted interest from “someone interested in putting it into the operating system for Nokia” and that the software could be licensed by app developers for any mobile device.

Read more: Facial recognition security to keep handset data safe | PC Pro http://www.pcpro.co.uk/news/security/362287/facial-recognition-security-to-keep-handset-data-safe#ixzz13aZsL2mn

Oct 27

(PhysOrg.com) — While some power companies scour the globe for steady winds to drive giant turbines, a biologist is turning to lowly termites and their lofty mounds to understand how to harness far more common intermittent breezes, seeking ideas that could drive nature-inspired building systems whose “sloshing” air movement could provide ventilation and cooling.

Photo: biologist J. Scott Turner presenting his termite-mound research at a talk sponsored by Harvard’s Wyss Institute for Biologically Inspired Engineering. Credit: Rose Lincoln/Harvard Staff Photographer

J. Scott Turner, a biology professor at the State University of New York, brought his termite-mound studies to Harvard’s School of Engineering and Applied Sciences Wednesday (Oct. 20) in a talk sponsored by Harvard’s Wyss Institute for Biologically Inspired Engineering.

Turner’s topic, “New Concepts in Termite-Inspired Design,” presented the results of years of research into the structure of termite mounds in Namibia, including an extraordinary effort to fill a mound’s tunnels with plaster and then slice off millimeter-thick layers to create a cross-sectional map of the insides.

Turner’s work debunked some 50-year-old assertions that termite mounds’ complex tunnel structure works to circulate air in an orderly manner from a nest chamber low in the mound, up a central chimney away from the nest, and, as the air cools, down small outer tunnels to the bottom of the nest.

That understanding of termite mound function has already inspired human architecture — including a building in Zimbabwe designed without air conditioning that instead uses wind energy and heat-storing materials to maintain a moderate temperature. The only problem with these sorts of termite-inspired designs, Turner said, is that his studies show that the mounds actually don’t work that way.

Deploying temperature and humidity gauges, and armed with tracer gases, Turner found that a termite mound does not regulate interior temperature. The temperature inside the mound was not appreciably different from that of the surrounding ground, rising during some parts of the year and then falling. In addition, he found that the air in the nest didn’t really circulate. Instead, it was stable, with cooler air in the nest low in the mound and hotter air in the mound’s upper portions and chimney.

The same fluctuation wasn’t found with humidity, which was maintained at roughly 80 percent year-round. But it isn’t the mound or its design that does that job, Turner said. Instead, the termites actively move water within and out of the mound as they transport water-soaked earth. In addition, the symbiotic fungi that live in the mound with the termites also help to regulate humidity. The fungi, which help the termites digest tough cellulose in the plant material the insects bring into the nest, form complex, folded bodies that absorb excess humidity during wet months and release water during dry months, Turner said. This helps to maintain a stable humidity, dry enough to keep moisture-loving fungal competitors at bay.

“They actively regulate nest moisture, but not through design of the mound,” Turner said.

So, if the mound itself doesn’t regulate heat or humidity, Turner and his collaborators wondered, what does the elaborate branching system of tunnels do?

The answer came on further investigation, when researchers found that the tunnels work as an air exchange system. The smaller tunnels on the mound’s surface, used by workers to move in and out of the mound, also serve to mute the gusty, turbulent air outside the mound. Those high-energy gusty breezes are blocked in the surface tunnels, allowing more gentle air movements to penetrate the mound in a pulsing, in-and-out process akin to a breathing human lung. Through this process, fresh air is exchanged into the deepest part of the mound, “sloshing” in and out in a tidal movement that refreshes the mound’s air.

“We think these mounds are quite efficient manipulators of transient energy in turbulent wind,” Turner said. “That’s how the mound breathes.”

That in-and-out sloshing, Turner said, provides a model for building design. Though most people would refuse to live in a building resembling a termite mound, the tunnel structure could be replicated in building materials used in exterior surfaces, saving energy through passive air exchange systems in everyday, ordinary buildings.

“There might be some really interesting architectural opportunities,” Turner said.

Provided by Harvard University (news : web)

Oct 26

(PhysOrg.com) — Bumblebees can find the solution to a complex mathematical problem which keeps computers busy for days.

Scientists at Queen Mary, University of London and Royal Holloway, University of London have discovered that bees learn to fly the shortest possible route between flowers even if they discover the flowers in a different order. Bees are effectively solving the ‘Travelling Salesman Problem’, and these are the first animals found to do this.

The travelling salesman must find the shortest route that allows him to visit every location. Computers solve the problem by comparing the lengths of all possible routes and choosing the shortest. Bees, however, manage it without computer assistance, using a brain the size of a grass seed.
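The brute-force strategy described above can be sketched directly; the hive and flower coordinates below are made up for illustration, and the factorial blow-up in permutations is exactly why the problem keeps computers busy:

```python
from itertools import permutations
from math import dist

def route_length(start, order):
    """Total length of a round trip: start, each stop in order, then home."""
    points = [start, *order, start]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

def shortest_route(start, flowers):
    # Exhaustive search: try every visiting order and keep the shortest.
    # With n flowers this examines n! orders, which explodes quickly.
    return min(permutations(flowers), key=lambda o: route_length(start, o))

hive = (0, 0)
flowers = [(0, 5), (5, 5), (5, 0), (2, 1)]
print(shortest_route(hive, flowers))
```

The bees, by contrast, appear to converge on the short route after a handful of exploratory flights, without anything like this enumeration.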

Professor Lars Chittka from Queen Mary’s School of Biological and Chemical Sciences said: “In nature, bees have to link hundreds of flowers in a way that minimises travel distance, and then reliably find their way home – not a trivial feat if you have a brain the size of a pinhead! Indeed such travelling salesman problems keep supercomputers busy for days. Studying how bee brains solve such challenging tasks might allow us to identify the minimal neural circuitry required for complex problem solving.”

The team used computer controlled artificial flowers to test whether bees would follow a route defined by the order in which they discovered the flowers or if they would find the shortest route. After exploring the location of the flowers, bees quickly learned to fly the shortest route.

As well as enhancing our understanding of how bees move around the landscape pollinating crops and wild flowers, this research, which is due to be published in The American Naturalist this week, has other applications. Our lifestyle relies on networks such as traffic on the roads, information flow on the web and business supply chains. By understanding how bees can solve their problem with such a tiny brain, we can improve our management of these everyday networks without needing lots of computer time.

Co-author and Queen Mary colleague, Dr. Mathieu Lihoreau adds: “There is a common perception that smaller brains constrain animals to be simple reflex machines. But our work with bees shows advanced cognitive capacities with very limited neuron numbers. There is an urgent need to understand the neuronal hardware underpinning animal intelligence, and relatively simple nervous systems such as those of insects make this mystery more tractable.”

Provided by Queen Mary, University of London (news : web)

Oct 24

Reported by Clive Cookson in FT

A new photonic chip that works on light rather than electricity has been built by an international research team, paving the way for the production of ultra-fast quantum computers with capabilities far beyond today’s devices.

Future quantum computers will, for example, be able to pull important information out of the biggest databases almost instantaneously. As the amount of electronic data stored worldwide grows exponentially, the technology will make it easier for people to search with precision for what they want.

An early application will be to investigate and design complex molecules, such as new drugs and other materials, that cannot be simulated with ordinary computers. More general consumer applications should follow.

Jeremy O’Brien, director of the UK’s Centre for Quantum Photonics, who led the project, said many people in the field had believed a functional quantum computer would not be a reality for at least 25 years.

“However, we can say with real confidence that, using our new technique, a quantum computer could, within five years, be performing calculations that are outside the capabilities of conventional computers,” he told the British Science Festival, as he presented the research.

The breakthrough, published today in the journal Science, means data can be processed according to the counterintuitive rules of quantum physics that allow individual subatomic particles to be in several places at the same time.

This property will enable quantum computers to process information in quantities and at speeds far beyond conventional supercomputers. But formidable technical barriers must be overcome before quantum computing becomes practical.

The team, from the University of Bristol in the UK, Tohoku University in Japan, the Weizmann Institute in Israel and the University of Twente in the Netherlands, say they have overcome an important barrier by making a quantum chip that can work at ordinary temperatures and pressures, rather than in the extreme conditions required by other approaches.

The immense promise of quantum computing has led governments and companies worldwide to invest hundreds of millions of dollars in the field.

Big spenders, including the US defence and intelligence agencies concerned with national security, and governments – such as Canada, Australia and Singapore – see quantum electronics as the foundation for IT industries in the mid-21st century.

Computing’s great leap forward

Why quantum computing?

To make use of properties that emerge on an ultra-small scale. “Entanglement” – the ability of subatomic particles to influence one another at a distance – and “superposition” – the fact that a particle does not have a definite location and can be in several places at once – are the two most important properties.

Yes, it’s weird but why is it useful?

Because quantum particles can do very many things at the same time, unlike an electronic “bit” in conventional computing. The use of quantum particles, or “qubits”, permits parallel computing on a scale that would not be possible with conventional electronics.

What particles are you talking about?

Many scientists are working with atoms or ions trapped in ultra-cold conditions. But the latest discovery by the Bristol-led team uses photons – light particles.

How does a quantum chip actually work?

There are several models. The Bristol version sends “entangled” photons down networks of circuits in a silicon chip. The particles perform a co-ordinated “quantum walk”, whose outcome represents the results of a calculation.
Of course, special software and input-output devices will have to be developed to make practical use of the device.
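The article doesn't detail the photonic walk, but the discrete-time "coined" quantum walk on a line is the textbook analogue and is easy to simulate classically; this toy sketch simply tracks complex amplitudes (it demonstrates the model, not a quantum speedup):

```python
import math

def quantum_walk(steps):
    """Discrete-time Hadamard-coined quantum walk on the integer line,
    starting at position 0 with coin state |0> (coin 0 steps left,
    coin 1 steps right)."""
    h = 1 / math.sqrt(2)
    amp = {(0, 0): 1.0 + 0j}  # (position, coin) -> complex amplitude
    for _ in range(steps):
        new = {}
        for (pos, coin), a in amp.items():
            # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
            for new_coin, factor in ((0, h), (1, h if coin == 0 else -h)):
                step = -1 if new_coin == 0 else 1
                key = (pos + step, new_coin)
                new[key] = new.get(key, 0) + a * factor
        amp = new
    # marginal probability of each position (coin traced out)
    prob = {}
    for (pos, _), a in amp.items():
        prob[pos] = prob.get(pos, 0) + abs(a) ** 2
    return prob

p = quantum_walk(20)
print(f"total probability: {sum(p.values()):.6f}")  # 1.000000 (unitary evolution)
```

Unlike a classical random walk, whose distribution is a bell curve peaked at the origin, the quantum walk's probability spreads ballistically and piles up far from the centre, which is what makes walk-based computation interesting.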

Oct 23

Science magazine Discover has published a nice article on poker called Big Game Theory. While the piece, written by Jennifer Ouellette, is aimed at people who are not familiar with the game, it does contain some enlightening parts for experienced players too.

“What are so many physicists doing playing poker for hours on end?” Ouellette asks; and to start finding the answers, she doesn’t have to look further than her husband, Caltech cosmologist and “poker fiend” Sean Carroll. But perhaps the best poker-playing physicist right now is Michael Binger: WSOP Main Event 3rd place in 2006, $6.5 million in tournament cashes, and a physics Ph.D. from Stanford University.

Another one combining poker success and a career in physics is WSOP winner Marcel Vonk, who sums it up pretty nicely in two quotes, “Both physics and poker attract people who like to solve multifaceted problems,” and, “The skills required are similar: mathematical abilities, the ability to spot patterns and predict things from them, the patience to sit down for a long time until you finally achieve your goal, and the ability to say, ‘Oh well’, and start over when such an attempt fails miserably.”

We believe that many players, physicists or not, may find that quote very useful…

It’s also interesting to note that the pioneer of game theory, mathematician John von Neumann, was fascinated by poker, and in particular by the art of the bluff, saying, “Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do.”

Read the whole thing on Discover Magazine.
