Nov 07

Reported by Jesse Emspak, New Scientist, 4 November 2011.

Chalk up another security measure that hackers can break.

The encryption protocol called Triple Data Encryption Standard, or 3DES, is supposed to be unbreakable – or at least not breakable without a lot of computing time and power. Because of this, many contactless smart cards – London’s Oyster Card, as well as cards used to store money and passes for mass transit systems in Chicago, Seattle and elsewhere – rely on 3DES to protect users’ accounts.

But Christof Paar at Ruhr-University Bochum has led a team that broke 3DES using a low-cost system and just a few hours of work.
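3DES strengthens the original DES by running it three times in an encrypt-decrypt-encrypt (EDE) sequence under two or three keys. The sketch below shows only that composition: a trivial XOR “cipher” stands in for DES, and the key values are invented for illustration, so this has none of DES’s actual structure.

```python
# Illustration of the 3DES encrypt-decrypt-encrypt (EDE) composition.
# A one-byte XOR "cipher" stands in for DES; real DES is far more complex.

def toy_encrypt(block: int, key: int) -> int:
    return block ^ key  # stand-in for DES encryption

def toy_decrypt(block: int, key: int) -> int:
    return block ^ key  # XOR is its own inverse

def triple_ede_encrypt(block: int, k1: int, k2: int, k3: int) -> int:
    # 3DES: C = E_k3(D_k2(E_k1(P)))
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

def triple_ede_decrypt(block: int, k1: int, k2: int, k3: int) -> int:
    # Inverse: P = D_k1(E_k2(D_k3(C)))
    return toy_decrypt(toy_encrypt(toy_decrypt(block, k3), k2), k1)

plaintext = 0x42
ct = triple_ede_encrypt(plaintext, 0x1F, 0xA7, 0x3C)
assert triple_ede_decrypt(ct, 0x1F, 0xA7, 0x3C) == plaintext
```

Setting all three keys equal collapses EDE to a single encryption, which is why 3DES hardware can interoperate with legacy single-DES systems.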

Oct 28

Reported by Mark Brown, Wired UK, 26 Oct. 2011.

Copiale cipher. (University of Southern California and Uppsala University)

Computer scientists from Sweden and the United States have applied modern-day statistical translation techniques – the sort used in Google Translate – to decode a 250-year-old secret message.

The original document, nicknamed the Copiale Cipher, was written in the late 18th century and found in the East Berlin Academy after the Cold War. It has since been kept in a private collection, and the 105-page, slightly yellowed tome had withheld its secrets until now.

But this year, University of Southern California Viterbi School of Engineering computer scientist Kevin Knight — an expert in translation, not so much in cryptography — and colleagues Beáta Megyesi and Christiane Schaefer of Uppsala University in Sweden, tracked down the document, transcribed a machine-readable version and set to work cracking the centuries-old code.
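The team’s actual attack relied on statistical language models to guide decipherment. As a far simpler illustration of letter statistics doing the work, the sketch below breaks a Caesar shift by scoring every candidate shift against approximate English letter frequencies; the frequency table and the test sentence are illustrative only.

```python
# Break a Caesar shift by scoring all 26 candidate shifts against
# approximate English letter frequencies (values in percent).

FREQ = {'a': 8.2, 'b': 1.5, 'c': 2.8, 'd': 4.3, 'e': 12.7, 'f': 2.2,
        'g': 2.0, 'h': 6.1, 'i': 7.0, 'j': 0.15, 'k': 0.77, 'l': 4.0,
        'm': 2.4, 'n': 6.7, 'o': 7.5, 'p': 1.9, 'q': 0.095, 'r': 6.0,
        's': 6.3, 't': 9.1, 'u': 2.8, 'v': 0.98, 'w': 2.4, 'x': 0.15,
        'y': 2.0, 'z': 0.074}

def shift(text: str, k: int) -> str:
    # Rotate each letter k places; leave spaces and punctuation alone.
    return ''.join(chr((ord(c) - 97 + k) % 26 + 97) if c.isalpha() else c
                   for c in text.lower())

def score(text: str) -> float:
    # Higher score = letter distribution closer to English.
    return sum(FREQ.get(c, 0) for c in text if c.isalpha())

def break_caesar(ciphertext: str) -> str:
    # The shift whose output looks most like English wins.
    return max((shift(ciphertext, k) for k in range(26)), key=score)

ct = shift("statistical techniques can reveal the plaintext", 7)
print(break_caesar(ct))  # recovers the original sentence
```

Real decipherment work like the Copiale project scales this idea up: instead of single-letter frequencies, it scores candidate decodings against full statistical language models.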

Sep 27

Reported by Ed Yong, in Nature News (doi:10.1038/news.2011.557), 26 Sep. 2011.

The name's Coli. Escherichia coli.

For millennia, people have written secret messages in invisible ink, which could only be read under certain lights or after developing with certain chemicals. Now, scientists have come up with a way of encoding messages in the colours of glowing bacteria.

The technique, dubbed steganography by printed arrays of microbes (SPAM), creates messages that can be sent through the post, unlocked with antibiotics and deciphered using simple equipment. It is described in the Proceedings of the National Academy of Sciences [1].

For years, scientists have been able to encode messages in biological molecules such as DNA or proteins. Biologist Craig Venter wrote his name, along with several quotations, into the DNA of the partially synthetic bacteria that he unveiled last year [2].
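One way to picture a colour-based code like SPAM’s: encode each character as an ordered pair of fluorescent colours, so that seven colours yield 49 pairs, enough for letters and digits. The colour list and the character mapping below are invented for illustration and are not the paper’s actual alphabet.

```python
# Hypothetical pairing code in the spirit of SPAM: each character maps
# to an ordered pair of fluorescent colours. Colour names and mapping
# are invented for illustration; 7 colours give 7*7 = 49 ordered pairs.
from itertools import product

COLOURS = ["red", "orange", "yellow", "green", "cyan", "blue", "violet"]
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,!?'\"-"  # 44 symbols

PAIRS = list(product(COLOURS, repeat=2))           # 49 ordered pairs
ENCODE = dict(zip(ALPHABET, PAIRS))                # char -> colour pair
DECODE = {pair: ch for ch, pair in ENCODE.items()} # colour pair -> char

def encode(message: str) -> list:
    return [ENCODE[ch] for ch in message.lower()]

def decode(pairs: list) -> str:
    return ''.join(DECODE[p] for p in pairs)

msg = "attack at dawn"
assert decode(encode(msg)) == msg
```

In the reported scheme the “printing” is biological: the colours come from engineered bacterial strains, and the right antibiotic determines which strains grow and therefore which message appears.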

Sep 11

Reported by Jamie Condliffe, in New Scientist, Sep. 2011.

It is another escalation in the computer security arms race. Software that can uncover all of a person’s online activity could, in the hands of the police, put more sex offenders behind bars – but it may also be exploited to develop new ways of avoiding being caught.

Researchers from Stanford University in California have managed to bypass the encryption on a PC’s hard drive to find out what websites a user has visited and whether they have any data stored in the cloud.

Dec 14

Reported by Celeste Biever in New Scientist, 13 December 2010.

A competition to find a replacement for one of the gold-standard computer security algorithms used in almost all secure online transactions has just heated up.

The list of possibilities for Secure Hash Algorithm-3, or SHA-3, has been narrowed down to five finalists. They now face the onslaught of an international community of “cryptanalysts” – who will analyse the algorithms for weaknesses – before just one is due to be selected as the winner in 2012.

The competition, which is being run by the US National Institute of Standards and Technology in Gaithersburg, Maryland, is a huge deal for cryptographers and cryptanalysts alike. “These are incredibly competitive people. They just love this,” says William Burr of NIST. “It’s almost too much fun. For us, it’s a lot of work.”

The need for the competition dates back to 2004 and 2005, when Chinese cryptanalyst Xiaoyun Wang shocked cryptographers by revealing flaws in the algorithm SHA-1, the current gold-standard “hash algorithm”, which is relied upon for almost all online banking transactions, digital signatures, and the secure storage of some passwords, such as those used to grant access to email accounts.

Diversity of designs

Hash algorithms turn files of almost any length into a fixed-length string of bits called a hash. Under SHA-1, it was believed that finding two files that produce the same hash would require millions of years’ worth of computing power, but Wang found a shortcut, raising the possibility that online transactions could one day be rendered insecure.
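The fixed-length property is easy to demonstrate with Python’s standard hashlib, which includes SHA-1:

```python
# Inputs of any length map to a fixed-length digest; SHA-1 always
# yields 160 bits (20 bytes), regardless of input size.
import hashlib

short = hashlib.sha1(b"hi").hexdigest()
long_ = hashlib.sha1(b"x" * 1_000_000).hexdigest()

assert len(short) == len(long_) == 40   # 40 hex chars = 160 bits

# Any small change in the input produces an unrelated digest:
assert hashlib.sha1(b"hello").hexdigest() != hashlib.sha1(b"hellp").hexdigest()
```

Because infinitely many inputs share each 160-bit digest, collisions must exist; security rests on them being computationally infeasible to find, which is exactly the guarantee Wang’s shortcut undermined.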

So in 2007, NIST launched a competition to find a replacement.

Submissions closed in 2008, by which time NIST had received 64 entries “of widely varying quality”, says Burr. In July 2009, NIST pruned the list to 14 that warranted further consideration.

Then, on 9 December, he announced that NIST had settled on just five finalists.

“We picked five finalists that seemed to have the best combination of confidence in the security of the algorithm and their performance on a wide range of platforms” such as desktop computers and servers, Burr told New Scientist.

“We also gave some consideration to design diversity,” he says. “We wanted a set of finalists that were different internally, so that a new attack would be less likely to damage all of them, just as biological diversity makes it less likely that a single disease can wipe out all the members of a species.”

Sponge construction

The finalists include BLAKE, devised by a team led by Jean-Philippe Aumasson of the company Nagravision in Cheseaux, Switzerland; and Skein, the brainchild of a team led by famous computer security expert and blogger Bruce Schneier of Mountain View, California.

All of the finalists incorporate new design ideas that have arisen over the last few years, says Burr.

Hash algorithms start by turning a document into a string of 1s and 0s. Then over multiple cycles these bits are shuffled around, manipulated and either condensed down or expanded out to produce the final string, or hash.

One novel idea, called the “sponge hash construction”, does this by “sucking up” the original document and then later entering a “squeezing state” in which bits are “wrung out” to produce a final hash, Burr says. One of the finalists, an algorithm called Keccak, devised by a team led by Guido Bertoni of STMicroelectronics, makes a particular point of using this method.
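As a rough illustration of the absorb-and-squeeze idea, here is a toy sponge over an 8-byte state with an invented mixing permutation. It only mimics the shape of the construction; real Keccak permutes a much larger state (1,600 bits in common configurations) and, unlike this sketch, is cryptographically strong.

```python
# Toy sponge construction: absorb input into the state rate-bytes at a
# time, permute, then squeeze output from the same state. The mixing
# permutation is invented for illustration and is NOT secure.

RATE, CAPACITY = 4, 4              # state = rate + capacity bytes
STATE_SIZE = RATE + CAPACITY

def permute(state: bytes) -> bytes:
    # Invented mixing step: multiply, add a neighbour, then rotate.
    out = bytearray(state)
    for i in range(STATE_SIZE):
        out[i] = (out[i] * 31 + out[(i + 1) % STATE_SIZE] + i) % 256
    return bytes(out[1:] + out[:1])

def sponge_hash(message: bytes, out_len: int = 8) -> bytes:
    state = bytes(STATE_SIZE)
    # Pad so the message fills whole rate-sized blocks.
    message += b"\x80" + b"\x00" * (-(len(message) + 1) % RATE)
    # Absorbing phase: XOR each block into the rate portion, then permute.
    for i in range(0, len(message), RATE):
        block = message[i:i + RATE]
        state = bytes(s ^ b for s, b in zip(state[:RATE], block)) + state[RATE:]
        state = permute(state)
    # Squeezing phase: read rate bytes, permute, repeat until enough output.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = permute(state)
    return out[:out_len]

digest = sponge_hash(b"hello sponge")
assert len(digest) == 8
assert sponge_hash(b"hello sponge") == digest   # deterministic
```

The capacity bytes are never output directly, which is the design point of a sponge: an attacker observing the hash sees only the rate portion of the internal state.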

‘Brilliant woman’

The five teams have until 16 January 2011 to tweak their algorithms. Then there will be a year in which cryptanalysts are expected to attempt to break these algorithms. On the basis of these, and its own analyses, NIST will then choose the winner in 2012.

So will Wang, the cryptanalyst who attacked the initial SHA, be among those attempting to break the algorithms that are left? “We assume she may,” says Burr. “She is certainly a brilliant woman. But we hope to pick something that is good enough that she will fail this time.”

As well as finding a gold-standard algorithm, Burr is excited about the ability of such competitions to further cryptographic knowledge. The idea to use a competition to select the algorithm was inspired by the competition that led to the Advanced Encryption Standard (AES) used by the US government.

“There is a general sense that the AES competition really improved what the research community knew about block ciphers,” says Burr. “I think the same sense here is that we are really learning a lot about hash functions.”

Oct 31

Reported by Bruce Schneier in his blog, June 30, 2010.

“For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data — data at rest — it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so it’s accessible — so each time I buy something, I don’t have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that — I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.”
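The key-length arithmetic in the quoted essay can be made concrete: brute-forcing an n-bit key takes about 2^n trials, so one extra bit doubles the attacker’s search, while doubling the key length squares it.

```python
# Attacker's brute-force workload grows as 2**n in the key length n.
def brute_force_trials(key_bits: int) -> int:
    return 2 ** key_bits

# One extra bit doubles the search space...
assert brute_force_trials(57) == 2 * brute_force_trials(56)
# ...while doubling the key length squares it.
assert brute_force_trials(112) == brute_force_trials(56) ** 2
```

The defender, by contrast, pays only a modest per-bit cost to encrypt; that asymmetry is the mathematical imbalance the essay describes, and it is exactly what computer security, as opposed to cryptography, lacks.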

Read more in Schneier on Security
