Oct 10

Reported by Neha Okhandiar, Queen Mary University of London News, 21 August 2013.

Certain types of video games can help to train the brain to become more agile and improve strategic thinking, according to scientists from Queen Mary University of London and University College London (UCL).

The researchers recruited 72 volunteers and measured their ‘cognitive flexibility’, described as a person’s ability to adapt and switch between tasks, and to think about multiple ideas at a given time to solve problems.

Two groups of volunteers were trained to play different versions of a real-time strategy game called StarCraft, a fast-paced game in which players have to construct and organise armies to battle an enemy. A third group played a life simulation video game called The Sims, which does not require much memory or tactical thinking.

Jan 08

Reported by Brian Owens (on behalf of Eugenie Samuel Reich), in Nature blogs, 06 Jan 2012.

An Irish mathematician has used a complex algorithm and millions of hours of supercomputing time to solve an important open problem in the mathematics of Sudoku, the game popularized in Japan that involves filling out a 9×9 grid of squares with the numbers 1–9 according to certain rules.

Gary McGuire of University College Dublin shows, in a proof posted online on 1 January, that the minimum number of clues – or starting digits – needed to complete a puzzle is 17 (see the sample puzzle in McGuire’s paper), as puzzles with 16 clues or fewer do not have a unique solution. By comparison, most newspaper puzzles have around 25 clues, with the difficulty of the puzzle decreasing as more clues are given.
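
What “a unique solution” means computationally is easy to state: a solver that counts completions must find exactly one. The Python sketch below is a minimal version of that uniqueness test; it is an illustration only, not McGuire’s method, which avoids brute force with a much more efficient hitting-set search over “unavoidable sets”.

```python
# Minimal sketch: count a puzzle's solutions by backtracking, stopping as
# soon as a second one turns up. A proper Sudoku has a count of exactly 1;
# McGuire's result implies that no 16-clue grid ever does.

def count_solutions(grid, limit=2):
    """Count completions of a flat 81-cell grid (0 = empty), up to `limit`."""
    if 0 not in grid:
        return 1  # no empty cells left: one complete solution found
    idx = grid.index(0)
    r, c = divmod(idx, 9)
    total = 0
    for d in range(1, 10):
        clashes = (
            any(grid[r * 9 + j] == d for j in range(9)) or      # same row
            any(grid[i * 9 + c] == d for i in range(9)) or      # same column
            any(grid[(r // 3 * 3 + i) * 9 + (c // 3 * 3 + j)] == d
                for i in range(3) for j in range(3))            # same 3x3 box
        )
        if not clashes:
            grid[idx] = d
            total += count_solutions(grid, limit - total)
            grid[idx] = 0
            if total >= limit:
                break  # a second solution exists: the puzzle is not unique
    return total
```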


Feb 21

Reported by Nicola Jones, in Nature News, 15 February 2011, updated 17 February 2011.

TV star Watson is a step towards a new kind of search engine.

A superstar supercomputer can carry out powerful searches that may one day be of help to scientists. AP/Press Association Images

IBM’s supercomputer Watson is going up against top players of the US television quiz programme Jeopardy! this week, stirring up excitement in the artificial-intelligence community and prompting computer science departments across the country to gather and watch.

“It is, in my mind, a historic moment,” says Oren Etzioni, director of the Turing Center at the University of Washington, Seattle. “I watched Garry Kasparov playing Deep Blue. This absolutely ranks up there with that.”

Jeopardy! contestants are given clues in the form of answers and must work out the right questions. Watson, with its 16-terabyte memory, is capable of tackling normal Jeopardy! clues — including all the puns, quips and ambiguities they typically contain. It dissects the clue, compares it against a ream of facts and rules that it has gleaned from reading a raft of books (from encyclopaedias to the complete works of Shakespeare), and assigns probabilities to its answers before coming up with a response.

In the time it takes host Alex Trebek to read the clue to the human contestants, Watson comes up with an answer and decides whether it is confident enough to ring in. The match, which was filmed in advance in January, airs 14–16 February (see the programme’s Watson website).
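
IBM has not published DeepQA’s internals, but the flow just described (generate candidate answers, score them against evidence, and ring in only when confidence clears a threshold) can be caricatured in a few lines. Everything below is hypothetical, a sketch of the decision rule rather than Watson’s actual pipeline.

```python
# Toy sketch of a generate/score/threshold answering loop. All names and
# numbers here are made up for illustration; Watson's real pipeline is
# far larger and not publicly documented.
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    evidence_score: float  # aggregate support from retrieved passages

def best_response(candidates: list[Candidate],
                  buzz_threshold: float = 0.7) -> str | None:
    """Pick the top-scoring candidate; 'ring in' only if confident enough."""
    if not candidates:
        return None
    top = max(candidates, key=lambda c: c.evidence_score)
    # Normalise raw scores into pseudo-probabilities over the candidate pool.
    total = sum(c.evidence_score for c in candidates)
    confidence = top.evidence_score / total if total else 0.0
    return top.answer if confidence >= buzz_threshold else None

# Example: only one candidate is well supported, so the system buzzes in.
print(best_response([Candidate("Toronto", 0.2),
                     Candidate("Chicago", 1.6),
                     Candidate("Boston", 0.2)]))  # -> Chicago (confidence 0.8)
```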

Although it might sound like nothing more than a stunt, computer scientists say that Watson is an important advance in artificial intelligence, marking a shift that will create much better search engines and help scientists to keep up in their fields.

What Watson doesn’t do is attempt to mimic the human ability to use common sense, make leaps of logic or imagine the future, notes Patrick Winston of Massachusetts Institute of Technology in Cambridge. As a result, it has failed to capture his interest. “I’m planning to go to bed early. I’ll watch the re-runs,” he says.

Deep thought

The computer system is based on IBM’s DeepQA project, which aims to answer ‘natural-language’ questions in standard English, such as ‘Which nanotechnology companies are hiring on the West Coast?’ The trick is for it to both understand that type of query and provide a meaningful answer. “Good luck getting that from Google,” says Etzioni.

That goal is at the core of many computer-science endeavours, including the long-running artificial-intelligence project Cyc, started in 1984 and now run by Cycorp in Austin, Texas. One trouble with Cyc, says Etzioni, is that its database relies on human beings typing coded facts and knowledge into its system. The alternative is to train computers to learn by reading. There are several big projects in the works on this front — including the Never-Ending Language Learning system (NELL) at Carnegie Mellon University in Pittsburgh, Pennsylvania, and Etzioni’s KnowItAll system — most of which are part-funded by the US Defense Advanced Research Projects Agency.

What IBM has done that’s different, says Etzioni, is to focus on a very specific situation (the game of Jeopardy!), spend a lot of time on how to interpret cunning clues, create a database that is the equivalent of about a million books, and find some way to get the system’s performance to shoot up — it comes up with answers in seconds. IBM hasn’t released all the details of how Watson works, so how they have done this is not clear. But Etzioni guesses that the way it collates facts from its reading is similar to how his KnowItAll system approaches the problem. It’s impressive, notes Etzioni, how DeepQA has managed to basically achieve what Cyc set out to do decades ago, but in just a few years.

“Just like with Deep Blue, it’s really bringing together the state-of-the-art in hardware and software,” says Henry Kautz, a computer scientist at the University of Rochester, New York, and president of the Association for the Advancement of Artificial Intelligence in Menlo Park, California, who is also impressed by Watson.

Etzioni says he expects natural-language software to make a big dent in search applications over the next five years, although at the moment systems such as Watson aren’t ready for ‘prime time’: he notes that Microsoft bought a natural-language processing company called Powerset in 2008 for US$100 million, “but you don’t see Microsoft using it in any visible way”. Kautz agrees that systems as broad and powerful as Watson could be available for general use “surprisingly soon… Let’s say three to four years.”

Crying for help

Etzioni argues that a search engine that can deal with natural-language queries is necessary for scientists trying to keep up with the mass of knowledge now being generated in their field, so they can ask, say, “What are the top ten genes currently being studied in cancer research?”, rather than having to trawl through the literature to find out.

Others disagree. Canadian writer Malcolm Gladwell said in a recent discussion about the future of search technologies that current projects “are solving lots of problems that aren’t really problems… You cannot point to any area of intellectual activity or innovation or what have you that is today being compromised or hamstrung by some failure in their search technology. Can we honestly go to some scientist and say the reason we can’t cure cancer is you don’t have access to information about cancer research? No!”

Etzioni laughs at that. “To me, that’s as short-sighted as the famous statement that there’s only a world market for five computers,” he says — a statement that, ironically, is attributed to IBM founder Thomas Watson, after whom Watson is named.

“There’s massive production of knowledge, particularly in the biological community, and researchers can’t keep up with it,” says Etzioni. “Applying these tools specifically for medical researchers to keep track of what’s relevant in what they’re interested in is a huge area of my field. It’s true we don’t yet have a killer app, but you talk to anybody and they’re crying out for help.”

Updated:

Watson won against the human contestants. IBM plans to donate its US$1 million winnings to charity. The final scores were: Watson $77,147, Ken Jennings $24,000 and Brad Rutter $21,600.

Read more in Nature (doi:10.1038/news.2011.95) and in “Designing a computer that can process and understand natural language”.

Oct 30

Reported by Rowan Hooper, news editor at NewScientist, 12 October 2010.

A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time. No big deal, you might think. After all, computers have been beating humans at western chess for years, and when IBM’s Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity.

That hasn’t happened yet, but after all, western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.
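
Those counts are easy to sanity-check: a game tree with average branching factor b and typical game length d contains roughly b^d playable games, so the exponent is d·log10(b). The b and d values below are commonly quoted estimates, an assumption on my part rather than figures from the article.

```python
# Back-of-envelope check of the game-tree sizes quoted above.
from math import log10

def tree_size_exponent(branching: float, depth: int) -> float:
    """Return x such that branching**depth is roughly 10**x."""
    return depth * log10(branching)

print(f"chess: ~10^{tree_size_exponent(35, 80):.0f}")   # ~10^124, matching the ~10^123 quoted
print(f"shogi: ~10^{tree_size_exponent(80, 115):.0f}")  # ~10^219, same ballpark as the ~10^224 quoted
```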

The Mainichi Daily News reports that top women’s shogi player Ichiyo Shimizu took part in a match staged at the University of Tokyo, playing against a computer called Akara 2010. Akara is apparently a Buddhist term meaning 10^224, the newspaper reports, and the system beat Shimizu in six hours, over the course of 86 moves.

Japan’s national broadcaster, NHK, reported that Akara “aggressively pursued Shimizu from the beginning”. It’s the first time a computer has beaten a professional human player.

The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu’s defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.

Perhaps the association doesn’t mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player. Meanwhile, humans will have to face up to more flexible computers, capable of playing more than just one kind of game.

And IBM has now developed Watson, a computer designed to beat humans at the game show Jeopardy!. Watson, says IBM, is “designed to rival the human mind’s ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers”. IBM say they have improved artificial intelligence enough that Watson will be able to challenge Jeopardy! champions, and they’ll put their boast to the test soon, says The New York Times.

I’ll leave you with these wise and telling words from the defeated Shimizu: “It made no eccentric moves, and from partway through it felt like I was playing against a human,” Shimizu told the Mainichi Daily News. “I hope humans and computers will become stronger in the future through friendly competition.”

Read more in NewScientist blogs

Oct 23

The science magazine Discover has published a nice article on poker called “Big Game Theory”. While the piece, written by Jennifer Ouellette, is aimed at people who are not familiar with the game, it does contain some enlightening parts for experienced players too.

“What are so many physicists doing playing poker for hours on end?” Ouellette asks; and to start finding the answers, she doesn’t have to look further than her husband, Caltech cosmologist and “poker fiend” Sean Carroll. But perhaps the best poker-playing physicist right now is Michael Binger: WSOP Main Event 3rd place in 2006, $6.5 million in tournament cashes, and a physics Ph.D. from Stanford University.

Another one combining poker success and a career in physics is WSOP winner Marcel Vonk, who sums it up pretty nicely in two quotes, “Both physics and poker attract people who like to solve multifaceted problems,” and, “The skills required are similar: mathematical abilities, the ability to spot patterns and predict things from them, the patience to sit down for a long time until you finally achieve your goal, and the ability to say, ‘Oh well’, and start over when such an attempt fails miserably.”

We believe that many players, physicists or not, may find that quote very useful…

It’s also interesting to note that the pioneer of game theory, mathematician John von Neumann, was fascinated by poker, and in particular by the art of the bluff, saying, “Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do.”
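
Von Neumann in fact made the bluff precise: in his simplified poker model, the bettor bluffs just often enough that the opponent becomes indifferent between calling and folding. The indifference arithmetic is short; the sketch below is the textbook calculation, offered here as background rather than anything from the Discover piece.

```python
# How often should a bet be a bluff so that the opponent's call and fold
# have equal expected value? The textbook indifference calculation.

def indifferent_bluff_freq(pot: float, bet: float) -> float:
    """Bluffing frequency that makes the caller indifferent.

    The caller risks `bet` to win `pot + bet`. With bluff probability q,
    calling has expected value q*(pot + bet) - (1 - q)*bet, which is zero
    when q = bet / (pot + 2*bet).
    """
    return bet / (pot + 2 * bet)

# A pot-sized bet should be a bluff one time in three.
print(indifferent_bluff_freq(pot=100, bet=100))  # 0.333...
```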

Read the whole thing on Discover Magazine.

Oct 22

Reported by John Bohannon, in Science Now.

People playing a simple video game can match, and even surpass, the efforts of a powerful supercomputer to solve a fiendishly difficult biological problem, according to the results of an unusual face-off. The game isn’t Pac-Man or Doom, but one called FoldIt that pushes people to use their intuition to predict the three-dimensional (3D) structure of a protein.

When it comes to solving protein structures, scientists usually turn to x-ray crystallography, in which x-rays shining through a protein crystal reveal the location of atoms. But the technology is expensive and slow and doesn’t work for all proteins. What scientists would love is a method for accurately predicting the structure of any protein, while knowing nothing more than the sequence of its amino acids. That’s no small task, considering that even a moderately sized protein can theoretically fold into more possible shapes than there are particles in the universe.
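
That claim survives a back-of-envelope check. Under a classic Levinthal-style assumption (mine, not the article’s) that each backbone bond can take only about three coarse orientations, with two such bonds per residue, a 100-residue protein already outnumbers the roughly 10^80 particles in the observable universe:

```python
# Levinthal-style count for a modest 100-residue protein. The three-states-
# per-bond figure is an illustrative assumption, not a number from the article.
from math import log10

states_per_bond = 3
bonds = 2 * 100  # two rotatable backbone bonds per residue
exponent = bonds * log10(states_per_bond)  # 3**200 is about 10**95
print(f"~10^{exponent:.0f} candidate shapes vs ~10^80 particles in the observable universe")
```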

To get around that problem, computer programs focus on which shapes require the least amount of energy—and thus which ones the protein is most likely to adopt. But these programs must rely on intense computing to make any headway. One of the most powerful, Rosetta@home, was created by David Baker, a molecular biologist at the University of Washington (UW), Seattle. The program distributes its calculations to thousands of home computers around the world, automatically sending the results back to Baker’s lab. (It runs on the same “distributed computing” architecture as the SETI@home search for alien life.) The entire network is capable of nearly 100 trillion calculations per second, dwarfing most supercomputers.
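
At its core, “focus on the shapes that require the least energy” is stochastic minimization: propose a small change to the conformation, keep it if the energy drops, and occasionally accept an uphill move so the search does not get stuck in a local minimum. The toy Metropolis Monte Carlo sketch below illustrates that idea with a made-up energy function; Rosetta’s real energy terms and fragment-assembly moves are far more sophisticated.

```python
# Toy sketch of energy-minimizing conformational search. The "energy"
# here is an illustrative stand-in, not a physical force field.
import math
import random

def toy_energy(angles: list[float]) -> float:
    """Stand-in energy: lower when neighbouring torsion angles agree."""
    return sum((angles[i] - angles[i - 1]) ** 2 for i in range(1, len(angles)))

def monte_carlo_fold(n_angles: int = 20, steps: int = 10_000,
                     temperature: float = 1.0) -> list[float]:
    """Metropolis Monte Carlo: tweak one angle at a time, preferring lower energy."""
    angles = [random.uniform(-math.pi, math.pi) for _ in range(n_angles)]
    energy = toy_energy(angles)
    for _ in range(steps):
        i = random.randrange(n_angles)
        old = angles[i]
        angles[i] += random.gauss(0, 0.1)  # propose a small tweak
        new_energy = toy_energy(angles)
        delta = new_energy - energy
        # Always accept downhill moves; accept uphill ones with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            energy = new_energy
        else:
            angles[i] = old  # reject: restore the previous conformation
    return angles

folded = monte_carlo_fold()
print(f"final toy energy: {toy_energy(folded):.3f}")
```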

Two years ago, Baker wondered whether humans might help Rosetta@home do better. Although the program is impressively good at solving the first 95% of the folding of a protein, putting the correct finishing touches on a molecule often stumps it. People complained to Baker by e-mail that it was frustrating to watch the program flail around on their computer screens when the necessary final tweaks were sometimes obvious to the human eye.

So Baker teamed up with UW Seattle computer scientist Zoran Popović to turn protein folding into FoldIt, a relatively simple video game where people grab, poke, and stretch a 3D model of a protein, seeking to score more points by minimizing the protein’s total energy. To evaluate this “human computing” strategy, Baker recently challenged FoldIt players to perform the final folding of 10 proteins. The true structure of each had been solved with x-ray crystallography, but the results were not yet released.

It was a close battle: The predicted structures from FoldIt players were closer to reality than those from Rosetta@home for five of the 10 protein structures. They were also significantly more accurate, Baker’s team reports in tomorrow’s issue of Nature.

FoldIt has been a runaway hit, downloaded and played by over 100,000 people since it was released in May 2008. Baker and Popović are now studying the top players in the hopes of teaching computers the humans’ tricks.

When the game debuted, Arthur Olson, a molecular biologist at the Scripps Research Institute in San Diego, California, told Science that he doubted that nonscientist players could get very far. “I’m thrilled to be wrong,” he now says. “What I didn’t know is that this game would actually create experts.” Olson expects that computers will eventually learn enough from the human players to beat them, as IBM’s supercomputer Deep Blue did with chess. But because human spatial reasoning is still so much better, he says, “computers still have a long way to go.”
