
AlphaGo’s unusual moves prove its AI prowess, experts say

John Ribeiro | March 15, 2016
AlphaGo is seen as a greater AI achievement than Deep Blue.

Playing against a top Go player, Google DeepMind’s AlphaGo artificial-intelligence program has puzzled commentators with moves that are often described as “beautiful,” but do not fit into the usual human style of play.

Artificial-intelligence experts think these moves reflect a key AI strength of AlphaGo, its ability to learn from its experience. Such moves cannot be produced by just incorporating human knowledge, said Doina Precup, associate professor in the School of Computer Science at McGill University in Quebec, in an email interview.

“AlphaGo represents not only a machine that thinks, but one that can learn and strategize,” agreed Howard Yu, professor of strategic management and innovation at IMD business school.

AlphaGo won three consecutive games against Lee Se-dol last week in Seoul, securing the tournament and the US$1 million in prize money that Google plans to give to charities. The program lost the fourth game on Sunday, however, after it made a mistake, and Lee has warned that the program has some weaknesses.

The program started as a research project about two years ago to test whether a neural network using deep learning can understand and play Go, said David Silver, one of the key researchers on the AlphaGo project. Google acquired British AI company DeepMind in 2014.

As a guide, the AI program draws probable human moves from its “policy network,” a model of expert human play in different situations, but it may choose a different move when its “value” neural network evaluates candidate positions at greater depth.
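The interplay between the two networks can be sketched roughly as follows. This is a toy illustration of the idea, not DeepMind's actual system: the “networks” are placeholder functions, the board is a tiny 3-by-3 grid, and the scoring rule is invented for the example.

```python
# Toy sketch of combining a policy prior with a value evaluation,
# in the spirit of AlphaGo's two networks. All functions are stand-ins.

SIZE = 3  # tiny board for illustration; real Go is 19x19

def legal_moves(board):
    """All empty points on the toy board."""
    return [(r, c) for r in range(SIZE) for c in range(SIZE)
            if (r, c) not in board]

def play(board, move):
    """Return the position after placing a stone (toy model: just a set)."""
    return board | {move}

def policy_prior(board):
    """Stand-in policy network: a probability for each legal move."""
    moves = legal_moves(board)
    return {m: 1.0 / len(moves) for m in moves}  # uniform placeholder

def value_estimate(board):
    """Stand-in value network: an estimated probability of winning."""
    return len(board) / (SIZE * SIZE)  # arbitrary toy heuristic

def choose_move(board):
    # Score each candidate by its policy prior plus the value of the
    # resulting position, and pick the highest-scoring move.
    priors = policy_prior(board)
    return max(priors, key=lambda m: priors[m] + value_estimate(play(board, m)))

print(choose_move(frozenset()))  # → (0, 0) on an empty toy board
```

In the real system the policy network narrows the search to promising moves while the value network judges positions without playing them out to the end; the toy version above only mimics that division of labor.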

Unlike humans, the AlphaGo program aims to maximize the probability of winning rather than optimizing margins, which helps explain some of its moves, said DeepMind CEO Demis Hassabis.
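The difference between maximizing win probability and maximizing margin can be made concrete with a small example. The numbers below are invented for illustration, not drawn from any real game:

```python
# Two hypothetical candidate moves with made-up statistics: a player
# optimizing the winning margin and one optimizing the probability of
# winning can disagree on which move is best.

candidates = {
    "safe move":       {"win_prob": 0.90, "expected_margin": 1.5},
    "aggressive move": {"win_prob": 0.70, "expected_margin": 12.0},
}

by_margin   = max(candidates, key=lambda m: candidates[m]["expected_margin"])
by_win_prob = max(candidates, key=lambda m: candidates[m]["win_prob"])

print(by_margin)    # aggressive move
print(by_win_prob)  # safe move
```

A human might favor the aggressive move for its larger expected margin; a program built to maximize winning probability would prefer the safe move, even if the resulting victory looks narrow.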

Go players take turns placing black or white pieces, called “stones,” on a 19-by-19 grid, aiming to capture the opponent’s stones by surrounding them and to enclose more empty space as territory.
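The capture rule described above can be expressed in a few lines: a connected group of stones is captured when it has no “liberties,” meaning no adjacent empty points. The sketch below is an illustrative implementation, not code from any real Go engine:

```python
# Flood-fill check for the capture rule: find the group containing a stone
# and count its liberties (adjacent empty points). Illustrative only.

SIZE = 19

def neighbors(p):
    """On-board points orthogonally adjacent to p."""
    r, c = p
    return [(r + dr, c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

def group_and_liberties(stones, opponent, start):
    """Flood-fill the group containing `start`; return (group, liberties)."""
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        p = frontier.pop()
        if p in group:
            continue
        group.add(p)
        for n in neighbors(p):
            if n in stones:
                frontier.append(n)        # same-colored stone: part of the group
            elif n not in opponent:
                liberties.add(n)          # empty point: a liberty
    return group, liberties

# A white stone in the corner, surrounded by black, has no liberties:
black = {(0, 1), (1, 0)}
white = {(0, 0)}
_, libs = group_and_liberties(white, black, (0, 0))
print(len(libs))  # 0 -> the white stone is captured
```

A corner stone needs only two opposing stones to be captured, which is why Go's corners and edges play so differently from the center of the board.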

It was expected that it would take many more years for AI systems to beat top Go players. The game is seen as more complex than other popular strategy games such as chess, with a far higher “branching factor,” or average number of possible moves per turn, Precup said.
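Some rough arithmetic shows why that branching factor matters. Using commonly cited approximate figures of about 35 legal moves per turn in chess and about 250 in Go (approximations, not exact values), the number of possible move sequences diverges quickly with search depth:

```python
# Back-of-the-envelope comparison of search-tree sizes in chess and Go,
# using commonly cited approximate branching factors.

chess_branching, go_branching = 35, 250
depth = 10  # look ahead ten plies (half-moves)

chess_sequences = chess_branching ** depth
go_sequences = go_branching ** depth

print(f"chess: ~{chess_sequences:.1e} sequences")
print(f"go:    ~{go_sequences:.1e} sequences")
print(f"ratio: ~{go_sequences / chess_sequences:.0e}x")
```

At just ten plies, Go's tree is hundreds of millions of times larger, which is why brute-force search of the kind that worked for chess was never viable for Go.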

“The field of AI is typically benchmarked using complex games and problems, in this case mastering the game of Go,” said Babak Hodjat, cofounder and chief scientist at AI company Sentient Technologies. The AlphaGo win marks “a significant high point” in the complexity of problems that can now be tackled using machine learning, Hodjat said via email.

Go involves high-level strategic choices, such as “which battle do I want to play” or “which area of the board to control,” and several battles may run in parallel, according to Precup. “This kind of reasoning is thought to be a hallmark of human thinking,” she wrote. Earlier Go programs were too weak to compete with strong human players, she added.
