How Google’s AI Viewed the Move No Human Could Understand
SEOUL, SOUTH KOREA — The move didn’t make sense to all the humans packed into the sixth floor of Seoul’s Four Seasons hotel. But the Google machine saw it quite differently. The machine knew the move wouldn’t make sense to all those humans. Yes, it knew. And yet it played the move anyway, because this machine has seen so many moves that no human ever has.
In the second game of this week’s historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by a small team of Google researchers, this surprisingly skillful machine made a move that flummoxed everyone from the throngs of reporters and photographers to the match commentators to, yes, Lee Sedol himself. “That’s a very strange move,” said one commentator, an enormously talented Go player in his own right. “I thought it was a mistake,” said the other. And Lee Sedol, after leaving the match room for a spell, needed nearly fifteen minutes to settle on a response.
Fan Hui, the three-time European Go champion who lost five straight games to AlphaGo this past October, was also completely gobsmacked. “It’s not a human move. I’ve never seen a human play this move,” he said. But he also called the move “So beautiful. So beautiful.” Indeed, it changed the path of play, and AlphaGo went on to win the second game. Then it won the third, claiming victory in the best-of-five match after a three-game sweep, before Lee Sedol clawed back a dramatic win in Game Four to save a rather large measure of human pride.
It was a move that demonstrated the mysterious power of modern artificial intelligence, which is not only driving one machine’s ability to play this ancient game at an unprecedented level, but simultaneously reinventing all of Google—not to mention Facebook and Microsoft and Twitter and Tesla and SpaceX. In the wake of Game Two, Fan Hui so eloquently described the importance and the beauty of this move. Now an advisor to the team that built AlphaGo, he spent the last five months playing game after game against the machine, and he has come to recognize its power. But there’s another player who has an even greater understanding of this move: AlphaGo.
I was unable to ask AlphaGo about the move. But I did the next best thing: I asked David Silver, the guy who led the creation of AlphaGo.
‘It’s Hard to Know What to Believe’
Silver is a researcher at a London AI lab called DeepMind, which Google acquired in early 2014. He and the rest of the team that built AlphaGo arrived in Korea well before the match, setting up the machine, and its all-important Internet connection, inside the Four Seasons, and in the days since, they’ve worked to ensure the system is in good working order before each game, while juggling interviews and photo ops with the throng of international media types.
But they’re mostly here to watch the match—much like everyone else. One DeepMind researcher, Aja Huang, is actually in the match room during games, physically playing the moves that AlphaGo decrees. But the other researchers, including Silver, are little more than spectators. During a game, AlphaGo runs on its own.
That’s not to say that Silver can relax during the games. “I can’t tell you how tense it is,” Silver tells me just before Game Three. During games, he sits inside the AlphaGo “control room,” watching various computer screens that monitor the health of the machine’s underlying infrastructure, display its running prediction of the game’s outcome, and provide live feeds from various match commentaries playing out in rooms down the hall. “It’s hard to know what to believe,” he says. “You’re listening to the commentators on the one hand. And you’re looking at AlphaGo’s evaluation on the other hand. And all the commentators are disagreeing.”
During Game Two, when Move 37 arrived, Silver had no more insight into this moment than anyone else at the Four Seasons—or any of the millions watching the match from across the Internet. But after the game and all the effusive praise for the move, he returned to the control room and did a little digging.
Playing Against Itself
To understand what he found, you must first understand how AlphaGo works. Initially, Silver and team taught the system to play the game using what’s called a deep neural network—a network of hardware and software that mimics the web of neurons in the human brain. This is the same basic technology that identifies faces in photos uploaded to Facebook or recognizes commands spoken into Android phones. If you feed enough photos of a lion into a neural network, it can learn to recognize a lion. And if you feed it millions of Go moves from expert players, it can learn to play Go—a game that’s exponentially more complex than chess. But then Silver and team went a step further.
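To see the shape of that first training step, here is a deliberately tiny sketch, in Python, of learning a policy from expert moves. It is nothing like DeepMind’s actual deep network; it just counts how often experts answered a given board pattern with a given move, which captures the basic idea of predicting the human play. The class name, patterns, and moves are all invented for illustration.

```python
from collections import Counter, defaultdict

class ToyMovePredictor:
    """Toy stand-in for a policy network trained on expert games:
    for each board pattern seen, learn which moves experts played."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, games):
        # games: iterable of (board_pattern, expert_move) pairs
        for pattern, move in games:
            self.counts[pattern][move] += 1

    def predict(self, pattern):
        # probability distribution over the moves experts played here
        c = self.counts[pattern]
        total = sum(c.values())
        return {move: n / total for move, n in c.items()}

# Three hypothetical expert games reaching the same (made-up) pattern:
expert_data = [("corner-open", "D4"), ("corner-open", "D4"), ("corner-open", "Q16")]
model = ToyMovePredictor()
model.train(expert_data)
print(model.predict("corner-open"))  # D4 is twice as likely as Q16
```

A real system replaces the lookup table with a neural network so it can generalize to board positions it has never seen, but the training signal is the same: imitate the humans.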
Using a second technology called reinforcement learning, they set up matches in which slightly different versions of AlphaGo played each other. As they played, the system would track which moves brought the most reward—the most territory on the board. “AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving,” Silver said when DeepMind first revealed the approach earlier this year.
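The self-play loop can be caricatured in a few lines. In this sketch, assumed for illustration only, two slightly different versions of the same policy (they differ only in how much they explore) play a one-move “game,” and each move’s learned value drifts toward the territory it actually earned. The move names and territory payoffs are invented; the point is only that the reward signal, not human example, drives the learning.

```python
import random

random.seed(0)

# Hidden "territory" payoff for each toy move; the learner never sees this table.
TERRITORY = {"A": 1, "B": 3, "C": 2}
MOVES = list(TERRITORY)

def pick(values, eps):
    # epsilon-greedy: usually exploit the best-valued move, sometimes explore
    if random.random() < eps:
        return random.choice(MOVES)
    return max(MOVES, key=values.get)

values = {m: 0.0 for m in MOVES}  # learned estimate of each move's reward
for _ in range(2000):
    # two slightly different versions of the policy play each other
    move_a = pick(values, eps=0.1)
    move_b = pick(values, eps=0.3)
    for move in (move_a, move_b):
        reward = TERRITORY[move]                # territory the move captured
        values[move] += 0.05 * (reward - values[move])  # running average

print(max(values, key=values.get))  # the learned values favor the best move
```

After enough games the estimates settle near the true payoffs, and the policy “discovers” the strongest move without ever being shown it.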
And then the team went a step further than that. They fed moves from these AlphaGo-versus-AlphaGo matches into another neural network, refining the system’s play still more. In essence, this second network trained the system to look ahead to the potential results of each move. Combining that training with a “tree search” that examines potential outcomes in a more traditional and systematic way, the system estimates the probability that a given move will result in a win.
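The interplay between a learned value estimate and a tree search can be shown with one ply of lookahead. In this invented example, each candidate move leads to a handful of opponent replies, and each reply carries a win probability standing in for what a value network would report. The search assumes the opponent answers with the reply that hurts us most, which is the classic minimax idea at the heart of traditional game-tree search.

```python
# Toy game tree: for each of our candidate moves, the opponent's replies
# and the resulting win probability (a stand-in for a value network's estimate).
TREE = {
    "X": {"a": 0.4, "b": 0.9},   # "X" looks great unless the opponent answers "a"
    "Y": {"c": 0.6, "d": 0.7},
}

def evaluate(move):
    # One ply of lookahead: assume the opponent picks the reply that
    # minimizes our win probability, as a tree search would.
    return min(TREE[move].values())

best = max(TREE, key=evaluate)
print(best, evaluate(best))  # "Y" survives the opponent's best reply: 0.6 vs 0.4
```

AlphaGo’s search is vastly deeper and guided by its networks at every node, but the division of labor is the same: the network supplies win estimates, the search checks them against the opponent’s best resistance.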
So, in the end, the system learned not just from human moves but from moves generated by multiple versions of itself. The result is that the machine is capable of something like Move 37.
A One in Ten Thousand Probability
Following the game, in the control room, Silver could revisit the precise calculations AlphaGo made in choosing Move 37. Drawing on its extensive training with millions upon millions of human moves, the machine actually calculates the probability that a human will make a particular play in the midst of a game. “That’s how it guides the moves it considers,” Silver says. For Move 37, the probability was one in ten thousand. In other words, AlphaGo knew this was not a move that a professional Go player would make.
But, drawing on all its other training with millions of moves generated by games against itself, it came to view Move 37 in a different way. It came to realize that, although no professional would play it, the move would likely prove quite successful. “It discovered this for itself,” Silver says, “through its own process of introspection and analysis.”
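Silver’s two numbers, the one-in-ten-thousand human prior and the favorable win estimate, can be put side by side in a small sketch. Everything here except the 0.0001 prior from the article is invented for illustration: the idea is that the human-move probability decides how much search effort a candidate receives, while the win estimate from self-play decides which move is actually chosen.

```python
# Two hypothetical candidates, each with two learned signals:
#  - prior: how likely a human professional is to play it (policy network)
#  - value: estimated win probability, learned from self-play (illustrative numbers)
candidates = {
    "ordinary move": {"prior": 0.35,   "value": 0.48},
    "move 37":       {"prior": 0.0001, "value": 0.52},  # one-in-ten-thousand prior
}

# The prior guides search: allocate simulations in proportion to it...
budget = 10_000
effort = {m: max(1, int(budget * c["prior"])) for m, c in candidates.items()}

# ...but the final choice rests on the win estimate the search produces.
chosen = max(candidates, key=lambda m: candidates[m]["value"])
print(effort, chosen)  # "move 37" gets almost no search effort, yet wins the choice
```

That tension is the whole story of Move 37: the human prior said almost never, and the machine’s own evaluation overruled it.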
Is introspection the right word? You can be the judge. But Fan Hui was right. The move was inhuman. But it was also beautiful.