SEOUL, SOUTH KOREA — In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.

But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine—no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations.

This week saw the end of the historic match between Lee Sedol, one of the world’s best Go players, and AlphaGo, an artificially intelligent system designed by a team of researchers at DeepMind, a London AI lab now owned by Google. The machine claimed victory in the best-of-five series, winning four games and losing only one. It marked the first time a machine had beaten the very best at this ancient and enormously complex game—a feat that, until recently, experts didn’t expect would happen for another ten years.

The victory is notable because the technologies at the heart of AlphaGo are the future. They’re already changing Google and Facebook and Microsoft and Twitter, and they’re poised to reinvent everything from robotics to scientific research. This is scary for some. The worry is that artificially intelligent machines will take our jobs and maybe even break free from our control—and on some level, those worries are healthy. We won’t be caught by surprise.

But there’s another way to think about all this—a way that gets us beyond the trope of human versus machine, guided by the lessons of those two glorious moves.

Move 37

With the 37th move in the match’s second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world’s best Go players, including Lee Sedol. “That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other. Lee Sedol, after leaving the match room, took nearly fifteen minutes to formulate a response. Fan Hui—the three-time European Go champion who played AlphaGo during a closed-door match in October, losing five games to none—reacted with incredulity. But then, drawing on his experience with AlphaGo—he has played the machine time and again in the five months since October—Fan Hui saw the beauty in this rather unusual move.

Indeed, the move turned the course of the game. AlphaGo went on to win Game Two, and at the post-game press conference, Lee Sedol was in shock. “Yesterday, I was surprised,” he said through an interpreter, referring to his loss in Game One. “But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading.”

It was a heartbreaking moment. But at the same time, those of us who watched the match inside Seoul’s Four Seasons hotel could feel the beauty of that one move, especially after talking to the infectiously philosophical Fan Hui. “So beautiful,” he kept saying. “So beautiful.” Then, the following morning, David Silver, the lead researcher on the AlphaGo project, told me how the machine had viewed the move. And that was beautiful too.

One in Ten Thousand

Originally, Silver and his team taught AlphaGo to play the ancient game using a deep neural network—a network of hardware and software that mimics the web of neurons in the human brain. This technology already underpins online services inside places like Google and Facebook and Twitter, helping to identify faces in photos, recognize commands spoken into smartphones, drive search engines, and more. If you feed enough photos of a lobster into a neural network, it can learn to recognize a lobster. If you feed it enough human dialogue, it can learn to carry on a halfway decent conversation. And if you feed it 30 million moves from expert players, it can learn to play Go.
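The policy-learning idea in that paragraph can be sketched in miniature. The toy below stands in for the trained network: it maps a board position to a probability distribution over all 361 points, which is the kind of output the real system produced after training on those 30 million expert moves. The single random linear layer is an invented stand-in, nothing like DeepMind's deep convolutional architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy network": one linear layer mapping a flattened 19x19 board
# (1 = black, -1 = white, 0 = empty) to a probability over the 361 points.
# AlphaGo's actual network was deep and convolutional, trained on roughly
# 30 million expert moves; this random layer is only an illustrative stand-in.
W = rng.normal(scale=0.01, size=(361, 361))

def policy(board):
    """Return a probability distribution over the 361 board points."""
    logits = W @ board.ravel()
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

board = np.zeros((19, 19))
board[3, 3] = 1                            # a single black stone
probs = policy(board)
print(probs.shape)
```

Training would then nudge `W` so that, for each recorded position, the distribution puts more mass on the move the expert actually played.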

But then the team went further. Using a second AI technology called reinforcement learning, they set up countless matches in which (slightly) different versions of AlphaGo played each other. And as AlphaGo played itself, the system tracked which moves brought the most territory on the board. “AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving,” Silver said when Google unveiled AlphaGo early this year.
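That self-play loop can be shrunk to a caricature: a policy plays a one-move game against a copy of itself, and whichever move won is reinforced. The three "moves" and the rule that the higher-numbered move wins are invented for illustration; AlphaGo did this over full games of Go with deep networks rather than three logits.

```python
import math
import random

random.seed(0)

# Toy self-play reinforcement: one policy over three moves, trained by
# playing against itself. The winning rule (higher move wins) is invented.
logits = [0.0, 0.0, 0.0]

def sample():
    """Draw a move from the softmax over the current logits."""
    exps = [math.exp(l) for l in logits]
    r, acc = random.random() * sum(exps), 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(exps) - 1

for _ in range(2000):
    a, b = sample(), sample()          # both sides drawn from the same policy
    if a == b:
        continue                       # a tie teaches nothing here
    winner, loser = max(a, b), min(a, b)
    logits[winner] += 0.1              # reinforce the winning move
    logits[loser] -= 0.1               # discourage the losing one

print(logits.index(max(logits)))
```

After enough games the policy's mass drifts toward the strongest move, discovered purely from outcomes rather than from any human example.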

Then the team took yet another step. They collected moves from these machine-versus-machine matches and fed them into a second neural network. This neural net trained the system to examine the potential results of each move, to look ahead into the future of the game.
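That second network's job, judging how a position will turn out, can also be sketched in miniature. The single linear layer and sigmoid below are an invented stand-in for the deep value network; the real one was trained on positions generated by those machine-versus-machine games.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "value network": maps a 19x19 board position to an estimated
# probability of winning from that position. A random linear layer plus
# a sigmoid stands in for the deep network AlphaGo actually trained.
w = rng.normal(scale=0.01, size=361)

def value(board):
    """Estimated win probability for the position, between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-(w @ board.ravel())))

board = np.zeros((19, 19))
board[9, 9] = 1                        # a single stone on the center point
v = value(board)
print(0.0 < v < 1.0)
```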

So AlphaGo learns from human moves, and then it learns from moves made when it plays itself. It understands how humans play, but it can also look beyond how humans play to an entirely different level of the game. This is what happened with Move 37. As Silver told me, AlphaGo had calculated that there was a one-in-ten-thousand chance that a human would make that move. But when it drew on all the knowledge it had accumulated by playing itself so many times—and looked ahead in the future of the game—it decided to make the move anyway. And the move was genius.
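Silver's description suggests how a move with a one-in-ten-thousand human prior can still win out once lookahead is weighed in. The move names and numbers below are invented for illustration; the blend only loosely echoes how AlphaGo combined its policy prior with value and search estimates.

```python
# Hypothetical candidate moves: "prior" is the estimated probability a
# human would play the move, "value" the win probability after lookahead.
candidates = {
    "common joseki": {"prior": 0.35, "value": 0.48},
    "solid defense": {"prior": 0.20, "value": 0.47},
    "move 37":       {"prior": 0.0001, "value": 0.62},  # one in ten thousand
}

def score(move, weight=0.1):
    m = candidates[move]
    # Lookahead value dominates; the human prior only nudges the choice,
    # so a move almost no human would play can still come out on top.
    return m["value"] + weight * m["prior"]

best = max(candidates, key=score)
print(best)
```

Under these invented numbers, the vanishing prior is simply outweighed by what the machine sees further down the game tree.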


Move 78

Lee Sedol then lost Game Three, and AlphaGo claimed the million-dollar prize in the best-of-five series. The mood inside the Four Seasons dipped yet again. “I don’t know what to say today, but I think I will have to express my apologies first,” Lee Sedol said. “I should have shown a better result, a better outcome, a better contest in terms of the games played.”

In Game Four, he was intent on regaining some pride for himself and the tens of millions who watched the match across the globe. But midway through the game, the Korean’s prospects didn’t look good. “Lee Sedol needs to do something special,” said one commentator. “Otherwise, it’s just not going to be enough.” But after considering his next move for a good 30 minutes, he delivered something special. It was Move 78, a “wedge” play in the middle of the board, and it immediately turned the game around.

As we found out after the game, AlphaGo made a disastrous play with its very next move, and just minutes later, after analyzing the board position, the machine determined that its chances of winning had suddenly fallen off a cliff. Commentator and nine dan Go player Michael Redmond called Lee Sedol’s move brilliant: “It took me by surprise. I’m sure that it would take most opponents by surprise. I think it took AlphaGo by surprise.”

Among Go players, the move was dubbed “God’s Touch.” It was high praise indeed. But then the higher praise came from AlphaGo.

Korean news anchor reporting from the match. Geordie Wood for WIRED

One in Ten Thousand—Again

The next morning, as we walked down Sejong Daero, the main boulevard just down the street from the Four Seasons, I discussed the move with Demis Hassabis, who oversees the DeepMind lab and was very much the face of AlphaGo during the seven-day match. As we walked, passers-by treated him like a celebrity, and indeed he was, after appearing in countless newspapers and on so many TV news shows. Here in Korea, where more than 8 million people play the game of Go, Lee Sedol is a national figure.

Hassabis told me that AlphaGo was unprepared for Lee Sedol’s Move 78 because it didn’t think that a human would ever play it. Drawing on its months and months of training, it decided there was a one-in-ten-thousand chance of that happening. In other words: exactly the same tiny chance that a human would have played AlphaGo’s Move 37 in Game Two.

The symmetry of these two moves is more beautiful than anything else. One-in-ten-thousand and one-in-ten-thousand. This is what we should all take away from these astounding seven days. Hassabis and Silver and their fellow researchers have built a machine capable of something super-human. But at the same time, it’s flawed. It can’t do everything we humans can do. In fact, it can’t even come close. It can’t carry on a conversation. It can’t play charades. It can’t pass an eighth-grade science test. It can’t account for God’s Touch.

But think about what happens when you put these two things together. Human and machine. Fan Hui will tell you that after five months of playing match after match with AlphaGo, he sees the game completely differently. His world ranking has skyrocketed. And apparently, Lee Sedol feels the same way. Hassabis says that he and the Korean met after Game Four, and that Lee Sedol echoed the words of Fan Hui. Just these few matches with AlphaGo, the Korean told Hassabis, have opened his eyes.

This isn’t human versus machine. It’s human and machine. Move 37 was beyond what any of us could fathom. But then came Move 78. And we have to ask: If Lee Sedol hadn’t played those first three games against AlphaGo, would he have found God’s Touch? The machine that defeated him had also helped him find the way.

See original article:

In Two Moves, AlphaGo and Lee Sedol Redefined the Future