One of the world's best human Go players just got pummeled by a computer program. Google DeepMind's AlphaGo program beat champion Go player Lee Sedol in their fifth and final match in Seoul Monday afternoon, bringing their tournament to a close, 4-1.


DeepMind founder Demis Hassabis described the final five-hour match as "the most exciting and stressful one for [the DeepMind team]" at a post-game press conference. At the same conference, Sedol, who only managed to eke out one victory against the AI, said he "felt sorry because the challenge match came to an end."

The human-AI Go face-off started March 9 and has been closely watched as an indicator of how well a computer program can navigate a highly strategic, territory-based game. The players were competing for a $1 million prize; technically, the DeepMind team and AlphaGo clinched the prize after the third match, and they will donate it to charity. AlphaGo beat Sedol in the first three games and resigned the fourth.


AlphaGo's win is a huge achievement because Go is so complex, with exponentially more possible games than even chess, and is often considered an "intuitive" game, making it a particularly tempting test for AI researchers.

Last year AlphaGo beat European Go champion Fan Hui 5-0, a result reported in Nature in late January. But Sedol has won 18 international titles, and beating him represents a greater achievement, something akin to IBM's Deep Blue defeating chess Grandmaster Garry Kasparov in 1997.

Google says the first game was watched by 60 million people in China, where Go originated millennia ago and remains very popular. The game, which is deceptively simple, is played by placing stones on a 19x19 board (sometimes smaller) in order to capture as much territory as possible.


What's particularly interesting, as I wrote after AlphaGo's first two wins, is that AlphaGo was not simply playing "like a machine," making the most efficient moves through brute-force computing, nor did it play quite like a human. Instead, its style of play looked like thinking without human hangups.

In fact, AlphaGo doesn't even rely on brute force, because the sheer number of possible moves and games it would have to calculate is far too large. Instead, it uses neural networks to pare its options down to a handful of promising moves, then judges among those what to do.
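That pare-down-then-judge idea can be sketched in a few lines. This is a toy illustration only, not DeepMind's code: the "policy" and "value" functions below are random stand-ins for the trained neural networks AlphaGo actually uses, and all the hard Go rules (ko, suicide, capture) are omitted.

```python
import random

BOARD_SIZE = 19

def legal_moves(board):
    """All empty points on the board; real Go legality checks are omitted."""
    return [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)
            if board.get((r, c)) is None]

def policy_priors(board, moves):
    """Stand-in for a policy network: assigns each move a prior probability.
    Random here; AlphaGo learned these priors from expert games and self-play."""
    weights = [random.random() for _ in moves]
    total = sum(weights)
    return {m: w / total for m, w in zip(moves, weights)}

def value_estimate(board, move):
    """Stand-in for a value network: guesses the win probability after `move`."""
    return random.random()

def choose_move(board, top_k=10):
    """Pare hundreds of legal moves down to the policy's top-k candidates,
    then judge among those few with the value estimate."""
    moves = legal_moves(board)
    priors = policy_priors(board, moves)
    candidates = sorted(moves, key=lambda m: priors[m], reverse=True)[:top_k]
    return max(candidates, key=lambda m: value_estimate(board, m))

move = choose_move({})  # pick a move on an empty board
```

The point of the structure: instead of evaluating all 361 opening moves (and the astronomical game tree beneath them), the program only ever looks closely at the handful the policy considers plausible.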


Nonetheless, there's room for improvement. The program made mistakes early in some of its games with Sedol. "Yes, compared to human beings, its moves are different and at times superior," said Sedol after his win. "But I do think there are weaknesses for AlphaGo."

However, if the goal is a thinking, human-like intelligence, perhaps that's a feature and not a bug.

Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at