Walloped by AlphaGo

Short & Sweet AI - A podcast by Dr. Peper

In 1997, a chess-playing computer built by IBM called Deep Blue beat the world chess champion Garry Kasparov. You may wonder why AI researchers are so interested in building computer systems that beat humans at games. It’s because games are a testbed for AI: a way to measure a computer’s abilities and drive the kind of research that could lead to the next big breakthrough in artificial intelligence.

The next challenge was Go, an ancient Chinese board game taught in Chinese schools alongside math and considered to be among the most popular games in the world. Go is relatively unknown in the West, but it’s considered to be perhaps the most complex game ever devised by humans. In chess, a player has about 35 possible moves to choose from on a given turn; in Go, it’s around 200. Chess can be thought of as a metaphor for a battle. Go is more like a geopolitical war, where a move in one corner of the board can ripple everywhere else. As a result, Go players can’t look ahead to the ultimate outcome of each contemplated move the way chess players can. The top players rely on intuition and follow a kind of aesthetic, which has made Go a fascinating game for thousands of years.

Experts at DeepMind got to work creating a computer system known as AlphaGo. David Silver, the lead researcher, began with reinforcement learning algorithms but realized something was missing, so he combined reinforcement learning with deep learning, which builds deep, layered representations of knowledge in neural networks. This combination produced AlphaGo’s major breakthroughs.

In 2016, in a televised event watched by 100 million people, the world’s best Go player, Lee Sedol of South Korea, played AlphaGo in a five-game match. Lee Sedol had been Go world champion 18 times, and Demis Hassabis, the co-founder of DeepMind, explained that the match pushed AlphaGo to its limits.
At one moment in the second game, the audience was transfixed and horrified when AlphaGo made a surprising, unexpected move, now legendary and referred to as Move 37. It was a move that went against all conventional wisdom about how the game is played. AlphaGo had created a new pattern of play and come up with a long shot, a move that showed an insight beyond what even the best players could see. Go players later described the move as showing intuition and as something totally original; it was called a move of beauty.

Lee Sedol rallied. In game 4, he placed his 78th stone on the board between two of AlphaGo’s stones. Known as a wedge move, it was brilliant, and it took AlphaGo by surprise; everything the machine had done up to that point was rendered useless. AlphaGo ultimately lost the game. Like a human, the machine had blind spots. That move was dubbed God’s Touch, and although Lee won that game, in the end AlphaGo prevailed, winning four games to one.

It was a revolutionary accomplishment for a computer system to beat the world’s best Go player, and it came a decade earlier than expected. The world was stunned. First there was sadness that a computer could beat a Go hero. But then came another emotion: excitement that human players could now see more possibilities in the game. Lee Sedol said playing against AlphaGo brought him renewed joy in playing and improved his skills and abilities in a way that playing against other human players had not. He went on to win over 100 games in a row against human players.

In 2017, AlphaGo beat the world’s number one Go player, Ke Jie of China, and after that DeepMind retired AlphaGo while continuing research in other areas. Interestingly, AlphaGo’s 2016 win against Lee Sedol was a turning point in China. The Chinese government experienced a “sputnik moment,” which convinced it that it needed to prioritize and dramatically increase funding for artificial intelligence.
The race between the US and China for AI superiority was on. From Short & Sweet AI, I’m Dr. Peper.