A month ago, we witnessed an incredible achievement in the field of Artificial Intelligence. Google DeepMind’s computer program AlphaGo defeated Lee Sedol, the top Go player of the past decade, in a $1M five-game challenge match in Seoul.
A win 70 years in the making
It all started in 1950, when Claude Shannon published his paper on programming a computer to play chess, opening the conversation about computers playing thinking games. Early programs played at only a weak amateur level, but they kept improving as time went on, to the point where computers began to be seen as a threat to the best human players.
In 1997, in IBM Deep Blue’s second match against chess Grandmaster Garry Kasparov, the world champion was beaten. With that, the pinnacle of Western thinking games had been conquered by computers. Then came the big wait: for the rest of the century, no program came close to beating top humans at the ancient Eastern game of Go.
It was only around 2005, when Monte Carlo Tree Search (MCTS) was added to Go programs, that their play improved from weak to strong amateur level. Over the last 10 years most improvements were incremental, and many people thought it would take another 10 years before computers posed a threat to the world champion of Go. A sizable gap remained between the best human Go players and the best Go programs.
In January this year, the international science journal Nature published ‘Google AI algorithm masters ancient game of Go.’ The article suggested that Google’s AlphaGo could match the level of play of a world champion. Amazing!
A deep dive into algorithms
In July last year, I visited the Computer Olympiad and three-day World Computer Chess Championship in Leiden, and spoke with many of the programmers assembled.
It turns out that everyone was using Monte Carlo Tree Search (MCTS): it searches deep enough to make sense of a position while selecting among the many candidate moves in each position. Most research papers on Go programs are about improving MCTS, which has yielded steady incremental gains and made it the dominant approach at recent computer-Go conferences.
AlphaGo, on the other hand, does something more than MCTS alone: it uses neural networks to learn to recognize quality positions. The networks were first trained on many, many positions from publicly available games, then refined through self-play (games played against itself). For every move, the resulting “policy network” suggests which moves look good, and MCTS then searches deep enough to validate (or overrule) those suggestions.
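To make the idea concrete, here is a minimal sketch of MCTS guided by a policy prior, in the spirit of the approach described above. This is not AlphaGo’s actual algorithm: the game is a trivial toy (pick three bits, win only on one exact sequence), the “policy network” is a uniform stand-in, and the evaluation is a plain random playout rather than a value network. All names and parameters here are illustrative assumptions.

```python
import math
import random

# Toy single-player game: choose 3 bits; reward 1.0 only for the target sequence.
TARGET = (1, 0, 1)
DEPTH = 3

def legal_moves(state):
    return [0, 1] if len(state) < DEPTH else []

def is_terminal(state):
    return len(state) == DEPTH

def reward(state):
    return 1.0 if state == TARGET else 0.0

def policy_prior(state):
    # Stand-in for a trained policy network: just a uniform prior over moves.
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}

class Node:
    def __init__(self, state, prior):
        self.state = state
        self.prior = prior       # P(s, a) from the "policy network"
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}       # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT-style rule: exploit observed value, explore in proportion
    # to the policy prior and inversely to the visit count.
    def score(move_child):
        move, child = move_child
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=score)

def rollout(state, rng):
    # Random playout to a terminal state (plain MCTS evaluation).
    while not is_terminal(state):
        state = state + (rng.choice(legal_moves(state)),)
    return reward(state)

def mcts(root_state, n_simulations=500, seed=0):
    rng = random.Random(seed)
    root = Node(root_state, prior=1.0)
    for _ in range(n_simulations):
        node, path = root, [root]
        # Selection: walk down via PUCT until reaching an unexpanded node.
        while node.children:
            _, node = select_child(node)
            path.append(node)
        # Expansion: add children weighted by the policy prior.
        if not is_terminal(node.state):
            for move, p in policy_prior(node.state).items():
                node.children[move] = Node(node.state + (move,), p)
        # Evaluation + backpropagation.
        value = rollout(node.state, rng)
        for n in path:
            n.visits += 1
            n.value_sum += value
    # The most-visited root move is the recommendation.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

print(mcts(()))  # recommended first move
```

Note the division of labour: the prior steers the search towards promising moves early, while the accumulated search statistics can still overrule a misleading prior, which mirrors how the policy network and MCTS validate each other in the description above.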
The shift towards optimization
I believe that with Go succumbing to the AIs of this world, our perception has shifted towards ‘computer programs can do anything.’ After all, we’re used to asking our phones for directions, open to the prospect of self-driving cars, and have witnessed computers beating humans at thinking games. It’s time the business world kept up.
What are your thoughts on artificial intelligence in business? Do you see potential for self-learning computer programs in business improvement?
Bonus: Take a look at this 15-minute summary of match 5 from commentators Michael Redmond and Chris Garlock.