The recent win by Google’s AlphaGo computer program in a five-game Go match against Lee Sedol, the world’s top player for over a decade, made headlines around the world.
And once you look past some of the more superficial tabloid predictions of imminent robot enslavement, you’ll find a number of intelligent and fascinating accounts detailing exactly why the event represents something of a technology landmark.
It’s worth digging into Google’s blog post for the background, because this was not just another case of a computer learning how to win a board game. Nor was it simply a resumption of competition between man and machine following our previous defeats in chess (Kasparov against Deep Blue) and in Jeopardy (against IBM’s Watson).
Complex Choices
Instead, the choice of game here is significant. Go is an ancient game with more possible legal board positions than there are atoms in the observable universe. In fact, that number was only calculated precisely in 2016, some 2,500 years after the game was invented. Why is this important? Because it means that a computer cannot possibly find the best options simply by brute-force guessing of combinations. Building a system that indexes every possible position and then relies on the computer to look up the best move each time is simply not possible.
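To get a rough feel for the scale, here’s a quick back-of-the-envelope sketch in Python. The figures are commonly cited approximations rather than exact values, and the “quintillion positions per second” machine is pure invention for the sake of the comparison:

```python
# Back-of-the-envelope comparison: why brute force fails for Go.
# Figures are commonly cited approximations, not exact values.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80   # rough estimate
LEGAL_GO_POSITIONS = 2.08 * 10 ** 170     # calculated precisely in 2016

# Even a wildly optimistic machine evaluating a quintillion (1e18)
# positions per second would barely scratch the surface.
positions_per_second = 10 ** 18
seconds_per_year = 60 * 60 * 24 * 365
positions_per_year = positions_per_second * seconds_per_year

print(f"Legal positions vs atoms in the universe: {LEGAL_GO_POSITIONS / ATOMS_IN_OBSERVABLE_UNIVERSE:.1e}x more")
print(f"Years needed to enumerate them all: {LEGAL_GO_POSITIONS / positions_per_year:.1e}")
```

However fast the hardware gets, the numbers simply never come close.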
Instead, a successful Go player needs to use something that we can best understand as intuition. A human has to be able to act on no more than a feeling that one move is better than another – something it was generally accepted that computers couldn’t do.
Turns out general opinion was wrong.
Self-Taught
By ‘simply’ learning from 30 million moves played by human experts, the program showed that it could predict which move a human would make 57% of the time. But this would only go so far. To win, the AlphaGo algorithm needed to learn new strategies – by itself.
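As a rough illustration of that two-stage idea – imitate the experts first, then improve through self-play – here’s a minimal sketch in Python using PyTorch. This is not DeepMind’s code: the network size, data handling and reward scheme are all simplified assumptions, just enough to show the shape of the approach.

```python
# A minimal sketch (not DeepMind's code) of the two-stage idea behind a
# Go policy network: first imitate expert moves, then improve via self-play.
import torch
import torch.nn as nn

BOARD_POINTS = 19 * 19  # one output per possible move on the board

policy_net = nn.Sequential(
    nn.Linear(BOARD_POINTS, 256), nn.ReLU(),
    nn.Linear(256, BOARD_POINTS),  # logits over candidate moves
)
optimizer = torch.optim.SGD(policy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def supervised_step(position, expert_move):
    """Stage 1: learn to predict the move a human expert actually played."""
    logits = policy_net(position)
    loss = loss_fn(logits, expert_move)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def self_play_step(position, sampled_move, game_outcome):
    """Stage 2 (REINFORCE-style): make moves from won games more likely."""
    logits = policy_net(position)
    log_prob = torch.log_softmax(logits, dim=-1)[0, sampled_move]
    loss = -game_outcome * log_prob   # outcome: +1 for a win, -1 for a loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The crucial point is that stage 2 needs no human teacher at all: the only feedback is whether the game was eventually won or lost.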
And it’s here that the outcome was stunning. During the games (live streamed online to massive audiences), the computer made certain moves that made no sense to Go experts. And yet (for the most part) they worked. As one commentator mentioned, this was, at some level, an alien intelligence learning to play the game by itself. And as another put it:
“…as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”
When it comes to AI, it’s particularly important to rein in the hyperbole. Playing Go in a way that’s at times unrecognisable to humans is hardly Skynet. But it’s fascinating to think that the program reached a level of expertise that surpassed the best human player in a way that no one really fully understands. You can’t point to where it’s better, because the program teaches itself to improve incrementally as a consequence of billions of tiny adjustments made automatically.
Neural Networks: Patience Pays Off
The success of computer over man came from a combination of different, but complementary, forms of AI – not least of which were Neural Networks. After reading a little about the godfather of Deep Learning, Geoff Hinton, and listening to another excellent podcast from Andreessen Horowitz, it turns out that the neural-network approach at the heart of AlphaGo was an AI method that fellow scientists ridiculed as a failure for a number of years, particularly in the 1980s.
It turns out that the concept was just too far ahead of its time. As Chris Dixon points out in ‘What’s Next In Computing?’, every significant new technology has a gestation period. But that often doesn’t sit easily when the hype cycle is pointing towards success being just around the corner. And as the bubble bursts, the impact of the delays on the progress of innovation is usually negative.
Nowhere has that been seen so clearly as within the field of Artificial Intelligence. Indeed, the promise has exceeded the reality so often that the industry has its own phrase for it – AI Winters – periods when both funding and interest fall off a cliff. Turns out that some complex things are, well, complex (as well as highly dependent on other pieces of the ecosystem falling into place). In the UK, for example, the Lighthill Report of 1973 criticised the utter failure of AI to achieve its grandiose objectives, leading to university funding being slashed and work being restricted to a few key centres (including my home city, Edinburgh).
Expert Systems: Data Triumphs
Thankfully, the work did continue, thanks to a few believers such as Hinton. And whilst the evolution of AI research and progress is far beyond the scope of this blog post, it’s interesting to see how things evolved. At one stage, Expert Systems were seen as the future (check out this talk by Richard Susskind for how this applied in the context of legal systems).
To simplify, this is a method by which you find a highly knowledgeable human in a specific field, ask them as many questions as possible, compile the answers into a decision tree and then hope that the computer can generate a similar result to the expert’s when you ask it a question. The only problem is that, in practice, this doesn’t really work too well.
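To make that concrete, here’s a toy sketch of the idea in Python. The scenario, rules and thresholds are invented purely for illustration – a hypothetical lending question with the ‘expert’s’ answers hard-coded as a chain of rules:

```python
# A toy illustration (not a real product) of the expert-systems approach:
# a human expert's answers hard-coded into a fixed chain of rules.
def loan_advice(income, has_defaulted, years_employed):
    """Mimic an expert's decision tree for a hypothetical lending question."""
    if has_defaulted:
        return "decline"
    if income < 20_000:
        return "decline"
    if years_employed < 2:
        return "refer to a human underwriter"
    return "approve"

# Every branch had to be elicited from the expert up front, so the system
# can only ever be as good (and as brittle) as the rules it was given.
print(loan_advice(income=45_000, has_defaulted=False, years_employed=5))
```

The moment the world presents a case the expert never anticipated, the tree has nothing useful to say – which is a big part of why the approach struggled.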
But thankfully, those other missing pieces of the ecosystem are now falling into place. With massive computation, bandwidth and memory available at extremely low cost these days, those barriers have now fallen. That has led to the evolution of Neural Networks from a theoretical, heavily criticised approach into something altogether more respected and valuable.
Welcome to self-learning algorithms – algorithms that (in this case) teach themselves how to play Go better – but without asking a Go expert.
Neural Networks aren’t new in any way. They started as a mathematical theory of the brain but didn’t make much progress for 40 years. With those barriers gone, though, we’re now seeing layers of neural networks being stacked on top of each other. And AI is improving significantly not because the algorithms themselves are getting better, but because we’re now able to push increasing volumes of data into models which can, in turn, use that data to build a better picture of what the answer should be.
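Here’s a minimal from-scratch sketch of that idea in Python – two layers of weights stacked on top of each other, improved by many tiny, automatic adjustments driven by data. The shapes, learning rate and data are purely illustrative:

```python
# A minimal from-scratch sketch: layers stacked on top of each other,
# improved by many tiny, automatic adjustments driven by data.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 8))   # first layer of weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # second layer stacked on top

def forward(x):
    h = np.tanh(x @ W1)          # hidden representation learned from data
    return h, h @ W2             # prediction

def train_step(x, target, lr=0.01):
    """One of the 'billions of tiny adjustments': nudge every weight slightly
    in the direction that reduces the prediction error (gradient descent)."""
    global W1, W2
    h, pred = forward(x)
    err = pred - target                  # how wrong were we?
    grad_W2 = h.T @ err
    grad_h = err @ W2.T * (1 - h ** 2)   # backpropagate through tanh
    grad_W1 = x.T @ grad_h
    W2 -= lr * grad_W2                   # tiny adjustment
    W1 -= lr * grad_W1                   # tiny adjustment

# More data pushed through the same model means more (and better) adjustments.
x = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))
for _ in range(1000):
    train_step(x, y)
```

No single adjustment means anything on its own; it’s only the accumulation of millions of them, driven by the data, that produces the expertise – which is exactly why you can’t point to the line of code where the system ‘got better’.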
Learning By Intuition & Iteration
Instead of trying to capture and codify all existing knowledge, deep learning techniques are using data to create better results. It’s an approach that is scary to some people because it’s inherently un-debuggable. If you get the wrong result, you can’t simply check out each entry in a decision tree and fix the one that’s wrong.
But it’s got legs, particularly in the development of self-driving cars. It means we don’t need to paint roads with special markings or maintain a huge global database of all roads and cars. Instead, self-driving cars are going to use a collection of these machine learning techniques and algorithms to make the best guesses about how to drive, each and every day.
Learn, iterate and improve. Scary? It shouldn’t be – because that’s exactly what we do as humans.
It’s a huge and fascinating field but the AlphaGo victory feels like an important bridge has been crossed, an inflection point when popular awareness coincided with a genuine step forward in the possibilities that the technology affords.
And of course, Google’s ultimate goal has never been simply to be better at winning games. Unless, that is, you define a game as any challenge that is extremely difficult to beat. If so, then bring on the games – disease analysis, climate change modelling, the list is endless. When it comes to these contests, we might not expect them to be streamed live online. But as they increasingly become games that we have no option but to win, I’m pretty certain the interest will be there.