One small move for a computer; one giant leap for computerkind.
Google announced yesterday that artificial intelligence (AI) software it created, hosted on a network of computers, has done what no software ever could: It has beaten a top player at Go. So what, you may wonder? Computers have already roundly defeated the world's best players at chess, checkers, Othello, Scrabble, and--most entertainingly--Jeopardy! What's the big deal about a computer beating a human at yet another board game?
It's a very big deal. In chess, for instance, IBM's Deep Blue beat Garry Kasparov through "brute force"--by rapidly considering every possible outcome of every possible move on the board and calculating which gave it the best odds of success.
But brute-force calculation does not work for Go, because the number of calculations required is too large even for the fastest computers. The rules of this 2,500-year-old game are far simpler than those of chess--each move consists of placing one stone at any open intersection on a 19 x 19 grid, with the object of surrounding and thus capturing your opponent's stones while avoiding the capture of your own--but those rules lead to a vastly more complex game. While chess presents players with an average of 35 possible moves at any given point, Go presents an average of 250, each of which leads to another 250, and so on.
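To get a feel for those numbers, here is a short illustrative sketch (the branching factors of 35 and 250 are the averages cited above; the depths and function name are arbitrary) of how quickly a brute-force look-ahead blows up in each game:

```python
# Illustrative only: average branching factors cited for chess and Go.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions_to_search(branching: int, depth: int) -> int:
    """Rough count of positions a brute-force search must examine
    to look `depth` moves ahead."""
    return branching ** depth

for depth in (2, 4, 6):
    chess = positions_to_search(CHESS_BRANCHING, depth)
    go = positions_to_search(GO_BRANCHING, depth)
    print(f"depth {depth}: chess ~{chess:,} vs. Go ~{go:,} "
          f"({go // chess:,}x more)")
```

Even six moves deep, the gap between the two games runs to five orders of magnitude, which is why the Deep Blue approach hits a wall in Go.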
Thus, successful human players do not win by calculating the outcomes of their various possible moves, as they would in chess. Instead, they seem to intuit the next move, using pattern recognition, something that humans have always done better than computers. (This is why the person standing next to you can nearly always understand what you're saying better than Siri can.)
I saw this dynamic in action at an AI conference I attended some years ago. One after another, I watched rueful high-level human players get beaten--and sometimes trampled--by computers at chess, checkers, Othello, and Scrabble. Then the Go champion stepped up. Though she was not a world champion or anything close to one, she easily beat the computer game after game. Finally she offered it a multi-stone handicap, and the computer spread a star-shaped pattern of black stones across the otherwise empty board--a striking visual expression of her disdain for the machine, as a journalist standing next to me remarked. She went on to win anyway.
Can AlphaGo dominate?
But that was then. This month, Google's AI player, named AlphaGo, beat European Go champion Fan Hui in five out of five games in a private match. Next, Google's researchers hope AlphaGo can beat Lee Sedol, who has won more international Go titles than any other player in the past decade, in a public match. Whether or not AlphaGo beats Sedol, its achievements so far are an impressive demonstration of AI's power to reproduce the learning process of the human brain, only much, much faster. Google's engineers first trained AlphaGo on moves from countless high-level Go games, some 30 million moves in all. That alone would not have let it beat top human players, so they took the crucial next step: having AlphaGo play millions of games against itself, learning as it went.
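As a rough intuition for that self-play step--emphatically a toy sketch, not Google's actual system--here is a tabular self-play learner for a far simpler game, Nim, in which players alternately take one to three stones and whoever takes the last stone wins. All names and parameters below are illustrative. Playing only against itself, the program learns which pile sizes are good or bad for the player about to move:

```python
import random

random.seed(0)
value = {}  # value[stones] ~ estimated win chance for the player to move

def best_move(stones, explore=0.0):
    """Pick a move: usually greedy, sometimes random (exploration)."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if explore and random.random() < explore:
        return random.choice(moves)
    # A move is good if it leaves the opponent a low-value position.
    return min(moves, key=lambda m: value.get(stones - m, 0.5))

def self_play(stones=10, episodes=20_000, lr=0.1):
    for _ in range(episodes):
        s, history = stones, []
        while s > 0:
            history.append(s)           # record each position faced
            s -= best_move(s, explore=0.3)
        # The player who moved last took the final stone and won;
        # walk back through the game, alternating win/loss outcomes.
        outcome = 1.0
        for pos in reversed(history):
            v = value.get(pos, 0.5)
            value[pos] = v + lr * (outcome - v)
            outcome = 1.0 - outcome

self_play()
print({s: round(value[s], 2) for s in sorted(value)})
```

In this variant of Nim, piles that are multiples of 4 are theoretically lost for the player to move, and after enough self-play games the learned values for piles 4 and 8 sink well below the rest--knowledge no one programmed in directly, which is the essence of what the self-play phase contributes.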
That ability to learn, recognize patterns, and make intuitive leaps is what finally beat a human champion, and it is why computer scientists find this development so exciting. The same kind of learning could help researchers make scientific breakthroughs, or create strategies for investing--or for winning wars. Not surprisingly, it is this same learning ability that frightens Elon Musk and others who have publicly expressed concern about the long-term implications of AI.
You may love AI, or you may fear it. Either way, it just got a whole lot closer to thinking like you do.