Editor's Note: This post has been updated to include Carnegie Mellon University's response.
In previous columns, I've pointed out that what passes for "breakthroughs" in Artificial Intelligence (aka AI) is a repackaging of decades-old technology: pattern recognition, rule-based programming and neural nets.
That's why, when confronted with the video above, I decided to "drill down" (as they say in the boardroom) into what's probably the most spectacular of the examples: the poker-playing program from Carnegie Mellon University that bested professionals in tournament play.
I started with the University's original press release. It briefly explained what the program, called Libratus, had accomplished, which was factual enough. Then, however, the press release started sounding, well, a bit like marketing hype.
According to the press release, the feat was an "historic win" and a "new milestone in artificial intelligence" because it involved bluffing, a seemingly human-like behavior. As one of the programmers put it:
"The best AI's ability to do strategic reasoning with imperfect information has now surpassed that of the best humans."
Uh, really? Curiously and significantly, the programmers were vague on an important question: was Libratus learning, on its own, how to play better, or were the programmers reprogramming Libratus each night of the tournament so that it could play better?
The question is important because "strategic reasoning" implies the ability to creatively adapt to constantly changing circumstances. As Wired wryly pointed out:
"Because the machine's play changes so distinctly from day to day, filling any holes in its game, its human opponents are sure that those Carnegie Mellon researchers are altering its behavior as the match goes on. Tuomas Sandholm, the Carnegie Mellon professor who oversees Libratus, declines to say whether or not these tweaks are happening."
The key phrase here is "declines to say." If Libratus were indeed adapting by itself, the programmers who built it would be trumpeting that fact. Indeed, they'd be shoo-ins for the Turing Award. That the programmers "declined to say" almost certainly means that they--the humans--were doing the creative tweaking, in which case Libratus is just another computer program rather than something truly "historic."
Here's a thought experiment to assess whether Libratus is "intelligent" in any meaningful way. Suppose, during the tournament, an unexpected announcement were made that "for the next hand, 'suicide cards' will be wild." Every human player would immediately use "strategic reasoning" to adapt to the new rules. Libratus, however, would be useless until reprogrammed. By a human.
This is an essential point because the creators of Libratus are claiming in the press release that the technology may someday be useful for "business strategy, negotiation, cybersecurity, physical security, military applications, strategic pricing, finance, auctions, political campaigns and medical treatment planning."
But is that likely to be true? Unlike poker, where the rules are the rules, those domains are open-ended. They don't have set rules; they're full of what are correctly called "game changers."
This is not to pooh-pooh the very real accomplishment of creating a program that plays poker at a championship level.
However, the fact that Libratus can win poker tournaments is no indication that AI is any closer to being able to cope with complex, open-ended situations--at least not without humans to keep tweaking the program to match the changing nature of the real world.
UPDATE: In a written response, Carnegie Mellon University professor of computer science Tuomas Sandholm takes exception to the argument raised in this column, calling it "nonsensical." He notes that all algorithms were revealed after the match this past January, pointing to this press release and this lecture at the 2017 International Joint Conference on Artificial Intelligence in Melbourne. "We have described in detail how the AI changes itself using its algorithmic self-improver module," says Sandholm, who along with PhD student Noam Brown created Libratus. "It was just during the match in January that we refused to say how Libratus works." The Carnegie Mellon team rejects the notion that Libratus would be "useless" until reprogrammed by humans. "Libratus's strategy was not programmed by a human in the first place," Sandholm says. "The strategy was computed algorithmically." He further suggests the technology could easily adjust to open-ended situations in a variety of fields: "We would like to point out that heads-up no-limit Texas hold'em is very complex. A player can face 10^161 different situations, which is more situations than the number of atoms in the universe."
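For readers who want to sanity-check the scale of Sandholm's 10^161 figure, here is a minimal sketch in Python. It assumes the commonly cited estimate of roughly 10^80 atoms in the observable universe, which is not a figure from the column itself.

```python
# Sandholm's claim: heads-up no-limit Texas hold'em presents ~10^161
# distinct situations a player can face.
poker_situations = 10**161

# Assumption: a commonly cited rough estimate for the number of atoms
# in the observable universe is ~10^80.
atoms_in_universe = 10**80

# The game's situation count exceeds the atom estimate by about
# 81 orders of magnitude.
ratio_exponent = 161 - 80
print(f"Poker situations exceed the atom estimate by ~10^{ratio_exponent}")
assert poker_situations > atoms_in_universe
```

Python's arbitrary-precision integers make this comparison exact rather than a floating-point approximation, which is convenient for numbers this large.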