Correction: The original version of this column was based on inaccurate reporting and hype that has since been debunked. The column misrepresented the outcome of a Facebook chatbot research project and the company's reason for ending it. That bots develop their own shorthand is expected by experts, not unanticipated. The project was ended not out of alarm but because the researchers wanted the bots to communicate in a way comprehensible to humans, not in bot shorthand. In addition, an earlier version incorrectly said that the reports on this research broke after Facebook CEO Mark Zuckerberg and Tesla CEO Elon Musk engaged in a public debate over A.I.; the debate came after the news reports.
The golden egg that won the war.
World War II was about deception. Each side encrypted its communications to keep the enemy in the dark about strategy and tactics. Were it not for the famous British mathematician Alan Turing, credited with building the machine that broke the Germans' Enigma code, the outcome of the war might have been completely different. In fact, Churchill called Turing and his team "the geese that laid the golden eggs," citing their work as the single biggest contribution to the Allied victory.
Turing established the most basic architecture of computing, the Turing machine, which has defined how computers work for nearly a century--they are programmed to follow a predetermined set of rules, or code. The computers don't come up with the rules. They just execute them much faster than we could.
If a computer made a mistake, it was due to a human having incorrectly coded the computer's response. It was all very linear and predictable. Think of the classic "if this, then that" logic that every set of interactions is built on.
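That "if this, then that" model can be sketched in a few lines of Python. The thermostat rules below are a hypothetical example, not anything from Facebook's research; the point is that every behavior is spelled out in advance by a human.

```python
# A classic rule-based "if this, then that" program: every possible
# response is written out ahead of time by a programmer.
def thermostat(temperature_f):
    # The machine never invents a rule; it only executes these.
    if temperature_f < 65:
        return "heat on"
    elif temperature_f > 75:
        return "cooling on"
    else:
        return "idle"

print(thermostat(60))  # heat on
print(thermostat(70))  # idle
```

If this program ever does something surprising, the explanation is always the same: a human wrote the wrong rule. The program itself cannot deviate from its code.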
But that may have just changed in a frighteningly dramatic way that recently came to light in A.I. work being done by Facebook.
But before we get into what that was, a little context.
A war of words.
Elon Musk and Mark Zuckerberg recently engaged in a bit of digital sparring. In a Facebook Live broadcast, Zuckerberg said about A.I., "I have pretty strong opinions on this. I am optimistic ... and I think people who are naysayers and try to drum up these doomsday scenarios -- I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible." Musk later responded, via Twitter, saying that Zuckerberg's "understanding of the subject is limited."
Musk and many other well-known figures in tech and science, such as Stephen Hawking and Bill Gates, have been very vocal about their concern that A.I. could quickly evolve to pose the single biggest threat to humanity that we have ever faced.
Part of that threat is based on A.I.'s ability to learn and evolve quickly simply by observing patterns in machines, the environment, and even our behavior. Just look at how Google DeepMind's AlphaGo recently beat the world's best Go player. Unlike chess, which offers a vast but tractable number of possible moves, Go has more potential board positions than there are atoms in the known universe.
Winning at Go is about more than brute force and rigid machine logic. It's also about some degree of intelligence that involves gut and intuition. While those are qualities we attribute only to humans, it may well turn out that a sufficiently advanced A.I. can exhibit behaviors that at least look like they are based on more than just a given set of rules and at worst (or is that "at best"?) actually exhibit all the hallmarks of human intuition.
What this means is that A.I. can evolve in ways that are invisible to us. In other words, we don't necessarily have any idea why a program or a device powered by A.I. is doing what it's doing, because its decisions and actions go far beyond what it was told to do.
That's exactly what Facebook's bots did in a way that so spooked engineers they shut them down.
Can you hear me now?
The bots, which were used to conduct dialogues with humans in a chat, were trained to communicate in English, both with people and with one another. Yes, the bots actually talk to each other, collaborating to solve problems faster by creating new rules.
However, in a turn of events nobody had anticipated, the bots suddenly started to communicate with each other in their own language--a language they made up on the fly and entirely on their own.
The science-fiction scenarios this conjures up are right out of an episode of Star Trek, but this wasn't a Hollywood script being played out. It was a real-world example of A.I. evolving faster than its human creators could keep pace with.
To call the A.I. bots deceptive, or to imply that they had human-like intentions, is certainly a stretch, but at the very least they were finding tools to deal with their environment that no human could comprehend.
It's ironic that the well-publicized debate between Musk and Zuckerberg about the risks of A.I. followed so closely on the heels of all of this. But it does shed light on how quickly A.I. can extend beyond our reach to control or comprehend it.
It's also poetic to consider that while Turing may be the father of modern computing, he is also known for a quote that may prove to be much more prescient: "A computer would deserve to be called intelligent if it could deceive a human into believing that it was human."
Were we just deceived?