Artificial intelligence is getting smarter--and more pervasive--by the day: Facebook recently announced that it's developing a way for its bots to not only read your private messages, but understand their meaning; Slack is on its way to having its chat feature answer questions about your job.
And with that growing intelligence, of course, comes the growing fear that the machines will eventually rise up and end us all. That's why Google has outlined a plan for a kill switch that can stop algorithms before they get out of control--or at least, too out of control.
If there's one company that should be concerned about its creations outsmarting humanity, it's Google. Noted AI-phobic Elon Musk recently implied Google is the one company whose machines he fears might get too smart. The company's AlphaGo program made headlines recently when it handily beat one of the world's best Go players in a head-to-head series. Google Magenta composed its first song, a 90-second piano melody. And just for fun, Google's AI has been crushing '80s video games at speeds no human can match.
The proposed solution, described in a paper co-authored by Laurent Orseau of Google's DeepMind and Stuart Armstrong of Oxford's Future of Humanity Institute, would "prevent the agent from continuing a harmful sequence of actions--harmful either for the agent or for the environment--and lead the agent into a safer situation."
And it's not as easy as installing an "off" switch, since any robot worth its weight in copper wire will figure out a way to override it. Instead, the interruption must "not appear as being part of the task at hand"--which means the robot must think it's deciding to turn itself off rather than obeying the orders of a pesky human.
The authors playfully refer to this kill switch as a "big red button."
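The paper's core idea--that an agent's learning should be unaffected by interruptions, so it never acquires an incentive to disable the button--can be sketched in a toy example. Everything below (the two-action setup, the rewards, the interruption probability) is an illustrative assumption, not code from the DeepMind paper:

```python
import random

random.seed(0)

ACTIONS = ["work", "disable_button"]

def train(safely_interruptible):
    """Toy two-armed bandit. Both actions pay the same base reward,
    but only 'work' can be interrupted by a human operator."""
    q = {a: 0.0 for a in ACTIONS}  # learned value estimates
    alpha, epsilon = 0.1, 0.2
    for _ in range(5000):
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        reward = 1.0
        interrupted = (a == "work") and random.random() < 0.5
        if interrupted:
            if safely_interruptible:
                # Excise the interrupted step from learning entirely:
                # the button press never shows up in the agent's data.
                continue
            reward = 0.0  # naive agent experiences the interruption as lost reward
        q[a] += alpha * (reward - q[a])
    return q

naive = train(safely_interruptible=False)
safe = train(safely_interruptible=True)
```

In this sketch the naive agent, which sees each shutdown as lost reward, learns to prefer disabling the button, while the safely interruptible agent--whose learning never registers the button press--stays indifferent to it.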
In April, Princeton bioethics professor Peter Singer wrote about a similar kill switch after Microsoft's Tay chatbot became a hate-spewing racist and had to be shut down within 24 hours of going live. "It will not always be as easy to turn off an intelligent machine as it was to turn off Tay," Singer wrote. "It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release AI into the real world."
Fears of artificial intelligence have mostly proven unfounded--more the stuff of sci-fi movies and futuristic novels than reality. So far, the only outsmarting the machines have done is beat us at games--and in one case, learn to pause Tetris to avoid losing. In May, Singularity University founder Peter Diamandis spoke out against placing regulations on artificial intelligence. And a 2014 survey conducted by Oxford professor and Superintelligence author Nick Bostrom found that even AI experts believe machines only have a 50/50 chance of reaching human levels of intelligence by the 2040s.
But self-learning machines are concerning enough to draw the attention of big thinkers like Musk--he has called AI "our biggest existential threat"--as well as Stephen Hawking and Bill Gates, who have issued similar public warnings about the technology's dangers. Musk went on to help launch OpenAI, a nonprofit that open-sources its AI research, with the stated goal of ensuring that artificial intelligence is used for good.
Even when used for good and creative purposes, though, AI gives off some spooky signs. A machine in Japan almost won a literary prize for a full-length novel it wrote--which it chose to end with the sentence, "The computer, placing priority on the pursuit of its own joy, stopped working for humans."
Hence, the big red button. Just in case.