"This really is the scariest thing to me," said Elon Musk, referring to artificial intelligence in his keynote for thirty one U.S. governors in Rhode Island. He was speaking at the recent summer meeting of the National Association of Governors. "On the artificial intelligence front, I have exposure to the most cutting-edge artificial intelligence. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."
Musk tried to impress upon the governors just how critical a role government can, and should, play in preventing artificial intelligence from wreaking havoc on civilization. "One of the roles of government is to ensure the public good, and that things dangerous to the public are addressed," he urged. "AI is a fundamental risk to the existence of civilization in a way that car accidents, airplane crashes, drugs or bad food were not," he said.
Artificial intelligence is smart enough to kill.
To drive the point home, he offered several examples of technology that seems harmless now but has serious destructive potential. Take AlphaGo, the AI that can now "play the world's top 50 Go players, simultaneously, and crush them all." He also pointed to robotics, where "you can see robots that can learn to walk from nothing within hours, way faster than any biological being."
His final scenario was the scariest of all.
Forget super-smart strategic intelligence like AlphaGo and superhuman robotics for a moment. Musk suggested that fake news and spoofed email would be enough to start a war--and that's well within the reach of a smart chatbot.
He riffed on a hypothetical example based on the real Malaysian airliner that was shot down near the Ukrainian-Russian border. "If you had an AI where the AI's goal was to maximize the value of a portfolio of stocks, one of the ways to maximize the value would be to go long on defense, short on consumer, start a war," he said. "Hack into the Malaysian Airlines aircraft routing server, route it over a war zone, then send an anonymous tip that an enemy aircraft is flying overhead right now." Boom.
There was an appreciative silence in the room.
Musk insists the time is now for the government to take the reins.
He shared a simple plan that any concerned governing body could use to systematically help reduce the threat of AI.
1. Learn about it.
"The first order of business would be to learn as much as possible, understand the nature of the issues, and the remarkable achievements of artificial intelligence," he said.
2. Call for regulation sooner, rather than later.
"AI is a weird case where we need to be proactive in regulation rather than reactive," he said. "By the time we are reactive in regulation, it's too late."
He compared the need for an AI regulatory body to the need for the FAA. While the supervision can be restrictive at times, there's a role for regulators in keeping planes flying and the industry safe. Why? Because competitive market forces push good companies to release technology before it's safe. (And given some of the recalls we all know about, that still happens.)
3. Regulate the release of AI after certain safety standards are met.
"Good companies are racing to build AI--they have to build AI or they will not be competitive," he says. So rather than believing companies won't react to the market incentives, he urges that the market--government in particular--should put regulation in place. And sure, he admits some companies will call it stifling, just like regulating the airways, shipping, and food can be called stifling at times. But it's safer, and it's worth it, he says. "Once there is awareness, people will be extremely afraid, as they should be," he concludes.