Elon Musk has emerged as a leading voice warning of the potential dangers of artificial intelligence, going so far as to call it the “biggest existential threat” to humanity.
The Tesla and SpaceX CEO has quipped that an AI spam filter program could theoretically determine that the best way to get rid of spam is to get rid of the humans that create it. Last month, his signature appeared on an open letter to world military powers warning that weapons technology using artificial intelligence would almost certainly result in a global arms race.
In a panel discussion in Silicon Valley Saturday, however, Musk took a more measured tone on AI, encouraging members of the audience to think about how to prevent it from progressing along a potentially apocalyptic path.
“It’s definitely going to happen. So if it’s going to happen, what’s the best way for it to happen?” he asked as part of a panel on super intelligence taking place on the Google headquarters campus.
Those developing AI need to avoid being cavalier, he said. And, as he has often said previously, the general public needs to be aware that AI is developing faster than many believe.
“Just because somebody hasn’t experienced or isn’t deeply familiar with an advanced technology doesn’t mean it doesn’t exist,” he said.
One of the best ways to prevent artificial intelligence from harming humans might be to shape the concept of AI so that harm seems antithetical to the very definition of the technology, suggested Stuart Russell, a computer science professor at the University of California, Berkeley.
Russell, who was on the panel, gave bridges as an example: The idea that a bridge is supposed to be structurally sound is part of the concept of a bridge.
“It should be like that for AI,” he said. Everyone who works with artificial intelligence should view the purpose of the technology as being to help humanity.
The panelists touched upon disastrous scenarios such as an arms race in AI weapons, but emphasized that such situations are not immediate concerns. Russell said he thought media reports had made the potential negative consequences of AI seem all but inevitable, and that references to the Terminator film franchise in descriptions of AI were misleading.
He offered climate change as an analogy to the forecasted negative outcomes of AI: What if people had discussed the potential impacts of using fossil fuels during the 19th century when burning coal was a relatively new practice? “Then that would have been a good time to start thinking about the risk of global climate change. Now it may well be too late,” he said.
The point, said Musk, is to prepare now rather than lapse into the human habit of being reactive and waiting until a tragedy occurs before forming a regulatory body.
“This is where there’s cases where we should be proactive,” he said.
The panel Saturday was part of the Effective Altruism (EA) Global conference, which took place over the weekend. Effective altruism has been called “altruism for nerds,” and revolves around the idea that quantitative approaches can be used to determine the best possible use of resources – time and money in particular – to maximize the good those resources can do for humanity on a global scale.