Stephen Hawking once said that "artificial intelligence could spell the end of the human race." This week, he announced plans to play a part in making sure that doesn't happen.

The famed physicist opened an AI research center at Cambridge University on Wednesday, according to the Guardian. The institute will bring together researchers from a range of fields to address some of the biggest questions about the technology.

Funded by a $12.3 million grant from the Leverhulme Trust, the Leverhulme Centre for the Future of Intelligence is a collaboration between Cambridge, Oxford, Imperial College London, and the University of California, Berkeley. The researchers will work on projects related to how AI can be used for good, how it should be regulated, and its long-term implications across many disciplines.

Hawking clearly believes in AI's potential. "Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one--industrialization," he said at the center's opening, according to the Guardian. "And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization."

But, as he has done in the past, Hawking also expressed serious concerns about how AI might be used. "Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many," he said. "It will bring disruption to our economy. And in the future, AI could develop a will of its own--a will that is in conflict with ours."

In January 2015, Hawking, Elon Musk, Bill Gates and dozens of other tech experts penned an open letter warning about the dangers of AI. In it, they wrote that the technology can be greatly beneficial, but that humans must remain in full control of it.

Musk, Peter Thiel, Sam Altman and a handful of others pledged $1 billion last year to form OpenAI, a nonprofit that publishes its AI research openly with the aim of fueling the technology's growth in a safe way. Earlier this year, the organization revealed it was having its computers read months' worth of Reddit threads to understand human language and interactions.

Google, which Musk has previously implied is the one company whose AI he fears could become too powerful too quickly, outlined plans earlier this year for a kill switch that would shut down an out-of-control AI system.

And in August, Google parent company Alphabet joined Facebook, Amazon, Microsoft, and IBM to announce the Partnership on AI, a joint effort to establish a code of ethics for the technology that could be applied to areas such as weaponry and job automation.

Hawking's new institute grew out of Cambridge's Centre for the Study of Existential Risk, which studies the gravest threats to humanity's survival, including global warming, disease, and overpopulation.

Published on: Oct 21, 2016