With all the terrifying Terminator talk around artificial intelligence, you would hope someone would come up with a viable plan for keeping humanity safe from the potential dangers of smart machines. Fortunately, The New York Times reports, it appears that's now happening.

A group including Google parent company Alphabet, Amazon, Facebook, IBM, and Microsoft is working on a standard of ethics for AI in the face of concerns about job automation and AI-enabled weapons.

The effort is still a little murky but appears to be coming into focus, per the report in the Times, which describes "a tentative plan to announce the new organization in the middle of September."

To speculate about what those efforts might look like, a good place to start may be a Stanford study referenced by the Times, called the One Hundred Year Study on Artificial Intelligence. The study involves reporting on the impact of AI every five years for the next century. The effort is no secret, and a report explaining it is available online.

"The Stanford report attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including health care, education, entertainment and employment, but specifically do not look at the issue of warfare. They said that military A.I. applications were outside their current scope and expertise, but they did not rule out focusing on weapons in the future," reports the Times.

The report presents three policy recommendations for preparing the world for AI:

"1. Define a path toward accruing technical expertise in AI at all levels of government. Effective governance requires more experts who understand and can analyze the interactions between AI technologies, programmatic objectives, and overall societal values.

2. Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.

3. Increase public and private funding for interdisciplinary studies of the societal impacts of AI."

Basically: train government officials to understand AI technologies before the law and the technology fall out of step, make it easy for academics to study AI, and fund more research. The machines are rising whether we like it or not, so we'd better figure out what that means.