In the future, even your friendly household robot might need a story before powering down at night.
Researchers at the Georgia Institute of Technology say they have come up with a solution to the potential problem of unruly robots turning on humans. It may be as simple as turning robots on to reading, Computerworld reports. Why? Reading stories could be the best way to teach them right from wrong.
"We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose," Mark Riedl, associate professor at Georgia Tech's School of Interactive Computing, told Computerworld.
For the record, there are no reported cases--yet!--of psychotic robots. But for anyone who's skeptical that machines could ever actually turn against us, it's worth noting that renowned physicist Stephen Hawking has predicted that robots could overtake humans in the next 100 years. If that still sounds too paranoid, here's a more near-term scenario that could lead to robots run amok, as described by Computerworld.
"[A] human might tell a robotic assistant to go to the pharmacy and bring back a prescription as soon as possible. The robot could figure that the quickest way to get the prescription is to rob the pharmacy, taking the needed medicine and running. However, by reading stories and learning about appropriate human behavior, the robot would instead know to go to the pharmacy, wait in line and not tarry on the way home."
Reading is not the only way humans plan to prevent unintended harmful acts committed by otherwise friendly robot assistants. Researchers at Tufts University's Human-Robot Interaction Lab have already taught robots how to question certain commands that would have negative consequences--like walking off the edge of a table. Robots can even change their minds after initially refusing a direct order, provided a human can offer a logical reason as to why the robot should override its first response.
The most efficient way to give machines deep capabilities of moral reasoning, however, may involve reading stories, Riedl told Computerworld.
"We believe that A.I. has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior," he said. "Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual."