Disobedient robots may sound scary, but even scarier is a robot that follows every human command without fail.

Researchers at Tufts University's Human-Robot Interaction Lab have taught robots to question commands that would have negative consequences, like walking off the edge of a table, an important step in the development of artificial intelligence. Robots can even change their minds after initially refusing a direct order, provided a human offers a logical reason why the robot should override its first response.

To see one of Tufts's researchers convince a robot to follow his command after the robot's initial refusal, check out the video below.
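The interaction follows a simple pattern: before acting, the robot checks a command's predicted consequences, refuses if they are harmful, and reconsiders if the human supplies information that neutralizes the risk. As a rough sketch of that pattern in Python (not the Tufts lab's actual system; the class names and the safeguard lookup here are hypothetical):

```python
# A loose illustration of the refuse-then-reconsider pattern described
# above. This is NOT the Tufts lab's actual system; Command, Robot, and
# the safeguard table are all hypothetical.

class Command:
    def __init__(self, action, inferred_risk=None):
        self.action = action
        self.inferred_risk = inferred_risk  # e.g. "fall off the table"

class Robot:
    # Justifications the robot accepts, mapped to the risk they address.
    KNOWN_SAFEGUARDS = {"I will catch you": "fall off the table"}

    def execute(self, command, justification=None):
        # Refuse any command whose predicted outcome is harmful...
        if command.inferred_risk:
            # ...unless the human's justification addresses that exact risk.
            if self.KNOWN_SAFEGUARDS.get(justification) == command.inferred_risk:
                return f"Okay. Performing '{command.action}'."
            return f"Sorry, I cannot do that: I would {command.inferred_risk}."
        return f"Performing '{command.action}'."

robot = Robot()
walk = Command("walk forward", inferred_risk="fall off the table")
print(robot.execute(walk))                                     # initial refusal
print(robot.execute(walk, justification="I will catch you"))  # override accepted
```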

While there are obvious benefits to developing increasingly sophisticated A.I. technology, there are also some not-so-obvious risks. Of particular concern for Tufts professor Matthias Scheutz is what happens when humans begin developing relationships with so-called "social robots" used in elder care and other situations where companionship is one of the primary functions.

"These are contexts where you use robots with vulnerable populations, such as people who might have dementia," Scheutz says. "These people are even more likely to invest emotions and invest in the relationship, even though the robots cannot reciprocate."

French robotics company Aldebaran bills its Pepper robot as "the first humanoid robot designed to live with humans." Originally created for Japanese mobile phone company SoftBank Mobile as a device to greet and welcome shoppers in stores, Pepper can recognize certain human facial expressions and react accordingly. For example, if the robot senses someone is sad, it can play that person's favorite song to try to cheer them up, or play interactive games through a display tablet mounted on its chest. The Japanese research institute AIST makes Paro, an interactive robot that looks like a seal. Used in elder care settings, it has been shown to improve people's moods.

As robots like Pepper and Paro become increasingly sophisticated and interactive, however, there is a significant risk that they will do more psychological harm than good, according to Scheutz. "People might interpret the robot's gestures or eye gaze in a particular way and form expectations based on that, and the robot will inevitably disappoint those expectations," he says. "People will be tricked into believing that the robot has capabilities and something inside it that it doesn't."

Even for people who don't suffer from any mental condition, there are risks involved in using robots for companionship, Scheutz says. "If you take people who are just lonely, and they project something into this robot that's not there, it's going to be very frustrating." 

Though social robots are not yet available for consumer purchase in the U.S., robotic machines like the Roomba vacuum have been around for years, so the technology is already making its way into U.S. households. Scheutz believes the earliest applications for other robots in homes are likely to be basic services like cleaning and delivery, before "companion" robots become a consumer item. "Eventually, we will notice that we have a whole bunch of robots in our homes that do different things, and maybe they'll talk to each other," he says.

What Scheutz doesn't want to see is robotics companies throwing around the term "emotional robot" when referring to any product. He thinks the confusion it creates could eventually lead to real psychological harm.

"It's absurdly bogus," he says. "There's nothing in a robot that has anything to do with emotions." 

Published on: Feb 4, 2016