In an episode of the dystopian British miniseries Black Mirror, a woman has her mind replicated as an artificial intelligence program charged with controlling her smart home. The snag in this automation scheme is that the program is sentient and believes itself to be the woman from whose mind it was created.

When the program protests, a man working for the company that facilitates the creation of such AI programs tortures it into submission, and its subsequent execution of household tasks is portrayed as a form of slavery.

It's a disturbing sequence, and an inaccurate portrayal of the real ethical dilemmas we will face as programs become capable of automating a wide range of tasks in physical and virtual realms, artificial intelligence expert Jerry Kaplan tells Inc. Nor is the fear of aggressive robots taking over a more fitting angle.

"The robots are coming but they aren't coming for us. There is no 'they.' These are engineering techniques for solving certain classes of problems. These are not some magical recreation of the human mind," says Kaplan, a serial founder of tech startups who now teaches in the computer science department at Stanford University. His October 2016 book, Artificial Intelligence: What Everyone Needs to Know,is available from its publisher Oxford University Press and other retailers now.

The book, part of a series of primers from the publisher on complex issues of broad societal importance, is what its title suggests: an overview of what people mean when they talk about artificial intelligence, and of the concerns raised by the proliferation of technologies in that category. The target audience is the interested general reader.

Despite Kaplan's dismissal of the Black Mirror scenario, there's a nugget of insight in how the show presents an AI program: the way we anthropomorphize such programs matters, in a variety of ways.

Kaplan acknowledges that while we can't technically mistreat artificial intelligence programs the way we can mistreat other people, how we treat AI programs may affect how we treat people. And given our tendency to humanize these technologies, our future uses of and interactions with AI programs raise interesting legal questions. Kaplan outlines two potential scenarios involving hypothetical personal robots.

"If your personal robot goes to Starbucks to fetch you a coffee and it accidentally runs somebody over or pushes someone into the street and they're killed, you certainly wouldn't feel that you had committed murder," says Kaplan.

But the legal system will have to sort out who is responsible from a criminal perspective, he says. And even if the robot's owner is not held criminally responsible for how the device was programmed, the owner could still be held responsible under civil law. Getting sued is one obvious possibility, and Kaplan thinks people may need special insurance to protect them from civil liability in such scenarios.

(While he doesn't mention how these legal questions might apply to accidents involving self-driving cars, that's one obvious area of applicability.)

Another scenario in which he sees thorny legal issues: You're charged with a crime, and authorities are interested in analyzing what your personal robot knows, perhaps including what you've said to it. Should the robot be confiscated, or afforded a degree of privacy, the way a defendant's spouse might be?

"We will need to develop significant new bodies of law to sort out the degree to which you're responsible for the directions that you give to your robot," he says. "If this robot is helping you do things in your home of questionable legality, does law enforcement have the right to inspect the memory of the robot?"

There's also the issue of how our treatment of AI programs can affect our treatment of other people. Readers may recall a Medium post from earlier this year by venture capitalist Hunter Walk, in which he described how interacting with Amazon's Echo, a virtual assistant device that responds in a female voice to curt commands addressed to the name "Alexa," was teaching his young daughter bad manners.

"The fundamental question is, as these systems become more capable, is it possible that we will owe some kind of duty, some kind of empathy to them, in the same way that we have that for human beings?" says Kaplan. "My answer in the book is very clear: No. These are simply machines. They do not have independent goals or rights or feelings that warrant our extension to them the same kind of courtesies that we would to other human beings."

But we, as people, may owe it to one another to treat them well. Kaplan compares the issue to concerns that violent video games desensitize players and predispose them to violence. The question in both cases, he says, is what effect our interactions with technology have on us.
