I watched the movie Black Panther this past week, and it was the first time a movie left me struggling to decipher what was real and what was not.

The ships looked ultra-realistic, the costumes so convincing and utterly detailed that you'd almost think you could buy one on sale at Nordstrom this weekend. (In truth, some of them were 100% real, but some were augmented digitally.) It's one of the most digitally detailed and colorful movies ever made, and it's also a warning about the future.

What should we fear? Not knowing when something is actually just a robot.

This was not the case just a year or two ago. Artificial intelligence, machine learning, and digital technology are advancing quickly. Take movies as an example: the digital effects in something like the first Iron Man movie didn't quite look convincing. Your brain kept telling you the images were not real, a reaction experts call the uncanny valley.

And then there's a bot like Amazon Alexa.

Her cadence when she talks, her ability to answer questions, and the fact that the assistant is just a voice in a room are all amazing innovations, but listen closely the next time you ask Alexa a question. She sounds human. And you might not even care whether she actually is. You just want the weather report.

That's the real problem. Not caring.

I remember talking to a military expert quite a few years ago. He argued that robots would never serve in the military because it would be too easy to deploy them into a battleground without knowing how they'd respond. Some might make machine-learning decisions that lead to genocide. Some might choose self-protection.

What that expert didn't realize at the time is that empathy is something we can program into bots. For example, if you tell Alexa you are depressed, the bot will offer advice or even suggest you call a suicide prevention center. Amazon execs say they have programmed Alexa to listen for trigger words like "depressed" or "suicidal."
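
To give a rough sense of how that kind of trigger-word programming can work, here's a minimal sketch in Python. This is a hypothetical illustration, not Amazon's actual Alexa code; the trigger list, responses, and function name are all invented. The bot simply scans a transcribed utterance for keywords and maps each one to a canned empathetic reply:

    # Minimal sketch of keyword-triggered "empathy" -- hypothetical,
    # not Alexa's actual implementation.
    from typing import Optional

    TRIGGER_RESPONSES = {
        "depressed": "I'm sorry you're feeling that way. Talking to a friend or family member can help.",
        "suicidal": "You might consider calling a suicide prevention center to talk to someone.",
    }

    def empathetic_reply(utterance: str) -> Optional[str]:
        """Return a canned empathetic response if the utterance contains a trigger word."""
        words = utterance.lower().replace(",", " ").split()
        for trigger, response in TRIGGER_RESPONSES.items():
            if trigger in words:
                return response
        return None  # no trigger word found; fall through to normal handling

    print(empathetic_reply("Alexa, I feel depressed"))

Notice what "empathy" amounts to here: a lookup table. The bot doesn't feel anything; it matches words.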

It follows that bots can be programmed to do much more than that--to act human, to show empathy, and to convince us that they're real. While a company like Soul Machines has shown that a digital human can exist (for now, as a bank teller), it's possible that you could someday drive up to the window at McDonald's, talk to a bot that takes your order, and never realize that the "person" handling it isn't a person at all.

All of this is a wonderful, enabling tech advancement, except that there is danger.

When bots pretend to be humans, it raises questions about how we will respond. I loved the movie Black Panther, but I also came to realize that half of the images on the screen were not real. Bots can model empathy, digital scenes can look convincing, digital actors can die--that's been true for a while. But how do we change? It's the same argument you might make about the phone: it is changing what it means to be human. Right now, a human who uses a phone constantly has a sore back and bleary eyes.

In the future, by 2030 if not sooner, we might never know which service workers are bots, which online avatars are not real, or which cars zipping past are driven by humans. And that's when we might cross another uncanny valley. We might not care.

A digital representation of a human might have the same standing as a human, and we might suddenly need to start showing those bots more empathy and understanding than we do now. But should we? And, if we don't, does that mean we will suddenly become jerks who don't care about fake humans?

That's the one thing we should fear about bots. Not the military kind that might shoot us. Not the ones that will drive our cars someday. We should fear how we react to them, how we change when we don't even know whether a bot is behind the window at McDonald's.

Published on: Feb 23, 2018