Superhuman artificial intelligence is coming, says Sam Harris, a Stanford graduate with a PhD in neuroscience from UCLA and the author of five New York Times bestsellers. "It's very difficult to see how they won't destroy us or inspire us to destroy ourselves," he says.
How should we be preparing? Sam says our current emotional response--that it's cool--is woefully lacking.
"If you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem, OK? That response should worry you," he says. These comments are drawn from his TED Talk on artificial intelligence. "We seem unable to marshal an appropriate emotional response to the dangers that lie ahead."
The inevitability of superhuman artificial intelligence (A.I.)
If we're not interrupted by world wars, collisions with asteroids, or some unpreventable disaster, Sam believes it's a given we'll create superhuman artificial intelligence. His logical progression goes like this:
1. We like smart things.
2. Smarter things are helpful; they make our lives easier. (Until they don't.)
3. Thus, as long as we have the capacity to make stuff smarter, we will.
At some point, our software intelligences will be smart enough to design still-smarter software, and then--game over. The drive to create ever-smarter digital intelligence becomes self-sustaining, and human brains become biological backwash.
Superhuman artificial intelligence in the near future
In projecting the logical future of superhuman A.I., Sam points out there are two major paths.
1. Separate evolution. He likens this path to the ant-human relationship. You don't hate ants. You may even step over them most of the time. They can hurt you--but barely. If they get in your way--say, by moving into your house--you annihilate them. A.I. could treat us like that.
2. Co-evolution. With brain implants (neuroprosthetics) from ambitious companies like Kernel, it's possible we could plug the superhuman smarts right into our own wetware. "Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head," he points out. (Personally, this is where I think we need to head, pardon the pun.)
Artificial intelligence, you and me, and the spectrum of smart
To help us grasp just how quickly A.I. could become smarter than we are, Sam suggests imagining intelligence as a spectrum. You and I sit at one point on it; chickens sit a few steps back. How much further does the spectrum extend ahead of us? We just don't know. And we may not be the right biology to find out.
He's sure artificial intelligence, optimizing relentlessly, will explore the upper bounds of that spectrum at every possible processing speed. "We don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable," he says.
Sam's talk doesn't consider the possibility that we're already living in a type of simulation, one of the thought exercises Elon Musk entertains. (Elon, too, fears unchecked A.I. development, and co-founded OpenAI to address many of the issues Sam discusses.)
Sam doesn't go for flights of fancy that A.I.-human combinations are the next big turn in our evolution, or suggest, as Hebrew University of Jerusalem historian Yuval Noah Harari does, that we are evolving into cyborgs.
He doesn't suggest there's a way to stop superhuman artificial intelligence from happening, either. That it will is a foregone conclusion. But how it happens, how we prepare for it, and how ready we are to react are all questions he'd like us to think about more--urgently. "Stuart Russell has a nice analogy here," he shares. "He said, imagine that we received a message from an alien civilization, which read: 'People of Earth, we will arrive on your planet in 50 years. Get ready.' And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do."