In his book Humans Are Underrated (Portfolio, 2015), Geoff Colvin explores what the future of artificial intelligence means for humanity. In the following edited excerpt, Colvin discusses jobs that can never be done by computers. 

In finding our value as technology advances, looking at ourselves is much more useful than the conventional approach, which is to ask what kind of work a computer will never be able to do. While it seems like common sense that the skills computers can't acquire will be valuable, the lesson of history is that it's dangerous to claim there are any skills that computers cannot eventually acquire.

The trail of embarrassing predictions goes way back. Early researchers in computer translation of languages were highly pessimistic that the field could ever progress beyond its nearly useless state as of the mid-1960s; now Google translates written language for free, and Skype translates spoken language in real time, for free.

Hubert Dreyfus, a philosopher then at UC Berkeley, in a 1972 book called What Computers Can't Do, saw little hope that computers could make significant further progress in playing chess beyond the mediocre level then achieved; but a computer beat the world champion, Garry Kasparov, in 1997.

Economists Frank Levy and Richard J. Murnane, in an excellent 2004 book called The New Division of Labor, explain how driving a vehicle involves such a mass of sensory inputs and requires such complex split-second judgments that it would be extremely difficult for a computer ever to handle the job; yet Google introduced its autonomous car six years later.

Steven Pinker observed in 2007 that “assessing the layout of the world and guiding a body through it are staggeringly complex engineering tasks, as we see by the absence of dishwashers that can empty themselves or vacuum cleaners that can climb stairs.” Yet iRobot soon thereafter was making vacuum cleaners and floor scrubbers that find their way around the house without harming furniture, pets, or children, and was also making other robots that climb stairs; it could obviously make machines that do both if it believed demand was sufficient.

The self-emptying dishwasher is likewise just a question of when advancing technology and market demand might intersect.

The pattern is clear. Extremely smart people note the overwhelming complexity of various tasks--including some, like driving a car, that people handle almost effortlessly--and conclude that computers will find mastering them terribly tough. Yet over and over it’s just a matter of time, often less time than anyone expects.

We just can’t get our heads around the power of doubling every two years. At that rate, forty years means twenty doublings, and two to the twentieth power is roughly a million--so computing power increases by a factor of a million in forty years.

The computing visionary Bill Joy likes to point out that jet travel is faster than walking by a factor of one hundred, and it changed the world. Nothing in our experience prepares us to grasp a factor of a million. At the same time, increasingly sophisticated algorithms let computers handle complex tasks using less computing power.

So year after year, we reliably commit the same blunder of underestimating what computers will do.

A Better Strategy

We should know by now that figuring out what computers will never do is an exceedingly perilous route to determining how humans can remain valuable. We’ll venture down that road just a little way, cautiously and conservatively.

But a better strategy is to ask: What are the activities that we humans, driven by our deepest nature or by the realities of daily life, will simply insist be performed by other humans, regardless of what computers can do?

This strategy requires us to make two important assumptions. They sound a little strange, or maybe obvious, but they must be said explicitly: 

  • We assume that humans are in charge. The economy--the world--will continue to be run ultimately by and for humans. People start humming the Twilight Zone theme music if you mention this, and some may recall that a war between machines and humans is the basic conflict in the Terminator movies. And yet, in 2014, when I asked Dominic Barton, global managing director of the McKinsey consulting firm, about the effect of computers on business managers, he replied, "I think there still is a very important role obviously for leaders. We're not going to be run by machines." Obviously. Yet he felt he had to say it. We'll assume he's right.
  • We assume that a perfect mechanical imitation of a human being does not exist in our or our grandchildren's lifetimes. The indistinguishable cyborg was another theme of the Terminator movies. And really, who knows? But we're not going to worry about it. If that's a mis­take, then the issues we'll face are unimaginable now.

On that basis, what activities will we continue to insist be done by humans? A large category of them comprises roles for which we demand that a specific person or persons be accountable. A useful example is making decisions in courts of law, for which we will require human judges for quite a long time to come. It's an example in which the human-versus-computer question is not hypothetical.

In some countries, such as Israel, judges make parole decisions, and researchers there investigated how those decisions are influenced by the critical human issue of lunch.

Over the course of a day, the judges approve about 35 percent of prisoners’ applications for parole. But the approval rate declines steadily in the two hours before lunch, almost to zero just before the lunch break.

Immediately after lunch, it spikes to 65 percent and then again declines steadily. If you’re a prisoner, the number of years you spend in prison could be affected significantly by whether your parole application happens to be the last one on the judge’s stack before lunch or the first one after.

In light of the findings on predicting recidivism, it’s virtually certain that computer analysis could judge parole applications more effectively, and certainly less capriciously, than human judges do.

Yet how would you rate the chances of that job getting reassigned from judges to machines? It isn’t a matter of computer abilities; it’s a matter of the social necessity that individuals be accountable for important decisions.

Similarly, it seems a safe bet that those in other accountability roles--CEOs, generals, government leaders at every level--will remain in those roles for the same reason.

In addition, there are problems that humans, rather than computers, will have to solve for purely practical reasons. It isn’t because computers couldn’t eventually solve them.

It’s because in real life, and especially in organizational life, we keep changing our conception of what the problem is and what our goals are.

These are issues that people must work out for themselves, and, critically, they must do it in groups, partly because organizations include many constituencies that must be represented in problem solving, and partly because groups can solve problems far better than any individual can.

The evidence is clear (and we’ll see plenty of it) that the most effective groups are those whose members possess most strongly the basic, deeply human skills.

Published on: Sep 8, 2015