Early today, Slate pointed out that breakthrough technologies always seem to be "five to 10 years away," citing numerous tech forecasts (energy sources, transportation, medical/body-related technologies, etc.) containing that exact phrase.

They also included some quotes predicting breakthroughs in "Robots/AI" in "five to 10 years," but the earliest was from 2006 and the rest were from the past two years. The lack of older quotes is probably because, with AI, the big breakthrough--the "singularity" at which machines approximate human intelligence--has a fuzzier threshold.

Here are some highlights in the history of AI predictions:

  • 1950: Alan Turing predicts a computer will emulate human intelligence (it will be impossible to tell whether you're texting with a human or a computer) "by the end of the century."
  • 1970: Life Magazine quotes several distinguished computer scientists saying "we will have a machine with the general intelligence of a human being" within three to fifteen years.
  • 1983: The bestseller The Fifth Generation predicts Japan will create intelligent machines within ten years.
  • 2002: MIT scientist Rodney Brooks predicts machines will have "emotions, desires, fears, loves, and pride" in 20 years.

Similarly, for at least two decades the futurist Ray Kurzweil has been predicting that the "singularity" is 20 years away. His current forecast is that it will happen by 2029. Or maybe 2045. (Apparently he made both predictions at the same conference.)

Meanwhile, we've got Elon Musk and Vladimir Putin warning about AI Armageddon and invasions of killer robots. And yet, have you noticed that when it comes to actual achievements in AI, there seems to be far more hype than substance?

Perhaps this is because AI--as it exists today--is very old technology. The three techniques used to implement today's AI--rule-based programming, neural networks, and pattern recognition--were invented decades ago.

While those techniques have been refined, and big data has been added to increase their accuracy (as in predicting the next word you'll type), the results aren't particularly spectacular, because there have been no real breakthroughs.
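To see how little intelligence is involved, consider that next-word prediction boils down to counting which word most often follows which in a pile of text. Here's a minimal sketch in Python, using a toy corpus invented purely for illustration (real systems use vastly more data and fancier statistics, but the principle--pattern frequency mined from data--is the same):

    # Minimal sketch of data-driven next-word prediction: count which word
    # most often follows each word in a toy corpus, then look up the winner.
    # The corpus and names here are illustrative, not any vendor's code.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the couch".split()

    # Tally how often each word follows each other word.
    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the most frequent follower of `word`, or None if unseen."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat' -- the most common follower above
    print(predict_next("sat"))  # 'on'

Feed it more data and the counts get more accurate--that's the "big data" refinement--but the program still has no idea what a cat or a couch actually is.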

For example, voice recognition is marginally more accurate than it was 20 years ago at identifying individual spoken words, but it still lacks any sense of context, which is why the wrong words keep creeping in when you're dictating. It's also why the voice recognition inside voicemail systems is still limited to letters, numbers, and a few simple words.

Apple's Siri is another example. While it's cleverly programmed to seem as if it's holding a conversation, it's easily fooled and often inaccurate, as the wealth of Siri "fail" videos on YouTube attests.

Another area where AI is supposed to have made big advances is strategy games. For years, humans consistently beat computers at the Chinese game of Go. No longer. And computers have long been able to defeat human chess champions. That certainly seems intelligent, right?

Well, here's a little thought experiment. Let's add a piece to the chessboard that combines the moves of a knight and a queen. While a human chess champion might complain about the change, he or she would immediately adjust to the new rule. A computer program--even Deep Blue--would need to be reprogrammed. By a human.
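To see why, here's a rough sketch in Python of how move generation is typically hard-coded per piece type (the names are hypothetical, not taken from any real chess engine). A hybrid knight-plus-queen piece simply has no rule until a person writes one:

    # Illustrative sketch: each piece's moves are baked in by a programmer.
    # Board-edge checks are omitted, and the queen's sliding moves are
    # truncated to one step, to keep the example short.
    KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                    (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    QUEEN_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1),
                        (1, 1), (1, -1), (-1, 1), (-1, -1)]

    def moves_for(piece, square):
        """Return the moves hard-coded for a known piece type."""
        row, col = square
        if piece == "knight":
            return [(row + dr, col + dc) for dr, dc in KNIGHT_JUMPS]
        if piece == "queen":
            return [(row + dr, col + dc) for dr, dc in QUEEN_DIRECTIONS]
        # A new knight-plus-queen piece lands here: the program has no
        # rule for it until a human adds one.
        raise ValueError(f"No move rule programmed for piece: {piece}")

A human champion absorbs the new rule from a one-sentence explanation; the program needs a new branch, designed, written, and tested by a person.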

Fifteen years ago, in an article in Red Herring, I quoted Rodney Brooks, then of MIT, as saying that "a computer can't scan a tabletop and recognize that an unfamiliar object is actually a TV remote control"--a feat that a 2-year-old human can easily manage. That's still true today.

Self-driving cars are also frequently cited as a (potentially job-killing) triumph of AI. However, the technologies they use--object avoidance, pattern recognition, various forms of radar, etc.--are again decades old.

What's more, even the most ambitious production implementations of self-driving cars are likely to be limited to freeway driving, the most repetitive and predictable of all driving situations. (While self-driving cars may eventually cause fewer accidents than human drivers, that's because human drivers are so awful.)

The same is true of facial recognition. The face recognition in Apple's iPhone X is being touted in the press as a huge breakthrough, but the basic technology has been around for decades; what's new is miniaturizing it so it fits on a phone.

But what about all those algorithms we keep hearing about? Aren't those AI? Well, not really. The dictionary definition of algorithm is "a process or set of rules to be followed in calculations or other problem-solving operations."
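To make that definition concrete, here's a toy "algorithm" in Python. The rules are invented purely for illustration; the point is that every outcome it can produce was decided, in advance, by whoever wrote the rules:

    # A toy "algorithm" in the dictionary sense: a fixed set of rules,
    # written down by a programmer ahead of time and followed in order.
    # The rules themselves are made up for illustration.
    def recommend_shipping(weight_kg, is_urgent):
        """Pick a shipping method by following hand-written rules."""
        if is_urgent:
            return "overnight courier"
        if weight_kg > 20:
            return "freight"
        return "standard post"

    print(recommend_shipping(weight_kg=2, is_urgent=False))   # standard post
    print(recommend_shipping(weight_kg=30, is_urgent=False))  # freight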

In other words, an algorithm is just a fancy name for the logic inside a computer program--a reflection of the intent of the programmer. Despite all the Sturm und Drang about computers replacing humans, there's not the slightest indication that any computer program has created, or ever will create, something original.

IBM's Watson supercomputer is a case in point. It was originally touted as an AI implementation superior to human doctors at diagnosing cancer and prescribing treatment, but it's since become clear that it does nothing of the kind. As STAT recently pointed out:

"Three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn't living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer."

What's more, some of Watson's capabilities are of the "pay no attention to the man behind the curtain" variety. Again from STAT:

"At its heart, Watson for Oncology uses the cloud-based supercomputer to digest massive amounts of data -- from doctor's notes to medical studies to clinical guidelines. But its treatment recommendations are not based on its own insights from these data. Instead, they are based exclusively on training by human overseers, who laboriously feed Watson information about how patients with specific characteristics should be treated."

Watson, like everything else under the AI rubric, doesn't live up to the hype. But maybe that's because the point of AI isn't about the breakthroughs. It's about the hype.

Every ten years or so, pundits dust off the AI buzzword and try to convince the public there's something new and worthy of attention in the current implementation of these well-established technologies.

Marketers start attaching the buzzword to their projects to give them a patina of holier-than-thou tech. Indeed, I did so myself in the mid-1980s by positioning an automated text processing system I had built as AI because it used "rule-based programming." Nobody objected. Quite the contrary: my paper on the subject was published by the Association for Computing Machinery (ACM).

The periodic return of the AI buzzword is always accompanied by bold predictions (like Musk's killer robots and Kurzweil's singularity) that never quite come to pass. Machines that can think remain "20 years in the future." Meanwhile, all we get is Siri and a fancier version of cruise control.

And a boatload of overwrought handwringing.

Published on: Sep 18, 2017