Yeah, yeah, I know. Self-driving cars are just around the corner. Any day now. They're being tested everywhere. They're going to revolutionize transportation. Put thousands of Uber drivers and teamsters out of work. Don't hold your breath.

Despite conventional wisdom, AI programmers haven't been able to solve basic problems, like identifying pedestrians, or differentiating between dogs and children. AI programmers have totally failed to implement programs that exhibit anything resembling common sense, which is exactly what's needed to drive in a world full of humans.

According to a recent article on NPR, in California (the only state that requires the reporting of automobile deaths from autonomous vehicles) there have been three deaths in somewhere between 10 and 15 million miles of autonomous driving. That compares VERY unfavorably to conventional driving, where it would typically take 260 million miles to result in three deaths.

According to the Guardian, a whistleblower at Uber recently revealed that Uber's self-driving program results in an accident every 15,000 miles. By comparison, the average human gets in 3 to 4 accidents over 65 years while driving an average of 13,474 miles a year, for roughly one accident every 250,000 miles. That's a pretty big delta for a technology that's supposedly right around the corner.
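If you want to check that delta yourself, here's a quick back-of-the-envelope calculation using only the NPR and Guardian figures quoted above. (Using the midpoints of the quoted ranges is my own assumption.)

```python
# Back-of-the-envelope check of the figures quoted in this column.
# All inputs come from the NPR and Guardian numbers cited above;
# midpoints of the quoted ranges are an assumption.

# Fatalities: 3 deaths in roughly 10-15 million autonomous miles (midpoint)
av_miles_per_death = 12_500_000 / 3
human_miles_per_death = 260_000_000 / 3

# Accidents: one Uber self-driving accident per 15,000 miles, versus
# a human averaging 3-4 accidents over 65 years at 13,474 miles/year
human_lifetime_miles = 65 * 13_474                     # ~875,810 miles
human_miles_per_accident = human_lifetime_miles / 3.5  # midpoint of "3 to 4"
uber_miles_per_accident = 15_000

print(f"Deaths:    humans go ~{human_miles_per_death / av_miles_per_death:.0f}x "
      "farther per fatality")
print(f"Accidents: humans go ~{human_miles_per_accident / uber_miles_per_accident:.0f}x "
      "farther per accident")
```

By this rough math, humans travel roughly 17 times farther between accidents, and about 21 times farther per fatality, than the autonomous systems described above.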

Self-driving cars are particularly hazardous to pedestrians, according to NPR, because they can recognize pedestrians only somewhat more than 90 percent of the time. Humans, by contrast, are incredibly good at spotting other humans, with a success rate probably around 99.99 percent. Even AI proponents at Carnegie Mellon admit that a five-year-old child can outperform AI when it comes to common sense decisions. As NPR explains:

"[autonomous vehicles] can't figure out what a pedestrian is or [what] a pedestrian is going to do. They can't separate a child from a dog. Sometimes a tree branch overhanging the road will be taken as something in the way."

Such limitations have huge consequences, as when an autonomous vehicle killed a pedestrian because it couldn't perceive that she was walking a bicycle. Similarly, simply slapping some stickers on a stop sign--an action that wouldn't fool a toddler--can confuse a self-driving car.

The limitation isn't just identification; it's also the ability to understand and predict human behavior. According to recent research from the University of Colorado and the University of Technology Sydney, to be safely operated around humans, autonomous vehicles would need to make "instantaneous, unconscious judgments about the likely actions of people."

And that's far, far beyond the capability of any AI program, because it literally requires human intelligence.

Thus, according to the Guardian, so-called "self-driving" cars will always need a human being present to "take the wheel" when the AI program fails. It should seem obvious, though, that any automobile that requires a human "minder" isn't really self-driving; it's just doing cruise control on steroids.

So, while cars will be able to parallel park on their own, and function reasonably well in environments, like freeways, where human behavior is well-delineated, it seems highly unlikely, despite all the rosy hype, that fully autonomous cars are in our near future. Barring the emergence of the "singularity" (which seems unlikely), self-driving cars will remain an oxymoron.

But, but... what about all the breakthroughs we've been seeing in AI?

Not ready for prime time, I'm afraid. While AI programmers have successfully improved their programs' ability to play games with bounded, well-defined rules, they've been stumped when it comes to operating inside environments (like businesses) where the rules are flexible and unbounded.

Whenever you dig past the hype on AI, you find more hype than actual achievement. IBM's Watson, for example, was widely touted as a more accurate way to diagnose cancer. In reality, it's been a dismal failure. Similarly, social media companies promised that self-learning AI would filter out illegal content; in fact, companies like Facebook rely upon a small army of humans to perform this function. The simple truth: AI failures abound.

This is not to say that AI--as currently implemented--can't be useful. Facial recognition, for example, is good enough to be useful to law enforcement. AI programs can play games (which have bounded rules) much better than humans. AI is excellent at looking for patterns in huge data sets. But none of those functions require common sense, which is required for a fully autonomous vehicle.

I fully expect to get plenty of pushback on this column, because I've been making similar observations about AI literally for decades and I always get the exact same pushback. Every freakin' time. I've come to the conclusion that arguing with AI true believers is like arguing with fundamentalists about the end of the world, which (like the long-awaited "singularity") never seems to actually arrive.

Published on: Dec 14, 2018