In 1993 AT&T ran a widely circulated mass-media campaign of TV and print advertising (and eventually even the Web's first banner ad) built around a series of commercials called the "You Will" ads. The ads were intended to portray the distant future; the very, very distant future. They turned out to be one of the most accurate depictions of the future ever made in mass media, rivaling even the most outlandish Hollywood sci-fi productions.

In fact, according to Glen Kaiser, who ran the ad campaign for AT&T, "David Fincher, whose Hollywood directorial debut Alien 3 had recently earned an Oscar nomination for visual effects, was picked to direct the commercials."

As backdrop video showed a car with GPS, touch screens, tablet computing, electronic medical records, e-books, web conferencing, and on-demand video, actor Tom Selleck's deeply masculine voice-over would ask, "Have you ever traveled the country...without a map? You will. And the one who will bring it to you? AT&T." The ads were frighteningly prescient. Watch them today and it seems that AT&T had the crystal ball into the future that the rest of us could only wish for.

But here's the irony. Although AT&T predicted the technology of the future with uncanny accuracy, it wasn't the company that brought a single one of those technologies to you.

AT&T was very good at predicting technology, most of which it had built in its Bell Labs, but it was terrible at predicting the future, and especially at timing it and capitalizing on it. How is that possible? Shouldn't being able to predict the future of technology be synonymous with predicting the future?

No, because of the one thing that is consistently nearly impossible to predict: the way behavior will change along with it.

I'm not picking on AT&T for any reason other than to show how magnificently accurate its depiction of the technological future was. If anything, AT&T illustrated how easy it is to project technology's evolution. But it focused on technology rather than behavior. You could argue that with the eventual spin-off of Bell Labs in 1995, AT&T lost that innovation edge as well.

Or you could pin the blame on the state of the technology and say that it just wasn't ready yet. But if AT&T or any other company had tried to push those same "You Will" technologies out into the market at the time, they would have met the same fate as Apple's Newton PDA, arguably the first tablet-based handheld computer, which, coincidentally, was also introduced in 1993!

You could also blame affordability. But that's not entirely true either. A Motorola StarTAC, a "dumb" phone, cost $1,000 in 1996, roughly what an iPhone X costs today. And once you add monthly usage costs and apps, most of us spend much more on a smartphone today, even after adjusting for inflation.
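If you like back-of-the-envelope math, here is a rough sketch of that comparison. The inflation factor, monthly spend, and ownership period below are illustrative assumptions, not figures from AT&T, Motorola, or Apple.

```python
# Rough cost comparison; every figure below is an illustrative assumption.
STARTAC_1996_PRICE = 1_000            # approximate 1996 sticker price
INFLATION_FACTOR_1996_TO_2018 = 1.6   # approximate cumulative US CPI change
IPHONE_X_PRICE = 999                  # approximate launch price
ASSUMED_MONTHLY_SPEND = 70            # assumed service + apps per month
MONTHS_OF_OWNERSHIP = 24              # assumed two-year ownership

startac_in_2018_dollars = STARTAC_1996_PRICE * INFLATION_FACTOR_1996_TO_2018
iphone_two_year_cost = IPHONE_X_PRICE + ASSUMED_MONTHLY_SPEND * MONTHS_OF_OWNERSHIP

print(f"StarTAC purchase price in 2018 dollars: ~${startac_in_2018_dollars:,.0f}")
print(f"iPhone X plus two years of service:     ~${iphone_two_year_cost:,.0f}")
```

Under those assumptions the smartphone's two-year cost comes out well above the inflation-adjusted StarTAC, which is the point: price alone didn't decide the timing.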

Sure, Moore's Law had some catching up to do, but just as important is the fact that it took another two decades for behavior to start supporting the widespread use of these technologies and for companies such as Intel, Cisco, Apple, Garmin, and Amazon to capitalize on them.

The problem is that while technology is visible, tangible, mathematical, and predictable, behavior is influenced by far too many hidden and invisible variables. You can project technological progress because the variables are known. In fact, the last fifty years of technology really have not been all that hard to predict because they've followed the fairly reliable trajectory of Moore's Law, which has proven to be an amazingly accurate predictor of the power, storage capacities, and costs of computing over any period of time.
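That predictability is easy to see in a simple model. The sketch below assumes transistor counts double roughly every two years from the Intel 4004's roughly 2,300 transistors in 1971; the baseline and doubling period are assumptions chosen for illustration, not anyone's roadmap.

```python
# A minimal sketch of why Moore's Law makes technology easy to project:
# if transistor counts double roughly every two years, the curve is just
# an exponential with known parameters. Figures are illustrative assumptions.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300      # Intel 4004, roughly
DOUBLING_PERIOD_YEARS = 2         # assumed doubling period

def projected_transistors(year: int) -> float:
    """Project transistor count for a given year under a simple doubling model."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1993, 2018):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```

The projections land in the right rough ballpark for both 1993 and 2018, which is exactly why technology itself has been the easy part to forecast.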

But what if, as some scientists claim, Moore's Law suddenly reached its physical limits and we could no longer pack more nano-scale transistors onto a silicon wafer, or more ones and zeros onto a flash drive? Would we, as a civilization, start to reach the limits of innovation and our own ability to solve the increasingly complex societal, economic, and ecological problems that face us?

No. Whatever the case may be with the rate of technology's advance, the next quantum leap in how we use computers to help us navigate today's challenges, and those we will face in the future, will be in their ability to use AI to understand behavior in ways that have never been possible, even as intelligent machines simultaneously begin to exhibit behavior of their own.

That's nothing less than an innovation revolution. Up until very recently, we simply have not been able to capture enough data to understand behavior, and even where we have, our technology hasn't been adequate to make sense of it. AI is changing that by giving us the power to make sense of unprecedented amounts of data: 44 zettabytes (a zettabyte is a one followed by 21 zeros) a year by 2025. At that rate of growth, we run out of storage space, even using every atom in the solar system, sometime in the next 100 years!
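If you want to sanity-check that claim, here's a rough sketch. The doubling period, the one-byte-per-atom storage density, and the atom count for the solar system are all assumptions made here for illustration; how quickly we "run out of atoms" depends almost entirely on the growth rate you assume.

```python
# Back-of-the-envelope: when would cumulative data outgrow the solar system's atoms?
# Every figure below is an assumption for illustration.
ANNUAL_BYTES_2025 = 44e21        # assumed data created per year around 2025
DOUBLING_PERIOD_YEARS = 2        # assumed doubling rate of annual data creation
ATOMS_IN_SOLAR_SYSTEM = 1.2e57   # rough astronomical estimate (mostly the Sun)
BYTES_PER_ATOM = 1               # assumed storage density

cumulative_bytes = 0.0
annual_rate = ANNUAL_BYTES_2025
year = 2025
while cumulative_bytes / BYTES_PER_ATOM < ATOMS_IN_SOLAR_SYSTEM:
    cumulative_bytes += annual_rate
    annual_rate *= 2 ** (1 / DOUBLING_PERIOD_YEARS)
    year += 1

print(f"Under these assumptions, data needs more atoms than the solar system has around {year}.")
```

With a two-year doubling the crossover lands a couple of centuries out; assume annual doubling instead and it lands roughly a century away, close to the column's figure. Either way, the arithmetic makes the scale of the data problem hard to miss.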

To call this a game changer for innovation is a severe understatement. The innovations of the future will be about understanding and predicting the incredibly complex behaviors of the natural and man-made systems that make up the world. The better we understand the patterns that influence the behaviors of people, devices, machines, and systems, the more likely we are to be able to predict how those behaviors will interact, evolve, and manifest themselves in the innovations of the future.

I know, it's a very different way to look at innovation, and it may not be the way you want to think of it; after all, focusing on technology innovation is so much easier. But if you intend to thrive and succeed over the next two decades, well, how can I say this... You Will.

Published on: May 24, 2018