A couple of weeks ago, I pointed out that the “Internet of Things” was a disaster waiting to happen. That viewpoint was echoed recently by Zeynep Tufekci of the University of North Carolina in a New York Times op-ed called “Why ‘Smart’ Objects May Be a Dumb Idea.”

While her article points out that car companies (among others) are “over their head” when they design and implement complex software, Tufekci ends on a note of hope: “We can make programs more reliable and databases more secure.”

This is a dangerous idea because it’s not really true. Many software problems are inherent in how the software was designed and originally implemented. Attempting to make such software more reliable and secure is like putting an iron door on a straw house.

Such limitations become even more severe when programs must maintain backward compatibility with earlier versions of the software. When that’s the case, it’s not possible to start afresh, and the resulting software inherits flaws that might otherwise have been avoided.

Microsoft Windows is a case in point. Despite multiple releases, the Windows design assumes that programs can alter both other programs and the operating system. This is a fundamental architectural flaw that guarantees a lack of stability and security.

That's not to say Microsoft hasn't tried to make Windows more stable and secure. However, making it truly so would probably mean starting from scratch, removing functions many users find useful, and exerting Apple-like control over the applications that run on it.

Even then, stability and security problems are inevitable because, as software becomes more complex, it becomes increasingly unpredictable, even if well designed from the start.

Theoretically, software is deterministic and predictable. Every action of every program happens step-by-step, so that every effect has a corresponding cause.

In practice, however, software becomes less deterministic as it becomes more complex. When things go wrong inside complex systems, it’s sometimes unclear, even to the developers who built them, exactly what has happened.
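To see how quickly predictability evaporates, consider a minimal sketch in Python (my own illustration, not drawn from any particular product): two or more threads share a counter without any coordination, each one reading the value, pausing, and writing it back. Updates silently overwrite one another, and the final total changes from one run to the next, even though every individual line of code does exactly what it says.

    import threading
    import time

    counter = 0  # shared state, deliberately left unprotected by a lock

    def unsafe_increment(iterations):
        """Read the shared counter, yield to the scheduler, then write it back."""
        global counter
        for _ in range(iterations):
            current = counter      # read the shared value
            time.sleep(0)          # let another thread run, as real workloads often do
            counter = current + 1  # write back, possibly clobbering someone else's update

    threads = [threading.Thread(target=unsafe_increment, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Four threads times 1,000 increments "should" give 4,000,
    # but the printed total varies from run to run.
    print("expected 4000, got", counter)

If a dozen lines of code can behave differently every time they run, imagine the same effect multiplied across millions of lines and thousands of interacting machines.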

Eventually, complex software reaches the point where any attempt to eliminate bugs or patch security holes ends up creating additional bugs and security holes. Software in this state cannot be “fixed”; it can only be endured or worked around.

This limitation of software becomes acute when multiple systems interact with each other to create ever larger and more complex systems, like the Internet. Because there’s no way to anticipate all possible conditions, unexpected behavior is inevitable.

This is why nobody should be surprised when stock-trading programs suddenly “crash,” creating unexpected drops in stock prices. While theories abound, nobody really knows exactly what happened in those cases. The overall system is too complex to be well understood.

Such complexity will inevitably exist in the “Internet of Things,” especially since many of those computerized items will run software written by third- or fourth-rate programmers, just like the jury-rigged, fragile software in today’s automobiles.

That’s why I cringe when people talk about self-driving cars being safer than human-driven cars. That might end up being true on average, but when the system (i.e., all the cars operating together) crashes, as it eventually must, the carnage will be spectacular.

This is not to say that we shouldn’t continue to develop new software, new electronics, and new technologies. However, the moment we start believing that software is more stable and secure than the human beings who design it, we’re setting ourselves up for disappointment and disaster.

Published on: Aug 19, 2015