Machines, and the computers that run them, began replacing human labor decades or even centuries ago. Factories have been transformed, travel has been reimagined, and even our homes are remade by technology every few years. But behind all of these machines is software; software designed by humans.

Or is it? Will Iverson, the chief technology officer at Dev9, a continuous delivery software firm based in the Seattle area, says that software is increasingly being used to design other software, in many cases replacing the role of humans. In other words, software is beginning to write itself.

What does that look like? "We leverage large numbers of servers," says Iverson. "Our small teams use these servers like software robots that run huge suites of automated tests that check for security issues, roll out new software in stages all over the world, and perform tasks with far more precision than humans can."

How sci-fi is Dev9? I sat down with Iverson to discuss what the future of software development looks like and to find out exactly how long it will be before we are all slaves to a robot empire.

What will software programming look like in five years?

Iverson: There will still be people working as developers, though they may be wearing virtual reality headsets instead of staring at computer monitors.

The apps they design will be running on different kinds of devices, and more and more of the processing work will be done on servers. You may have a computer as powerful as your current phone in your glasses or on your wrist, but that device will be leveraging 10 or 100 times the server power, which will make it seem a lot smarter and more powerful.

This power is also what makes automation critical--the thousands of servers that devices are relying on will require immensely intelligent software to constantly manage and update them. Humans simply can't do that at scale.

Can everything about software development be automated, or will a personal touch always be important?

Software development has flirted with things like visual design tools and rapid application development tools for a long time. Even now, I'd say the bulk of software development boils down to an engineer sitting in front of a computer, translating ideas and processes into plain text files. As long as computers need to have things explained to them, you'll need some kind of software engineer.

In artificial intelligence circles, they talk about "weak A.I." and "strong A.I." Weak A.I. is a task-specific A.I. For example, modern elevators have software that learns where people tend to hang out at different times of day. Or you can teach an email server what junk email looks like.
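The junk-email example above is the classic illustration of task-specific "weak A.I.": the system learns from labeled examples rather than being explicitly programmed with rules. As a rough sketch of the idea (a toy word-counting classifier, not how any real email server actually works), it might look like this:

```python
from collections import Counter

def train(messages):
    """Count word frequencies per label from (label, text) training pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for label, text in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a message by whether its words appear more often in spam or ham."""
    spam_votes = ham_votes = 0
    for word in text.lower().split():
        if counts["spam"][word] > counts["ham"][word]:
            spam_votes += 1
        elif counts["ham"][word] > counts["spam"][word]:
            ham_votes += 1
    return "spam" if spam_votes > ham_votes else "ham"

# Toy training data: the "teaching" step Iverson describes.
examples = [
    ("spam", "win a free prize now"),
    ("spam", "free money click now"),
    ("ham", "meeting notes attached for review"),
    ("ham", "lunch at noon for the team"),
]
model = train(examples)
print(classify(model, "claim your free prize"))
print(classify(model, "notes from the team meeting"))
```

The system has no idea what email "is"; it is narrowly competent at one task, which is exactly what distinguishes weak A.I. from the strong A.I. discussed next.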

A strong A.I. is basically an A.I. that is as smart and capable as an adult human being. Right now, the strategy seems to be to combine weak A.I. systems in pieces until they start to look and feel like strong A.I. Apple's Siri, Google's Now, and Microsoft's Cortana are all following this strategy.

Do programmers need to be worried about being replaced in the near future?

Eventually, yes. But by that point, society will be well practiced at handling that kind of change--the millions of paid drivers replaced by self-driving cars will have long since forced our political and economic systems to figure out how to deal with these transitions.

We have joked around the office that software development will be one of the last professions left.

Is there going to be an equally large industry for programming robots?

There already is--it's just a matter of definition. If a server is an information robot, it's already there. If you mean a physical robot, that's a big industry today. Throw in the internet of things and you have a huge, huge industry just getting started.

How long until HAL 9000 becomes real?

A lot of folks would say that by 2045, given Moore's Law and exponential growth, you'll have strong A.I. deployed widely enough to count as a social singularity. My guess is that you will have conversations in English with a computer, like in 2001: A Space Odyssey, quite a bit sooner than that.

Incidentally, in the sequel, 2010, it's revealed that HAL 9000 went crazy because the human software engineers gave it conflicting orders. Maybe the software engineers of 2050 will be philosophers, not technicians.