Before his death, physicist Stephen Hawking saw things few others could: the true nature of time, what's going on inside black holes, and much more. Could he also have seen the future of artificial intelligence?

Whether smart machines will be the saviors of humanity or its destruction is a hot topic among super smart technologists, from Elon Musk (not an optimist) to Bill Gates and Mark Zuckerberg (cautiously optimistic). Hawking, with his incredible intellectual gifts, had lots to say on the topic. But unlike some of these others, he focused as much on politics as on programming.

Whether our tech-saturated future turns into a utopia or dystopia depends on how we treat each other, Hawking warned right before he died.

An age of plenty or an age of poverty?

While Hawking did voice concerns about some sort of real-world Robopocalypse, where our smart machines turn on us like an abused puppy grown into a vicious dog (Musk's nightmare scenario), his main worry about the future of AI seemed to be how to handle incredible abundance, a future where humans are pretty much superfluous. If robots make everything we need, what will we do all day, and how will most of us make money?

That, Hawking insisted in a recent Reddit AMA (Ask Me Anything), was the really worrisome question:

"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution."

In short, what will eventually stand in the way of a life of ease and self-actualization for the majority of the human race won't be technology, it will be politics and psychology. And while Hawking seemed boldly optimistic about our ability to build incredible things, he was far less sanguine about our ability to share the spoils of that innovation.

"So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality," he continued in the AMA.

This one online discussion was far from the only time he raised concerns. AI, he predicted a few years back at the opening of a new research center at Cambridge, may allow us to "finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization." But, he went on, "alongside the benefits, AI will also bring dangers, like... new ways for the few to oppress the many."

A warning for the rest of us

What's perhaps most important about these warnings is that we all need to consider them. While experts' calls for a "kill switch" for robot intelligence are fascinating and no doubt deeply engaging to the tech-savvy, the average Joe or Jane on the street isn't going to play a part in actually designing and deploying that or any other tech fix for the dangers of AI.

But we all vote (or at least we all should), and we all take part in the public discussion of how to share wealth and support the vulnerable, or not. Every day we make decisions about who is in "our tribe" and worthy of our help. While you will most likely never know much about the nitty-gritty technical details of building AI, you should, as a citizen, probably think about how we share the spoils of these brilliant innovations. Taking a few minutes to consider Hawking's warnings is one great way to honor his legacy.