In an amiable spar, I was going back and forth over Instagram direct messages with Mark Metry, an artificial intelligence and VR influencer.

In his Instagram Story, he said he was grateful for "artificial intelligence."

I asked why.

"[It's the] New era of the human condition," he replied. "Where we can play, imagine and create without the mundane trivial tasks that take up people's lives."

"But if all 'jobs' are taken by AI, how will we earn a living?" I asked.

"By the time AI is fully conscious and replaces the human workload, it will come up with solutions to problems we can't even fathom."

For me, this was a bit abstract. If machines will solve problems and do most, if not all, of the work for us, what will happen to human careers?

"I guess I'm wondering what our day-to-day will be like," I said. "Will we--or many of us--just not (have to) work one day?"

"Yep," he said. "Just like we view agriculture and hunting for food today."

He finished with what I took as a joke, "We'll be living in dream virtual reality worlds haha."

"That's interesting," I said. "Then it becomes an ethical question of if that is a 'better' world to live in or not... personally, I don't know."

Mark responded affirmatively with the red "100" emoji.

Glimpses into the Future

Have you seen the dystopian movie Blade Runner 2049?

My wife and I disagreed sharply on whether that film accurately depicted the future.

Or Spielberg's recent release of Ready Player One?

In the film, people escape the decaying and depressing real world and find purpose and relationships -- the stuff that makes us happy -- in the virtual world of "the Oasis."

These movies make a guess at what the future will be like, much as Elon Musk's tweets do.

What A.I. experts say

Musk, one of the leading minds in this space and a co-founder of OpenAI, a "non-profit artificial intelligence research company that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole," is a persistent harbinger of the dangers of artificial intelligence. His tweets ring ever more ominous.

In contrast, the "Head of A.I. at Google slam[med] the kind of 'A.I. apocalypse' fear-mongering Elon Musk has been doing" according to a CNBC headline.

I attended the conference at which the aforementioned Google A.I. chief, John Giannandrea -- then the company's senior vice president of engineering -- spoke. From the stage, he said that everyone needed to calm down. "I just object to the hype and the sort of sound bites that some people have been making," he said. "I am definitely not worried about the AI apocalypse."

Earlier this year, Giannandrea left his position at Google and took a seat on Tim Cook's bench of executives as Apple's head of "machine learning and A.I. strategy."

Kevin Kelly, founding executive editor of WIRED, believes the fearful notion of a "superhuman A.I." is an urban legend. A myth. And he's laid out five reasons to defend his thesis.

Still, it doesn't take much of an imagination to run ahead on the rapidly developing path we're on.

We're heading towards a world not unlike Ready Player One. Machines are replacing humans' most mundane, unsafe, and undesirable jobs.

If you're worried about robots taking your job, check out the website

What does an automated, virtual world look like?

The benefit of an automated, virtual world is, in theory, the freedom to do more of what we want and to live our most unrestrained lives while remaining completely safe. The more artificial intelligence handles our daily tasks, the more free time we'll have and the more activities we'll be able to experience, many of which we can't experience now (e.g., walking through ancient Middle Eastern ruins). Soon, a comfortable headset equipped with cinematic optics and immersive audio could take us there. Microsoft AI is already recreating dilapidated historical sites with its partner Iconem, a startup that specializes in the 3D digitization of endangered cultural heritage sites.

If humans shift to more and more digital experiences and therefore more controlled and monitored experiences (e.g., sitting in a VR chair in the comfort of your living room), then the world should soon become a safer and safer place, right?

The theory would say yes, but when thinking through vexingly complicated issues such as artificial intelligence and futurism, black and white answers tend to be elusive, and we're left with only speculation.

This is why an automated, virtual world is difficult to imagine with any degree of confidence, especially when it's been so heavily colored by drama-motivated Hollywood scriptwriters and fiction novelists. It all tends to end poorly: Obese sickos in basements. Dominating cybercorporations. Self-identity confusion. Unshakeable distrust of reality. Robot love. 

More experts weigh in...

But experts -- those working closest to the technology -- are bastions of optimism and hope. Eric Schmidt, executive chairman of Alphabet, Inc., Google's parent company, told TechCrunch that Elon Musk's foreboding view of artificial intelligence is "exactly wrong," reports Anthony Ha.

"He doesn't understand the benefits that this technology will provide to making every human being smarter," Schmidt said. "The fact of the matter is that AI and machine learning are so fundamentally good for humanity."

Sebastian Thrun, futurist and co-founder of Udacity, co-wrote with Schmidt in a Fortune Magazine column:

"We believe AI has the potential not only to free us from the negative, but to enhance what's most positive about us as human beings. In playing with AlphaGo, grandmaster Sedol gained a much deeper understanding of the game and has since dramatically improved his level of play. We could all be like Sedol, harnessing AI to improve the things we do every day.

Imagine a world where clever apps and devices could help us recognize every person we've ever met, recall anything we've ever said, and experience any moment we've ever missed. A world where we could in effect speak every language. (We already see glimmers of this today with Google Translate.)"

Sounds peachy, no?

The "right" way forward

Ethics + artificial intelligence is, and will continue to be, a hot topic. But in spite of the warnings and worst-case possibilities, the good seems to outweigh the concern. Humanity's thirst for innovation and ease continues to make the world a better place -- but at what cost?

Do we miss the agricultural lifestyle?

Do we long to return to the days when we hunted wild game in the woods for survival?

No, for the most part, we don't. Perhaps we miss the idea of these simple pursuits, which is why they still survive as hobbies (gardening, hunting, etc.), but any reasonable, capitalistic farmer leverages machinery and digital technology to run his or her farm as efficiently as possible. And God knows how we use machinery and robots to manufacture our meat at scale.

Technology is morally neutral until it's applied. Like an airplane, a dollar bill, or a laptop, it's neither good nor bad in itself. It depends, one hundred percent, on how it's used.

For this reason, I believe philosophy and religion -- the studies of what is objectively right and wrong -- should never be divorced from technology decision-making, especially in this explosive age of artificial intelligence. The last word of the cliché "With great power comes great responsibility" doesn't work anymore. Responsibility assumes knowledge of what needs to be done, hence its root word, "response." Responsibility means doing the right thing. But in an unprecedented world of rapidly developing artificial intelligence, what is the right thing to do?

Fortunately, it's too early for you or me to need the answer. But it seems the prosperity of human existence will, unfortunately, depend heavily on the moral compasses of the leaders of the world's biggest tech companies, who face and influence these decisions every day.

American AI researcher and writer for the Machine Intelligence Research Institute, Eliezer Yudkowsky, sums up a diligent approach to thinking about this subject going forward:  "By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it."