Artificial intelligence (AI) is getting really smart. It can already recognize what's in the photos you share online, it powers everyday technology like Siri and Alexa, and it handles even more complex tasks like keeping your car in its lane or even driving on its own. But all of those tasks require a computer that has essentially been trained by a human, based on human knowledge and expertise.

That's not enough for the researchers at OpenAI.

In fact, the lab just announced a partnership that will have Microsoft investing a cool $1 billion in the for-profit research organization -- originally founded by Elon Musk and others, and now run by Sam Altman -- in an attempt to pursue the "holy grail" of AI, known as artificial general intelligence (AGI).

Considered one of the ultimate computing challenges, AGI is, in theory, capable of exceeding human capacity for understanding and of mastering more individual fields than any one human could.

Musk left OpenAI earlier this year over disagreements about the future of the organization and the technology, as well as competition for the same research talent at his other companies, Tesla and SpaceX, according to a report by Bloomberg.

A high-powered deal.

Microsoft's Azure cloud-computing platform will host and power the new technology in an exclusive partnership. Most of the $1 billion investment will go toward the raw computing power necessary for the machine learning behind AGI. Microsoft will also develop software and machine learning tools based on OpenAI's technology.

In a statement, OpenAI said: "We believe that the creation of beneficial AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity."

Those sound like lofty goals, but the reality is that researchers are racing to master computing power that could bring about substantial advancements not only in everyday technology but also in independent discovery -- charting out, for example, new treatments for currently incurable medical conditions.

For example, the OpenAI researchers have made advancements in teaching a robot hand to manipulate objects similar to the way a human would. The implications are that eventually, robots could learn to conduct complicated medical procedures, even if a human surgeon isn't available. 

Real-life science fiction.

It could also destroy us. That's not hyperbole. It might sound like the stuff of science-fiction movies, but if AGI ever becomes a thing, we could very well be living in a world where we'll need a lot more than Arnold Schwarzenegger to protect us from the machines.  

That's because while there are enormous benefits to machines capable of solving the world's problems, there's also a greater potential for them to create even bigger problems that we can't imagine, never mind anticipate. The question is simple: Are we comfortable building something smart enough to recognize that maybe we shouldn't be in charge anymore?

A New York Times article last year detailed a conversation in which Elon Musk warned fellow tech billionaire Mark Zuckerberg that AI could be "potentially more dangerous than nukes" -- a reference to the destructive power of nuclear weapons.

A lesson in unintended consequences.

While all of this is interesting, and the amount of computing power required is staggering, there's actually a more important lesson here for all of us. The stuff you build has consequences. That might seem obvious since you are building something usually because it will have some intended effect on the lives of your customers. 

But there are often unintended consequences, and it's worth considering what safeguards you'll build to manage them. Microsoft CEO Satya Nadella even made a point of saying that the company was committed to keeping "AI safety front and center -- so that everyone can benefit."

Even if you're not developing the smartest computer in the world, the everyday technology we use has changed the way we work, interact, and communicate -- and not always for the better. Indeed, I think you could argue that all of the technology we use to be more productive often has the opposite effect.

That doesn't mean you shouldn't pursue big ideas and work tirelessly to bring them to life. It does, however, mean that you should take into consideration what happens after you expose them to the world, and be intentional about how to build things that add real value to people's lives.

That seems like a pursuit worth investing in.