You'd think you'd be able to tell the difference between a real news article written by a human and a made-up one written by an algorithm.

You'd think. 

OpenAI, a technology nonprofit that Elon Musk co-founded, found itself in an ethical conundrum. The research company aims to build artificial intelligence tools that can be used for good. Usually, OpenAI then shares its research and code widely for anyone to use (hence the word "open" in its name).

Technology that's so good, OpenAI won't release it

OpenAI has been experimenting with an A.I. text generator it built called GPT-2. But it's not sharing GPT-2 with the public as it has done with previous projects.

Fearing misuse, OpenAI is keeping GPT-2 locked down for further internal research. The public will not be allowed to access the code.

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," the company announced last week.

The artificial intelligence powers of GPT-2

Given just a single line of text as a prompt, GPT-2 draws on the patterns of human language it learned during training to generate full paragraphs that mimic the prompt's writing style. It can even write full articles.
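To see the core idea, here's a toy sketch of that "predict the next word, then repeat" loop. This is not GPT-2's actual code (GPT-2 uses a large Transformer neural network trained on millions of web pages); the tiny corpus and function names below are invented purely for illustration.

```python
import random

# Toy illustration of autoregressive text generation: condition on the
# text so far, pick the next word, and repeat. GPT-2 does the same thing,
# but with a large Transformer model instead of this bigram lookup table.

corpus = (
    "the model reads the prompt and the model predicts the next word "
    "and the model repeats until the text is long enough"
).split()

# "Training": record which words follow which in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Autoregressively extend a one-word prompt, one word at a time."""
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: no word ever followed this one in the corpus
        out.append(random.choice(candidates))
    return " ".join(out)
```

The gap between this sketch and GPT-2 is scale: instead of counting word pairs in one sentence, GPT-2 learned statistical patterns from a massive text corpus, which is why its output reads so convincingly.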

OpenAI quickly discovered a big problem with GPT-2: the algorithm-generated texts were good. So good that readers couldn't tell a machine wrote them. It was producing paragraphs of text that were eerily human.

Fake news that looks (and reads) shockingly real

The Guardian's Alex Hern fed GPT-2 a few sentences about Brexit. It spit out a full-length artificial article that even generated fake quotes using real names.

Hern points out that other text generators have obvious "tells" that signal their texts were not written by humans. GPT-2 shares none of those quirks. "When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject."

The fake news implications are obvious. This is the main reason OpenAI decided not to release the code. It fears it's too dangerous.

Malicious users could outsource writing any misinformation under the sun to GPT-2. All they'd need is a sample line of text to generate a plausible-sounding article, complete with sources and quotations that sound legitimate. And given the democratization of web design, it's easy to make any website "look" official online.

Elon Musk wants you to know he's not to blame

When OpenAI announced it wouldn't be releasing GPT-2, Musk tweeted to clarify his relationship with the company.

Even though he was one of its co-founders, Musk said he parted ways with the nonprofit last year. Musk said he needed to focus on Tesla and SpaceX projects, didn't agree with everything OpenAI wanted to do, and the two organizations were competing for talent.