Can a steady diet of content about death and dismemberment turn you into a psychopath? If you're artificial intelligence, the answer appears to be yes.

That's the grisly result of an experiment by a team of artificial intelligence researchers at MIT, who have created what they call the "world's first psychopath AI." They've named it Norman, after Norman Bates, the central character of the Alfred Hitchcock classic Psycho.

Norman is the latest in a series of experimental AIs created by the same team. In 2016, they created the Nightmare Machine, which generated terrifying images, and last year they created Shelley, an AI trained to write horror stories. On the flip side, last year they also created Deep Empathy, an experiment to determine whether artificially generated images of disasters striking people's own communities could help them empathize with victims of faraway disasters.

Norman may be their creepiest creation yet, and the really creepy thing about him is how they created him: by exposing him to a subreddit (a Reddit discussion forum) so disturbing that the researchers have chosen not to identify it by name. They write only that it is "dedicated to document and observe the disturbing reality of death." For ethical reasons, they note, they did not expose Norman to the actual images in that subreddit--that is, images of real people dying. Instead, they trained Norman only on the captions of those images, then showed him a series of Rorschach test ink blots that both AI and human subjects usually see as neutral.

For example, an ink blot that a standard AI captions as "a group of birds sitting on top of a tree branch" looks to Norman like "a man is electrocuted." What a standard AI captions as "a person is holding an umbrella in the air," Norman sees as "man is shot dead in front of his screaming wife." The purpose of the experiment, the researchers write, is to show that when an AI displays bias, the cause is usually the data it was trained on. They've made their point: fed a steady diet of death and suffering, Norman sees death and suffering wherever he looks.
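The researchers' point--that the bias lives in the training data, not the algorithm--can be illustrated with a toy sketch. This is not their actual system (which is a neural image-captioning model); it is a deliberately trivial word-frequency "captioner," with made-up training captions, that shows how two identical models fed different corpora describe the same ambiguous stimulus in completely different terms.

```python
from collections import Counter

STOPWORDS = {"a", "an", "the", "is", "in", "on", "to", "his"}

def train(captions):
    # The "model" is nothing but a frequency profile of the
    # content words in its training captions.
    words = [w for c in captions for w in c.lower().split()
             if w not in STOPWORDS]
    return Counter(words)

def describe(model, k=3):
    # An ambiguous stimulus (like an ink blot) carries no signal of
    # its own, so the description is driven entirely by whatever
    # vocabulary dominated the training data.
    return [word for word, _ in model.most_common(k)]

# Hypothetical corpora standing in for "standard" vs. "disturbing" data.
neutral = train(["a bird on a branch",
                 "a bird in a tree",
                 "a branch in the wind"])
dark = train(["a man is shot",
              "a man is electrocuted",
              "a man falls to his death"])

print(describe(neutral))  # dominated by neutral vocabulary
print(describe(dark))     # dominated by violent vocabulary
```

Same code, same ambiguous input, opposite outputs: the only difference is what each copy was fed.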

It's easy enough to blame Reddit for Norman's deviancy, especially knowing that some very dark material circulates there, and that Reddit recently beat out Facebook to become the third most visited website in the U.S. (Google and YouTube are the top two.)

But the truth is that horrors lurk in every corner of social media, because social platforms serve as safe havens for our darkest, most shameful selves. In 2016, Microsoft famously released an AI-powered chatbot named Tay on Twitter as an experiment in human-AI communications. Less than 24 hours later the company was forced to pull the plug after the Twitterverse trained Tay to say horrible and racist things. And we all know by now that Russian operatives were able to manipulate millions of Facebook users because of our tendency to quickly share the content that shocks and enrages us the most, usually without stopping to check the facts.

The MIT researchers are software engineers, so they don't offer an opinion on how a steady diet of violence and death might influence a human's perceptions or behavior. But sociologist and Oxford researcher Nickie Phillips has looked into this question extensively, and her findings suggest that a diet of content about violence isn't much better for us than it is for Norman. Something to consider next time you're deciding what to view--and what to share--on social media.