You can learn a lot from a psychopath, and you don't even have to binge-watch Netflix's "Mindhunter" to do it. You just need to study Norman, the artificial intelligence psychopath aptly named for the pivotal figure in Alfred Hitchcock's "Psycho."

The eerie, disconcerting brainchild of MIT researchers, Norman exemplifies for some what many business leaders fear: Robot revolutionaries are coming, and they might not all be devoted to serving humanity in innocent ways. Truly, Norman is the worst of the worst. His is a cautionary tale, but not the one you might think.

Norman wasn't created as evidence that nightmares about robot takeovers will come true. After all, the same inquisitive MIT pioneers have created AI monsters before, and none has taken over the world yet. Instead, the story of Norman can teach you a critical lesson about implementing machine learning.

Data bias is a real problem.

Norman's extreme diet of dark Reddit feeds, images, and ideas shows why understanding data bias matters when you use AI in the corporate world. The model behind the Norman "personality" learned from whatever information it was fed. Is it any wonder that it saw destruction and darkness in Rorschach ink blots rather than bats and butterflies?

Of course, Norman was actively trained to make these disturbing connections. A similar algorithm fed gentler data points developed a far less psychotic outlook on life. In other words, output depends entirely on input.
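For readers who want to see that principle in miniature, here is a hedged sketch in Python using scikit-learn. The tiny "dark" and "gentle" corpora are invented for illustration and have nothing to do with the actual MIT experiment; the point is simply that the same algorithm, trained on differently skewed data, will typically label the same ambiguous input in opposite ways.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The same ambiguous scene descriptions go into both training diets.
ambiguous_scenes = [
    "shadowy figure in the doorway",
    "storm clouds over the hills",
    "quiet street after midnight",
]

# "Dark" diet: the ambiguous scenes are labeled as threats.
dark_texts = ambiguous_scenes + ["children laughing in the park"]
dark_labels = ["threatening", "threatening", "threatening", "pleasant"]

# "Gentle" diet: the very same scenes are labeled benignly.
gentle_texts = ambiguous_scenes + ["armed intruder smashing the door"]
gentle_labels = ["pleasant", "pleasant", "pleasant", "threatening"]

# Both models are shown the same "ink blot."
ink_blot = ["dark shape against the clouds"]

for name, texts, labels in [("dark-fed", dark_texts, dark_labels),
                            ("gently-fed", gentle_texts, gentle_labels)]:
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    print(f"The {name} model calls it: {model.predict(ink_blot)[0]}")
```

Same architecture, same test phrase; only the training diet differs.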

When you use any kind of AI-based software or programming, remember that bias rides in with the data. Your AI tools won't necessarily spot it, so it's up to you to make sure incoming data isn't skewing your ML-fueled decision-making. Otherwise, you could waste time and resources targeting the wrong audience with digital advertising or sending branding messages that backfire.
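If you want a concrete starting point, a simple pre-training sanity check like the sketch below can flag when one group dominates your data before you ever feed it to a model. The segment names and the 70 percent threshold are illustrative assumptions, not a standard.

```python
from collections import Counter

def flag_skew(labels, max_share=0.7):
    """Warn when any single label or segment dominates the incoming data."""
    counts = Counter(labels)
    total = sum(counts.values())
    for label, count in counts.items():
        share = count / total
        if share > max_share:
            print(f"Warning: '{label}' makes up {share:.0%} of the data; "
                  "anything trained on it will lean heavily that way.")
    return counts

# Example: campaign-response records where one audience segment dominates.
incoming_segments = ["segment_a"] * 90 + ["segment_b"] * 10
print(flag_skew(incoming_segments))
```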

Most individuals aren't well-versed in AI, which makes it difficult for them to put fear aside and understand how to use ML. By automating office chores like billing and accounting, AI takes mundane tasks off workers' to-do lists; it isn't eliminating jobs or scheming to destroy humankind. Rather than worrying about a generation of Normans hijacking your industry, read up on what AI can and cannot do. The deeper your education, the less afraid of AI you'll be.

AI isn't the next Chucky--it can't really think on its own.

Use the Norman experiment for education, not fearmongering. Regardless of scary AI scenarios like Norman, AI projects will continue. Therefore, it's up to you to embrace AI when it can potentially help grow your company. For instance, find out how AI might help you better understand your customers' mindsets, increase your conversion rates, and remove friction from the buying process.

Also, keep in mind that AI hasn't exactly developed the ability to engage in the Socratic method. It simply uses data without understanding its context from a human perspective.

Consider another recent AI project: IBM's Project Debater, a system that can debate like a champ and even work jokes into its presentations. But it isn't formulating theories and making decisions the way a human does. It won't think on its own beyond the data and algorithms humans have armed it with. So even if it can win a debate competition, it won't spend its off-time figuring out how to turn humans into its slaves.

At the end of the day, what business leaders need are AI companions that help them make sense of big data rather than do the thinking on their behalf. Those same entrepreneurs, executives, managers, and programmers must remain vigilant and use AI as a tool, not as a replacement for human understanding.

The good news amid all of the Norman mania is that it's broadening the conversation about AI. Don't get caught up in the fear frenzy, though. Move past your initial shock to delve below the surface. Norman has a lot to teach us about how we can run -- or ruin -- an organization.

Published on: Aug 20, 2018