The big problem with recruiting and hiring is that it's done by humans, and humans make mistakes and are subject to biases. So, the best solution is to create programs that evaluate candidates and choose the best person. Problem solved!

But the latest research in artificial intelligence shows that even robots have bias problems.

Researchers used a neural network called CLIP to control a robotic arm. The arm sorted blocks printed with people's faces into different categories. Easy enough, right? But when researchers asked the robot to pick the doctors, janitors, murderers, and homemakers out of the face-printed blocks, the robot demonstrated "toxic stereotypes."

Researchers found that the robot was more likely to pick a block with a Black man on it when asked for a criminal block and more likely to pick a man block over a woman block when asked for a doctor block.
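Under the hood, a system like this scores how well each image matches a text prompt and then picks the highest-scoring block. Here's a minimal sketch of that selection loop. The embedding functions and vectors below are hypothetical stand-ins for CLIP's real encoders, which map text and images into a shared vector space:

```python
import math

# Hypothetical stand-ins for CLIP's encoders. The real model learns these
# mappings from hundreds of millions of image-caption pairs scraped online.
def embed_text(prompt):
    return {"pick the doctor block": [0.9, 0.1]}.get(prompt, [0.5, 0.5])

def embed_image(block_id):
    fake_embeddings = {"block_a": [0.8, 0.2], "block_b": [0.1, 0.9]}
    return fake_embeddings[block_id]

def cosine(u, v):
    """Similarity between two vectors: 1.0 means they point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def pick_block(prompt, block_ids):
    """Return the block whose image embedding best matches the prompt."""
    text_vec = embed_text(prompt)
    return max(block_ids, key=lambda b: cosine(embed_image(b), text_vec))

print(pick_block("pick the doctor block", ["block_a", "block_b"]))  # block_a
```

The key point: there's no "doctor detector" anywhere in this loop. The robot just picks whichever face its training data most strongly associates with the word, which is exactly how the stereotypes creep in.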

This is not a robot you want hiring your people, and here's why humans are currently better than the bots.

Stereotypes can be based on reality.

We all say stereotypes are harmful, but we all use them all the time to simplify things. If you're stuck at a dreaded cocktail hour with people you've never met before, whom do you pick to talk to first? It's certainly not random. You're using your past experiences to help you figure out who is most likely to be your newfound friend.

The robot was more likely to pick a male block when asked to pick a doctor, and it's easy to condemn the robot as sexist. But the robot had no information about these blocks other than the pictures--there was no way to tell who was a doctor and who was a criminal. Artificial intelligence "learns" by collecting data points, and women make up only 36 percent of practicing physicians.

So for A.I. to pick up on the actual numbers and apply them makes sense from a logical perspective.
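You can see this dynamic in a few lines of code. Here's a toy sketch, not the actual CLIP system: a "model" that knows only occupation-by-gender counts (hypothetical numbers mirroring the 36 percent figure above) will always pick the majority group, no matter who is actually in the picture:

```python
# Toy sketch of base-rate learning. The counts are hypothetical training
# data, chosen to mirror the 36 percent figure cited above.
training_counts = {
    "doctor": {"man": 64, "woman": 36},
}

def pick_block(occupation):
    """Pick the group most common for this occupation in the training
    data -- i.e., mechanically reproduce the stereotype."""
    counts = training_counts[occupation]
    return max(counts, key=counts.get)

print(pick_block("doctor"))  # always "man", regardless of the person shown
```

That's the logic the robot is following: statistically defensible, and still exactly what we'd call a stereotype if a human recruiter did it.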

Humans can overcome stereotypes.

While we know that more men than women are doctors (although that is changing, as women currently make up slightly more than half of medical students), we can stop ourselves from assuming the woman in the doctor's office is the nurse. We can train ourselves to flip assumptions to test them.

Kristen Pressner's pioneering TEDx Talk introduced the "flip it to test it" concept, which helps you battle your inner desire to simplify things with stereotypes. Ask yourself, "If this person were another race/gender/age/whatever, would I approach this the same way?" It's a quick and easy trick we can all do.

Hiring is hard.

While this particular A.I. experiment wasn't designed for hiring, it's easy to see how a robot could fail at this. It also shows how difficult it is to evaluate people without knowing essential things about them. 

This is why it's essential to have clear guidelines in your hiring process to evaluate candidates based on their knowledge, skills, and abilities and rely less on stereotypes. ("Oh, this person will be great! He graduated from the same school I went to!") 

Overcoming stereotypes is hard for humans, but it looks like it's even harder for robots. Recruiters shouldn't worry about job automation just yet.