Artificial intelligence is supposed to free the hiring process from prejudices and biases. We can have a totally neutral system that evaluates candidates and selects the best possible one, regardless of race, gender, or any other characteristic.
It sounds fantastic, but in that regard it's been an abysmal failure. Artificial intelligence is only as good as the programmers who build it, and they, of course, are humans with flaws. Amazon, which has gobs of money to pour into development, had to scrap its A.I. recruiting tool because the bot didn't like women.
HireVue faces pressure from rights groups over its hiring systems, which, according to The Washington Post,
use video interviews to analyze hundreds of thousands of data points related to a person's speaking voice, word selection and facial movements. The system then creates a computer-generated estimate of the candidates' skills and behaviors, including their "willingness to learn" and "personal stability."
It's these types of programs that have consultants in South Korea creating new business models--teaching people how to beat the bots.
This model of gaming the system has been in place for as long as people have applied for jobs. There are thousands of articles on the internet that tell you how to answer standard interview questions ("Where do you see yourself in five years?") or extol the virtues of a firm handshake. The training these consultants give is really no different from that, except that instead of trying to convince a human, you're trying to convince a machine.
And that makes this training so much more valuable. I can tell you "firm handshakes are important!" and then you interview with someone who prefers the dead-fish version of shaking hands and my advice harms instead of helps. But if two companies use the same software, the information from these consultants will help you shine regardless of who the hiring manager is.
That's the goal, of course: to take the human biases out of interviews. But the biases still exist in A.I.--it's just that every job requires you to overcome the same preferences, which makes the system easier to beat. Once the consultants figure out what the algorithms want, they can train you to respond the right way.
While A.I. potentially levels the playing field in one respect, people who can afford this training will do better in interviews than those who can't. Interviewers already discriminate by class, so A.I. doesn't solve that problem at all.
Can artificial intelligence potentially make hiring better? Probably. But, as these consultants understand, anytime there is a system, there is a way to beat it. Humans are fallible, but at least we all know they are. Artificial intelligence allows you to think the process is bias-free, but it's not. It just makes for consistent bias.