3 Things You Should Find Out Before Using A.I. at Your Company

A.I. has powerful potential in business, but before you implement it, you need to consider the risks and how to mitigate them.

EXPERT OPINION BY RANA EL KALIOUBY, DEPUTY CEO, SMART EYE @KALIOUBY

MAR 2, 2020

The artificial intelligence (AI) race is well underway. The number of companies implementing AI has grown by a staggering 270 percent over the past four years, according to Gartner, and even companies that haven't made the leap yet are thinking about it.

But, if you’re a CIO or business leader hoping to use AI — whether you’re developing your own technology in-house or licensing it from a firm — there are serious implications you need to consider.

The number one thing to look out for is the risk of bias. Unfortunately, we've seen many instances in which AI has been biased against minority groups. Not only is this unethical; it's also bad for business. If AI doesn't work for all people as intended, there's little benefit to using it in the first place.

So, if you want to make AI part of your business strategy, here are three key things to ask:

1. Ask about the data.

AI systems using machine learning are trained and tested with massive amounts of data. This data needs to be diverse and representative of the different people and use cases that it will touch — otherwise, it will not work correctly. Start by asking where the data comes from and how it’s collected, and think critically about areas where the data might be lacking.

Even if you have diverse, representative data, bias can still creep in if you don't have a careful protocol for training and validating AI models. When training an AI algorithm, you want to make sure the data is balanced not only for demographics (such as gender, age, and ethnicity) but also for appearance: is a person wearing glasses, a hijab, or a face mask? It's crucial to train the algorithm with substantial data on each subpopulation.
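To make that concrete, here is a minimal sketch, assuming pandas is available, of auditing a training set's metadata for balance. The column names ("gender", "age_group", "wears_glasses") and the data are hypothetical placeholders for whatever attributes your dataset actually records:

import pandas as pd

# Toy stand-in for a real dataset's metadata table (hypothetical columns).
df = pd.DataFrame({
    "gender": ["f", "m", "m", "f", "m", "m", "m", "f"],
    "age_group": ["18-30", "18-30", "31-50", "51+", "18-30", "31-50", "18-30", "51+"],
    "wears_glasses": [True, False, False, True, False, True, False, False],
})

# Share of samples per subgroup; large imbalances flag where to collect more data.
for col in ["gender", "age_group", "wears_glasses"]:
    print(df[col].value_counts(normalize=True).round(2), "\n")

# Intersections matter too: balanced marginals can still hide an empty cell,
# such as no samples at all of one combination of attributes.
print(pd.crosstab(df["gender"], df["wears_glasses"]))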

That thinking needs to carry over as you're validating AI, too. Often, people will report a single accuracy score, for example: "my AI can recognize people a certain percentage of the time." But you need to break it down further and evaluate how well the AI performs for different subgroups or populations, for example: "the AI works this percentage of the time for men, but only that percentage for women." Only then will you be able to uncover areas where the AI may be biased so that you can take steps to rectify them.
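As a rough illustration, here is a minimal sketch, again assuming pandas, of reporting accuracy per subgroup rather than one overall number. The test results and subgroup labels are made up for the example:

import pandas as pd

# Hypothetical test results: ground truth, model predictions, and a
# subgroup attribute for each sample.
results = pd.DataFrame({
    "subgroup": ["men", "men", "men", "women", "women", "women"],
    "label":    [1, 0, 1, 1, 1, 0],
    "pred":     [1, 0, 1, 0, 0, 0],
})

overall = (results["label"] == results["pred"]).mean()
by_group = (results
            .assign(correct=results["label"] == results["pred"])
            .groupby("subgroup")["correct"]
            .mean())

print(f"overall accuracy: {overall:.0%}")  # a single score can look acceptable...
print(by_group)                            # ...while one subgroup lags far behind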

2. Ask about the team building the AI.

Mitigating bias hinges on diverse teams: after all, we build what we know. Even with good intentions, if the people developing algorithms come from similar demographics and backgrounds, they may unwittingly introduce bias. Only when teams are diverse can someone say, "You know, I noticed that there isn't enough data of people who look like me. Can we make sure we include that?"

My company, Affectiva, faced this firsthand in our early days. Our data labeling team in Cairo flagged that, at the time, we did not have any data of women wearing hijabs, which was a huge oversight. So we set out to add that to our dataset.

Diverse teams also have the potential to think of new use cases for technology that are representative of different groups, and to solve challenges for different groups of people. Not only is this the right thing to do, but it’s good for business, and key for moving the industry forward.

3. Ask about how AI will be deployed.

Solving the AI bias problem isn't just a matter of building accurate systems; how the technology is used is equally important. You need to make sure that, in the real world, AI will not introduce bias or have unintended consequences.

Take law enforcement, for example. Companies have designed AI to predict the likelihood that a defendant in a criminal case will commit another crime, in order to inform sentencing. But reports show that this technology is often biased against minority groups, with devastating results. Until the industry can ensure that AI systems will be accurate, representative, and deployed in a way that does not introduce bias, these use cases should be avoided.

The bottom line: Don’t wait until there’s an issue to address bias.

Safeguarding for bias can’t be a one-time thing. If your company is using AI, you need to continuously re-evaluate your protocols and ask tough questions to ensure you’re getting it right.

If you want to join the ranks of companies using AI, but you’re concerned about the risks, there are resources you can turn to. For example, the Partnership on AI brings together diverse, global voices to study and formulate best practices for AI technologies.

The AI race is only speeding up. Our approach to mitigating the risks needs to keep pace.

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
