Artificial intelligence work is bound up with myriad concerns about bias, discrimination, and other shades of prejudice. So if you run a business that builds A.I., how do you design an ethical system?
The organization Women in A.I. Ethics recently hosted a conversation with Milena Pribic, IBM Senior Designer for A.I., and here's her advice on how to do it right.
1. Dig deeply into team practices.
On Pribic's team, efforts to create A.I. using an ethical approach begin with understanding how the team building the A.I. product works. This encompasses the overall effort, team members' ongoing behaviors, and the practices they've put in place to ensure that something like fairness "isn't just checked off a list," she said during the chat.
To ensure that a team is doing more than simply going through the motions, Pribic said it's important that the people on the team "really understand the context" and "the repercussions" of what they're building. Teams can fail to develop that understanding, she noted, when they don't interact enough with end users. One fix Pribic uses is pairing data scientists with designers so that the entire team understands the stakes and the concepts in the same way.
2. Look at team composition.
"Obviously, making sure that your team itself is inclusive," she said, is crucial in building ethical systems. "Inclusive representation is one thing, but in order to be truly inclusive, we need to have inclusive participation." That can mean ensuring that the right mix of people is actually in the room for certain meetings and decisions.
After all, Pribic asked, "What shot do we have to actually make sure that we're ensuring fairness and addressing bias if we're not doing that in this very room?"
3. Understand outcomes.
As for the nuts and bolts of doing ethical A.I. work, Pribic said her team conducts a range of design exercises. As an example, she said, "it could be an exercise or two that lays out tertiary effects of what you're designing and what you're creating or stakeholder maps that will call out potentially someone who's been indirectly affected by A.I. that you haven't initially thought about."
Beyond that, her team might hold full workshops focused on fairness, or broader discussions about the effects of particular products.