Relying on artificial intelligence in your business comes with some serious responsibilities. Just ask Mary Gray, a Harvard anthropologist and senior principal researcher at Microsoft Research, who this week stressed the importance of collecting data mindfully when building A.I.--and how failing to do so can result in social injustice. Gray was speaking at the Conference on Neural Information Processing Systems about the relationship between A.I. and social justice.

"Data," she cautioned to the online audience, "is power."

Gray recounted a 2019 study by a group of UC Berkeley researchers that found racial bias in artificial intelligence widely used by the health care industry. The group studied a risk-prediction program from Optum, a subsidiary of insurance giant UnitedHealth Group; the software uses algorithms and A.I. to predict which patients will benefit from extra care. Health care professionals rely on it to guide their decision-making regarding who receives what treatments.

The researchers pored over nearly 50,000 medical records and found that the software had recommended Black patients for additional care about half as often as it should have, while recommending white patients at a far higher rate. The reason, Gray explained, was that the algorithm used patients' medical histories to predict how much each was likely to cost the health care system. Because white patients typically have better access to health care, owing to a variety of factors rooted in systemic racism, they tend to run up higher costs at the same level of illness; the model therefore scored them as higher risk and gave them priority for certain treatments.
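
For readers who build these systems, the mechanism is easy to see in a toy model. The sketch below is purely synthetic: the spending patterns, the made-up access gap, the linear model, and the 80th-percentile cutoff for "extra care" are all illustrative assumptions, not details of the actual Optum tool. It only shows how a cost label used as a stand-in for need can skew who gets flagged.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# Two patient groups with identical illness, but group 1 spends less on care
# at the same level of sickness (a stand-in for unequal access).
group = rng.integers(0, 2, n)
illness = rng.gamma(2.0, 1.0, n)                  # true, unobserved need
access = np.where(group == 1, 0.6, 1.0)

past_cost = illness * access * 1_000 + rng.normal(0, 200, n)    # claims history
future_cost = illness * access * 1_000 + rng.normal(0, 200, n)  # training label

# Model predicts future cost from claims history; the top 20% of predicted
# spenders are flagged for extra care.
X = past_cost.reshape(-1, 1)
risk = LinearRegression().fit(X, future_cost).predict(X)
flagged = risk > np.percentile(risk, 80)
sickest = illness > np.percentile(illness, 80)    # who actually needs the help

for g in (0, 1):
    rate = flagged[(group == g) & sickest].mean()
    print(f"group {g}: {rate:.0%} of its sickest patients flagged for extra care")
```

In this setup the lower-spending group's sickest patients are flagged far less often, even though both groups are equally sick, because the model faithfully predicts dollars rather than need.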

In other words, said Gray, the way the data was collected and organized perpetuated racial disparities. Though the Berkeley researchers focused only on one particular tool, they found the same disparity across 10 different algorithms used throughout the health care industry. According to the study, which was published in Science, these algorithms are applied to a combined 200 million people each year. 

After revealing their findings to UnitedHealth, the researchers worked with the company to develop a new algorithm that predicted future health--such as the likelihood of a condition flaring up--instead of expected future costs. In all, they were able to reduce the disparity in care between Black and white patients by 80 percent.
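
That shift, from predicting dollars to predicting health, can be sketched in the same toy setting. Everything below is again an illustrative assumption (the synthetic data, the invented diagnoses feature, the Poisson "flare-up" label, the cutoff), not the algorithm the study evaluated; the point is only that training the same model on a health-based label instead of a cost label tends to narrow the gap in who gets flagged.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)
illness = rng.gamma(2.0, 1.0, n)                  # true, unobserved need
access = np.where(group == 1, 0.6, 1.0)           # unequal access to care

past_cost = illness * access * 1_000 + rng.normal(0, 200, n)     # claims history
diagnoses = rng.poisson(4.0 * illness)                           # conditions in the chart
future_cost = illness * access * 1_000 + rng.normal(0, 200, n)   # old label: dollars
future_flareups = rng.poisson(illness)                           # new label: health events

X = np.column_stack([past_cost, diagnoses])
sickest = illness > np.percentile(illness, 80)

def flag_rates(label):
    """Flag the top 20% by predicted label; return the flag rate among each group's sickest."""
    score = LinearRegression().fit(X, label).predict(X)
    flagged = score > np.percentile(score, 80)
    return [flagged[(group == g) & sickest].mean() for g in (0, 1)]

print("cost label  :", [f"{r:.0%}" for r in flag_rates(future_cost)])
print("health label:", [f"{r:.0%}" for r in flag_rates(future_flareups)])
```

With the cost label, the model leans on spending history and under-flags the lower-spending group; with the health label, it leans more on recorded diagnoses, and the two groups' sickest patients are flagged at rates much closer to each other.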

"By adding more data into the model," said Gray, "they found a scalable, mathematically sound way to reduce racial disparities and increase social justice."

The example is a lesson for all business owners who create or rely on algorithms and A.I. Industries from real estate to finance use algorithms to make decisions that affect customers' lives. In some cases, the biases these systems perpetuate aren't visible on the surface, because they are baked in at the data-collection stage.

"Because data has become so powerful," said Gray, "it is imperative that we make it our collective responsibility to transfer the tools from engineers and to the communities and members of society carrying the benefits of what we can build."