Artificial intelligence has an ageism problem. 

That's according to a recent World Health Organization policy brief explaining that the data used by A.I. in health care can be unrepresentative of older people. A.I. is a product of its algorithms, the brief explains, and can draw ageist conclusions if the data that feeds those algorithms is skewed toward younger individuals. This could affect, for example, telehealth tools used to predict illness or major health events in a patient. It could also skew the data used in drug development. Ultimately, excluding older adults from the A.I. development process can make it harder to get them to adopt new A.I. applications in the future.

"To ensure that A.I. technologies play a beneficial role, ageism must be identified and eliminated from their design, development, use, and evaluation," Alana Officer, head of the Demographic Change and Healthy Ageing unit at WHO, writes in a summary of the report. She adds that the biases of society are often replicated in A.I. technologies.

Here are eight ways to make sure A.I. doesn't discriminate against older consumers, as listed in the WHO policy brief.

1. Include older consumers in the design of A.I. technologies.

When developing any A.I. technology, make sure older people participate in focus groups and give product feedback.

2. Hire age-diverse individuals for data science teams.

Hire and train data scientists of all ages. Older employees are more likely to recognize and flag ageism in data collection or in the product's design.

3. Conduct age-inclusive data collection.

When collecting demographic data to feed into A.I. algorithms, make sure all age groups are represented, just as you would for other personal identifiers such as race or gender.
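One practical way to act on this is to audit a dataset's age distribution before training. The sketch below is a minimal, hypothetical illustration (the age buckets, the 15 percent threshold, and the sample data are assumptions for the example, not WHO recommendations):

```python
# Minimal sketch: flag age groups that are underrepresented in a dataset.
# The buckets, threshold, and data here are illustrative assumptions.
from collections import Counter

def age_group(age):
    """Bucket an age into a coarse demographic group."""
    if age < 40:
        return "under-40"
    if age < 65:
        return "40-64"
    return "65+"

def audit_age_representation(ages, min_share=0.15):
    """Return each group whose share of the dataset falls below min_share."""
    counts = Counter(age_group(a) for a in ages)
    total = len(ages)
    return {g: counts.get(g, 0) / total
            for g in ("under-40", "40-64", "65+")
            if counts.get(g, 0) / total < min_share}

# Example: a dataset skewed toward younger patients.
ages = [25, 31, 29, 45, 52, 38, 27, 33, 41, 70]
print(audit_age_representation(ages))  # only one 65+ record, so that group is flagged
```

A check like this can run automatically whenever new training data is collected, so skew is caught before it reaches a model.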

4. Invest in digital infrastructure and digital literacy.

After a product that incorporates A.I. is developed, it's important to invest in education and accessibility initiatives. This can help make older consumers--and their health care providers--more likely to benefit from the technology.

5. Give older consumers the right to consent and contest.

Technology should benefit humans, not the other way around. Make it clear and easy for older people to consent to, or opt out of, data collection and the sharing of any personal information.

6. Work alongside governance frameworks and regulations.

The policy brief recommends that government agencies help create frameworks and procedures to prevent ageism, and that private businesses work with those agencies to comply with existing regulations.

7. Stay up to date on new uses of A.I. and how to avoid bias.

As new technologies emerge rapidly, it's important to keep researching and understanding how A.I. can create new and unintended biases.
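Ongoing monitoring can make this concrete: once a model is deployed, comparing its error rates across age groups can surface bias that wasn't visible at launch. The sketch below uses hypothetical predictions and labels purely for illustration:

```python
# Minimal sketch: compare a model's error rate across age groups.
# The records below (group, prediction, true label) are made-up example data.
def error_rate_by_group(records):
    """records: iterable of (group, prediction, label) tuples."""
    totals, errors = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        if pred != label:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

records = [
    ("under-65", 1, 1), ("under-65", 0, 0), ("under-65", 1, 1), ("under-65", 0, 1),
    ("65+", 1, 0), ("65+", 0, 1), ("65+", 1, 1), ("65+", 0, 0),
]
rates = error_rate_by_group(records)
# A large gap between groups is a signal to re-examine the training data.
print(rates)  # {'under-65': 0.25, '65+': 0.5}
```

Tracking a gap like this over time turns "keep researching" into a routine, measurable practice.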

8. Create robust ethics processes.

In the development and application of A.I., it's important to formalize processes like the ones above to maintain accountability in creating equitable and inclusive products.