We've all heard the apocalyptic predictions about AI's future. Elon Musk has called it our "biggest existential threat" and likened it to "summoning the demon." Other great minds are similarly vocal about their fears. The late Stephen Hawking said that AI could wipe out the human race. Author James Barrat wrote a book whose title, Our Final Invention, has become a mantra of the anti-AI movement.

While we are clearly well advised to move forward with eyes wide open as we develop generalized AI, there's an even greater near-term danger: imbuing AI with biases that perpetuate old attitudes and social norms.

A recent UNESCO report, I'd Blush If I Could, developed with the government of Germany and EQUALS (an organization encouraging the development of women's skills in technology), posed a simple question: "Why do most voice assistants have female names, and why do they have submissive personalities?"

The title of the UNESCO report was the standard answer given by the default female voice of Apple's digital assistant, Siri, when responding to derogatory gender-specific statements. According to UNESCO, the reason that digital assistants have these biases is that there are "stark gender-imbalances in skills, education and the technology sector."

The concern here is that we are building technology that does more than simply perform calculations and execute a series of pre-scripted commands. The applications, digital assistants, and AI bots that we are currently using carry with them the social norms that they have learned through the data they are fed and the interactions they encounter.

Sometimes that's a relatively mild form of bias, one that has less to do with the AI itself than with how it's marketed.

"What sort of social norms are we trying to perpetuate or create in the increasing interactions we and our children have with digital assistants?"

Consider, for example, the names of today's digital assistants, which are all female. Although Alexa's name refers to the Library of Alexandria, it could just as easily have been Alex. Siri's namesake is a Norse (Scandinavian) female name translated as "a beautiful woman who leads you to victory." Microsoft's Cortana is named after an AI character from the game Halo, one that appears as a naked female hologram.

Marketers, who decide what resonates best with users, most often claim that a female voice is more engaging and marketable. They'll also claim that the personalities of their assistants are meant to come across as intelligent and funny.

An Amazon spokesperson quoted in a Quartz article said "Alexa's personality exudes characteristics that you'd see in a strong female colleague, family member, or friend--she is highly intelligent, funny, well-read, empowering, supportive, and kind." 

However, when the article's author tested them, all of the digital assistants came back with responses that could, at best, be characterized as apologetic or apathetic. I can tell you unequivocally that it's not how I'd want my daughter to respond to the same comments.

While it's easy to understand why no company wants a digital assistant that puts up a fight, becomes indignant, or calls someone out for harassing it (her?), it's worth asking the obvious question: "What sort of social norms are we trying to perpetuate or create in the increasing interactions we and our children have with digital assistants?"

The author Jim Rohn once said, "You're the average of the five people you spend the most time with." In the case of digital assistants, we could just as easily say that they are the average of the handful of developers who built them, or of the content those developers used to train the AI.

Based on that, it would seem that AI bias is inevitable, since an AI will only be as unbiased as the social context within which it learns.

However, this is where it gets interesting.

What we seem to be missing in all of this conversation about bias is that social context is not just about gender. There are numerous global differences, some nuanced and some pronounced, that shape what's culturally acceptable as we go from one part of the world to another. We may not agree with these differing attitudes toward how we treat each other, but we clearly put up with all but the most egregious of them under the doctrine of national sovereignty.

The bigger question, at least in my mind, is, "Should digital assistants be created with one set of values and norms that are exported from Silicon Valley and expected to be used around the world, or should they be fine-tuned to localized behaviors, even the ones we consider aberrant?"

Navigating that question forces us through a minefield of controversy--and it darn well should.

Maybe it's just the eternal optimist in me, but the way I see it, AI raises an entirely new set of ethical conundrums that will up our game as humans. We will have to face the fact that we are training a new species, one that embodies who we are and what we value, and then holds that up to us like a mirror on ourselves and our beliefs. In many ways it is an opportunity to ask questions that push us toward what may be the final frontier of globalization.

That may not seem like an apocalyptic threat; it's certainly not an uber-intelligent AI overlord that sets out to eradicate the human race as our final invention.

But it may well put us on an even more fascinating and challenging path, one that helps us evolve into better humans.