Last week, I was in LA for the premiere of a new AI documentary, "Do You Trust This Computer?" (See video below.) It was a full house with a few hundred audience members. I was one of the AI scientists featured in the documentary, along with bigwigs like Elon Musk, Stuart Russell, Andrew Ng, and writers Jonathan Nolan and John Markoff. Elon Musk kicked off the evening with director Chris Paine, emphasizing how AI is an important topic that could very well determine the future of humanity. The excitement in the air was palpable. I was one of seven "AI experts" invited on stage after the screening for a Q&A session with the audience. Shivon Zilis, Project Director of OpenAI, and I were the only women.
The documentary did an excellent job surveying the research and applications of AI, from automation and robots, to medicine, automated weapons, social media and data, as well as the future of the relationship between humans and machines. The work my team and I are doing provided a clear example of the good that can come out of AI.
As I watched from my seat, I could hear the audience gasp at times, and I couldn't help but notice a couple of things: for one, there was a foregone assumption that AI is out to get us, and two, this field is still so incredibly dominated by men - white men specifically. Other than myself, only two other women were featured, compared to about a dozen men. But it wasn't just the numbers--it was the total air time. The majority of the time, the voice on screen was a man's. I vowed that on stage that night, I would make my voice heard.
Here are some of my key thoughts coming out of the premiere and dialogue around it:
1. AI is in dire need of diversity.
The first question from the audience was, "Do you see an alternative narrative here--one that is more optimistic?" YES, I chimed in, quoting Yann LeCun, head of AI research at Facebook and professor at NYU: "Intelligence is not correlated with the desire to dominate. Testosterone is!" I added that we need diversity in technology--gender diversity, ethnic diversity, and diversity of backgrounds and experiences. Perhaps if we had that, the rhetoric around AI would be more about compassion and collaboration, and less about taking over the world. The audience applauded.
2. Technology is neutral--we, as a society, decide whether we use it for good or bad.
That has been true throughout history. AI has so much potential for good. As thought leaders in the AI space, we need to advocate for these beneficial use cases and educate the world about the potential for abuse, so that the public can take part in a transparent discussion about how AI is applied. In a sense, that's what is so powerful about this documentary: it will not only educate the public, but spark a conversation that is so desperately needed.
My company, Affectiva, joined leading technology companies in the Partnership on AI--a consortium that includes Amazon, Google, Apple, and many more--that is working to set standards for the ethical use of AI. Yes, regulation and legislation are important, but they too often lag behind, so it's up to leaders in the industry to spearhead these discussions and act on them accordingly. To that end, ethics also needs to become a mandatory component of AI education.
3. We need to ensure that AI is equitable, accountable, transparent and inclusive.
The real problem is not the existential threat of AI. Instead, it is the challenge of developing ethical AI systems. Unfortunately, many today are inadvertently building bias into AI systems, perpetuating the racial, gender, and ethnic biases that already exist in society. In addition, it is not clear who is accountable for AI's behavior as it is applied across industries. Take the recent tragic accident in which a self-driving Uber vehicle killed a pedestrian. In that case, there was a safety driver in the car. So who is responsible: the vehicle? The driver? The company? These are incredibly difficult questions, but we need to set standards of accountability for AI to ensure its proper use.
4. It's a partnership, not a war.
I don't agree with the view that it's humans vs. machines. With so much potential for AI to be harnessed for good (assuming we take the necessary steps outlined above), we need to shift the dialogue to see the relationship as a partnership between humans and machines. There are several areas where this is the case:
- Medicine. For example, take mental health conditions such as autism or depression. It is estimated that we need 15,000 more mental health professionals in the United States alone. That number is huge, and it doesn't even factor in countries around the world where the need is even greater. Virtual therapists and social robots can augment human clinicians, using AI to build rapport with patients at home, intervene preemptively, and get patients just-in-time help. AI alone is not enough, and it will not replace doctors. But there's potential for the technology, together with human professionals, to expand what's possible in healthcare today.
- Autonomous vehicles. While these systems are still in development, they will sometimes fail, even as they keep getting better. The role of the human co-pilot or safety driver is critical. For example, there are already driver-facing cameras in many vehicles that monitor whether a human driver is paying attention or distracted. This is key to ensuring that, in a case where a semi-autonomous vehicle must pass control back to a human driver, the person is actually ready and able to take over safely. This collaboration between AI and humans will be critical to ensure safety as autonomous vehicles continue to take to the streets around us.
5. AI needs emotional intelligence.
AI today has a high IQ but a low EQ, or emotional intelligence. But I do believe that the merger of EQ and IQ in technology is inevitable, as so many of our decisions, both personal and professional, are driven by emotions and relationships. That's why we're seeing a rise in relational and conversational technologies like Amazon Alexa and chatbots. Still, they're lacking emotion. It's inevitable that we will continue to spend more and more time with technology and devices, and while many (rightly) believe that this is degrading our humanity and our ability to connect with one another, I see an opportunity. With Emotion AI, we can inject humanity back into our connections, not only enabling our devices to better understand us, but also fostering a stronger connection between us as individuals.
While I am an optimist, I am not naive.
Following the panel, I received an incredible amount of positive feedback. The audience appreciated the optimistic point of view. But that doesn't mean I am naive or disillusioned. I am part of the World Economic Forum Global Council on Robotics and AI, and we spend a fair amount of our time together as a group discussing ethics, best practices, and the like. I realize that not everyone is taking ethics into consideration, and that is definitely a concern. I do worry that organizations and even governments that own AI and data will have a competitive advantage and power, while those who don't will be left behind.
The good news is: we, as a society, are designing those systems. We get to define the rules of the game.
AI is not an existential threat. It's potentially an existential benefit--if we make it that way. At the screening, there were so many young people in the audience watching. I am hopeful that the documentary renews our commitment to AI ethics and inspires us to apply AI for good.