I just got back from the World Economic Forum's Global Future Councils meeting in Dubai. I serve on the Future of Artificial Intelligence and Robotics Council, where we explore how AI will impact industry, governments and society, and propose governance models that ensure the benefits of AI are maximized and its risks are understood and mitigated.

AI is quickly becoming mainstream, integrated into all aspects of our lives. There is now a vibrant ecosystem of corporations, startups, investors, researchers and ethicists applying AI. Countries like China and the United Arab Emirates are investing heavily in AI and see it as a competitive advantage.

In previous years, the topic that was top of mind for the council was the public fear of AI as an existential threat to humans. That is no longer the biggest concern. This year, our focus was on more immediate issues: the ethics, safety, data privacy, bias and accountability of AI.

My own work falls into a subset of AI focused on building artificial emotional intelligence, or Emotion AI for short. In the not-so-distant future, we will interact with our devices the same way we interact with each other: through conversation, perception and emotion. Our devices will be emotion-aware and have empathy. Emotion AI has many applications, from increasing safety in cars to helping people with mental health issues. At the same time, it brings risks that need to be addressed.

Today, AI ethics is the new "Green"

Over twenty years ago, a number of companies, including giants like Walmart, embarked on a journey to go green. Despite controversy over whether this was a smart business decision for its shareholders, Walmart committed to minimizing waste and optimizing its products and operations while preserving natural resources. Not only has Walmart saved billions of dollars by going green, but it has also added entire new product lines, such as energy-efficient lighting, that drove top-line revenue as well. Moreover, it became the go-to retailer for environment-conscious consumers, giving it an edge over competitors.

What Walmart did was not only the "right" thing to do but also the smart thing to do. Many companies have since followed suit.

On the path to AI's ubiquity, there will be many ethics-related decisions that we, as AI leaders, need to make. We have a responsibility to drive those decisions, not only because it is the right thing to do for society but also because it is the smart business decision.

Recently, Affectiva joined the Partnership on AI to Benefit People and Society. Founded by Amazon, Apple, Google/DeepMind, Facebook, IBM and Microsoft, the Partnership on AI is a coalition of nonprofit and for-profit organizations that are committed to addressing the moral and social implications of AI. We are excited to be part of this organization, driving the agenda for AI ethics.

Here are five ways we prioritize ethics, and why doing so is smart business:

Make ethics a core value

Integrity is one of Affectiva's core values. This means we hold ourselves to the highest standards in all we do, especially in our science and products. We respect people's privacy and ask for consent from the people who share their emotions with us. We are transparent about how our algorithms work, what works well and what needs improvement. And we do not engage with applications of our technology that conflict with our core values.

This not only helps us attract and retain talent but also signals to potential partners and investors that ethics matters to us, thereby building trust.

Implement ethical guidelines in how we develop and validate AI systems

At Affectiva, our algorithms use state-of-the-art machine learning approaches, including deep learning, transfer learning, reinforcement learning and more. It is important to be purposeful and thoughtful about the data used to train and validate AI algorithms. How do we ensure that the teams designing and training these systems do not replicate the social injustices that exist in society? Our team is developing internal guidelines to ensure we are not accidentally building bias into our algorithms. In addition, we do not treat our neural networks as black boxes; we look under the hood to understand what exactly each network is learning.
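As an illustration of the kind of check such guidelines can call for, the sketch below slices validation accuracy by demographic subgroup to surface hidden bias. The data, column names ("gender", "age_group") and model here are hypothetical stand-ins, not our production pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in data: in practice this would be a held-out validation set
# with real features, labels and demographic metadata.
n = 1000
X = rng.normal(size=(n, 8))
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
meta = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n),
    "age_group": rng.choice(["18-30", "31-50", "51+"], size=n),
})

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# Slice accuracy by demographic subgroup; a persistent gap between
# groups is a red flag that the data or model encodes bias.
for col in ["gender", "age_group"]:
    for group, idx in meta.groupby(col).groups.items():
        acc = accuracy_score(y[idx], preds[idx])
        print(f"{col}={group}: accuracy={acc:.3f} (n={len(idx)})")
```

A persistent accuracy gap between subgroups is a signal to rebalance the training data or revisit the model before shipping.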

Make ethics, not just market size, a criterion for deciding on markets

The applications of Emotion AI are endless. It is being used for early diagnosis of depression and Parkinson's disease, for suicide prevention, and to assist people on the autism spectrum. The automotive industry has embraced it as automakers look to redefine safety and the consumer experience inside the vehicle. Educators are integrating emotion data to improve online learning outcomes.

But like any novel technology, there is potential for abuse. We often debate internally which applications we should focus on and which opportunities we should say no to. Market size is not our only criterion. Equally important is whether an application meets our ethical standards. For example, we have turned down use cases that violate people's privacy and trust, such as a surveillance application in which people are not aware they are being monitored.

Engage in and lead the public discourse on AI ethics

Tim O'Reilly recently argued that companies do not need Chief Ethics Officers because ethics is the job of the CEO. As such, I see it as my responsibility, and my team's, to help shape the future of this emerging category of AI.

In September 2017, we hosted the first-ever Emotion AI Summit, an industry event aimed at advancing the dialogue around the future of AI, especially as it pertains to building machines that understand humans. We had 30 speakers and over 300 attendees from corporations, startups, research, design and ethics. One panel, "The Future of AI: Ethics, Morality and the Workforce," moderated by Eric Schurenberg, Editor in Chief and President of Inc. Magazine, discussed the future of AI in the context of ethics. Panelists included Richard Yonck, futurist and author of "Heart of the Machine," and John C. Havens, Executive Director of The IEEE Global AI Ethics Initiative. This panel, which covered not only ethics but also labor issues, was one of the most popular of the day.

Teach AI ethics to engineering and computer science students

Finally, I believe there needs to be a mandatory ethics curriculum in computer science, AI, robotics and engineering programs, one that ensures the designers of next-generation AI systems and devices are well aware of the implications of their work.

It's an exciting time to be in AI. Personally, I worry that AI could widen the digital divide and deepen inequality, but I am also a strong believer that AI can narrow that divide if we prioritize the conversation around ethics and educate future generations to apply AI with social justice and the broader good in mind.

Please reach out if any of these topics resonate with you. We're always keen to collaborate in ways that raise the ethical standard for the AI industry as a whole.

Published on: Nov 15, 2017