Have you ever been asked to tip yourself backwards and trust that the person behind you was going to catch you before you came crashing to the ground?

Most people can probably reflect on a time, either in school or during their career, in which they participated in a trust fall. While trust falls may be outdated, team building exercises are useful in establishing a sense of camaraderie and community within a group, and, just as important, they establish trust.

Trust is a fundamental principle in human relationships. We cannot effectively work or live with people we don't trust, and who don't trust us back. Trust is built over time, but eroded quickly. That is especially true of relationships among colleagues, customers, partners and others. In business, without trust, there is little foundation.

The same is true for humans and technology. In particular, artificial intelligence (AI) is quickly becoming ingrained in the fabric of our lives, acting on our behalf. It helps us become more productive, and maybe even happier and healthier. AI is taking on tasks traditionally done by humans, in the workplace and beyond, from acting as our personal assistants and helping hire our next co-worker, to driving our cars and assisting with our healthcare.

As this increasingly becomes the norm, we're establishing a partnership with AI that is rooted in mutual trust. This new social contract represents the need not only for humans to trust AI, but for AI to trust humans.

Last week, the theme of our second Emotion AI Summit in Boston, MA, was Trust in AI. We talked a lot about what reciprocal trust between humans and machines really means, as well as how we as an ecosystem can build and deploy AI that is worthy of our trust.


With 43 Emotion AI Summit speakers across different industries, such as automotive, advertising, human resources, social robotics, health and more, we had an exciting opportunity to design this new social contract between humans and AI. We set the rules for this partnership, and even offered practical examples of how it will better our workplace and business relationships.

Here are the five tenets that constitute this new social contract, including examples of why this matters for today's business leaders:

  1. The new social contract is reciprocal

There are many examples of why mutual trust between people and AI matters. One is that of semi-autonomous vehicles, in which the car's AI must confirm that the human driver is not drowsy, intoxicated or otherwise distracted before handing control of the vehicle back to them. During my keynote, I spoke about this example and others that will shape just how useful and ingrained in our daily lives AI will become. This includes how customers increasingly interact with support via company chatbots, and how heavily healthcare professionals can rely on AI to enrich patient care.
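The handover logic described above can be sketched very simply. This is a hypothetical illustration, not Affectiva's or nuTonomy's actual system: the score names and the threshold value are invented for the example, assuming an in-cabin monitoring system that outputs driver-state estimates between 0 (alert) and 1 (impaired).

```python
# Hypothetical sketch: a semi-autonomous vehicle gates the handover of
# control on driver-state estimates from an in-cabin monitoring system.
# Score names and the threshold are illustrative only.

def safe_to_hand_over(drowsiness: float, distraction: float,
                      threshold: float = 0.3) -> bool:
    """Return True only if both driver-state scores (0 = alert,
    1 = impaired) fall below the alert threshold."""
    return drowsiness < threshold and distraction < threshold

print(safe_to_hand_over(0.1, 0.2))  # alert driver -> True
print(safe_to_hand_over(0.6, 0.1))  # drowsy driver -> False
```

In a real vehicle the decision would of course combine many more signals over time, but the core idea is the same: the AI checks the human's state before trusting them with control.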

On the panel, "Building Trust in Next Generation Vehicles," nuTonomy (acquired by Aptiv) President & Co-Founder Dr. Karl Iagnemma spoke about how trust can be enhanced by establishing a shared mental model between humans and AI, complete with shared values. In the same way that emotional intelligence, or EQ, helps professionals build better relationships and supports better decision making in the workplace, EQ and a common ground between humans and AI will become hugely important.

  2. The new social contract is built on emotional intelligence

At the Summit, I also spoke about unfortunate instances in which this notion of trust in AI has been jeopardized. We've likely all seen the stories of chatbots generating racist speech on Twitter, and facial recognition failing to detect darker-skinned faces, especially those of women, to name a few.

We must be willing to learn from these mistakes and move forward. Dr. Peter Weinstock, the executive director and chair of the Boston Children's Hospital Simulator Program, spoke about "emotive medicine" as the "4th vital sign" and the role that emotions, particularly fear, anxiety and stress, play when doctors and nurses make decisions. He and his team are using simulator technology and other science to decrease fears and increase trust, particularly in high-risk environments.

Trust between humans and AI will be strengthened in much the same way. By building algorithms with EQ, AI will have the ability to understand how someone feels and react accordingly, not unlike how a trusting relationship between two humans works.  
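What "reacting accordingly" might look like can be sketched with a toy example. This is a hypothetical illustration, not an Affectiva product: the emotion labels and canned replies are invented, assuming some upstream model has already classified the person's emotional state.

```python
# Hypothetical illustration of an "algorithm with EQ": a support chatbot
# adjusting its reply based on a detected emotion label. The labels and
# replies are invented for the example.

def respond(detected_emotion: str) -> str:
    """Pick a response style that matches the person's emotional state."""
    responses = {
        "frustrated": "I'm sorry this has been difficult. Let me escalate this for you.",
        "confused": "Let me walk you through this step by step.",
        "happy": "Great! Is there anything else I can help with?",
    }
    # Fall back to a neutral reply for any unrecognized state.
    return responses.get(detected_emotion, "How can I help you today?")

print(respond("frustrated"))
```

The hard part, of course, is the detection itself; but even a simple mapping like this shows how sensing emotion changes the interaction, much as it does between two people.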

  3. The new social contract requires diverse teams

Diversity was a key theme at the Emotion AI Summit. Of the 43 speakers, over a third were women. The 250 attendees were also quite diverse and balanced in terms of gender, domain expertise and cultural background.

During the panel, "Ethics in AI: Addressing Diversity, Inclusion and Bias," Dr. Rumman Chowdhury, senior principal of artificial intelligence at Accenture, gave a great example of why even the teams building AI must be diverse. During the winter in the Nordics, it's so cold that Apple iPhones often die when they're used outside. Those living in the region bring a charger with them wherever they go, so they can reboot their phones the next time they're indoors.


Her point was that people solve the problems they know. Having to worry about temperatures so frigid that a phone turns off is not a problem that those living in Cupertino, CA, where Apple's headquarters is, typically face. And yet, many of Apple's customers live in climates as frigid as the Nordics'.

In the context of Dr. Chowdhury's example, it is important that we are inclusive when designing AI algorithms. Only when tech business leaders prioritize hiring those with diverse backgrounds and perspectives, will AI represent all of the communities that it wants to reach and serve.

  4. The new social contract requires diverse data

As a CEO, I take diversity very seriously, not only in our team, but also in our data. Our technology detects subtle and nuanced human emotions and cognitive states from both face and voice, and we cannot afford for our algorithms to be biased, failing to accurately track people of different ages, genders and ethnicities. We take this into account throughout our pipeline, from how we acquire and annotate data, to how we build and validate the algorithms. This is not just an ethical imperative, it is also table stakes for running our business. Put simply, our Emotion AI technology will not work in the real world if it's biased.
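One concrete form of the validation step described above is checking whether a model's accuracy diverges across demographic groups. The sketch below is a generic illustration, not Affectiva's internal tooling; the group names, labels and sample data are made up.

```python
# Illustrative sketch of per-group validation: compute a model's accuracy
# separately for each demographic group and measure the largest gap.
# Group names and sample records are invented for the example.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(records):
    """Largest difference in accuracy between any two groups."""
    accs = accuracy_by_group(records).values()
    return max(accs) - min(accs)

sample = [
    ("group_a", "smile", "smile"), ("group_a", "smile", "neutral"),
    ("group_b", "smile", "smile"), ("group_b", "neutral", "neutral"),
]
print(accuracy_by_group(sample))  # {'group_a': 0.5, 'group_b': 1.0}
print(max_accuracy_gap(sample))   # 0.5
```

A large gap like the one in this toy sample is the kind of signal that would send a team back to rebalance their training data before shipping.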

Other business leaders should consider how diversity will improve their own R&D, product development and reputation. How could more diverse data, information and perspectives improve product functionality, the way you market to customers, and trust in your brand?

  5. The new social contract is ethical

We can draw inspiration for ethical AI from how the best brands build trust. Graham Page, director of offer and innovation at Kantar Millward Brown, spoke at the Summit about how Amazon was one of the leaders in Kantar's 2018 BrandZ Top 100 Most Valuable Global Brands ranking and report. Despite its use of AI and access to vast amounts of consumer data, Amazon's focus is providing utility to its customers.

Kantar Millward Brown found that the most trusted brands are responsible with the way in which they handle data, such as being transparent about how customer data is being used, making data security a top priority, and never using data to exploit users and consumers. In other words, our favorite tech products, particularly those that are rooted in data and AI, are only as ethical and trustworthy as the company building them.

As you might be able to tell, last week's Emotion AI Summit was a day of lively discussions, in which we talked about what it means to trust AI and how we as an ecosystem can help make that happen. The way in which so many innovators and business leaders with diverse backgrounds came together to offer their perspectives on the topic was truly inspiring and left me feeling very hopeful and excited for the future.

I believe we have the power to be agents of change. I invite you all to ask questions, challenge each other, and work together as a community to help define the social contract that underpins human and AI relationships, and apply it to your business practices.

You might not be doing trust falls with AI at your next company team building day, but we'll get there.