As recently as a couple of decades ago, artificial intelligence was little more than science fiction. When it started to become a reality, the excitement of it all was overshadowed by fear that smart machines would make humans obsolete.
Today, that fear is tempered by the nearly endless possibilities of AI. Companies like Amazon and Google have embraced innovative AI algorithms, and educators are using the technology to overhaul an increasingly burdened educational system. In Beijing, researchers have developed a system called BioMind that uses AI to diagnose cancer with unprecedented accuracy.
The possibilities may be endless, but the fear of AI still persists in numerous ways. You might love the convenience of telling Alexa what to do, but what if it's also listening to your children when it shouldn't be? If a Tesla vehicle is in an accident, how can you trust that the AI system isn't at fault? These fears are compounded by the fact that, unlike most other technologies, very few people can easily explain exactly how AI works.
Many AI systems are built on artificial neural networks: computing systems designed to analyze vast amounts of information and learn to perform tasks in much the same way your brain does. The algorithms grow through machine learning and adaptation, and even their initial designers don't always understand the specific ways in which they evolve.
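To make that idea concrete, here is a deliberately tiny sketch of the learning loop described above: a single artificial "neuron" that starts with random weights and gradually adjusts them until it reproduces a logical AND gate. This is a toy illustration of the general principle, not a depiction of any production system mentioned in this article.

```python
import math
import random

def train_and_gate(epochs=5000, lr=0.5, seed=0):
    """Train a single artificial neuron to mimic a logical AND gate."""
    random.seed(seed)
    w1, w2, b = random.random(), random.random(), random.random()
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Forward pass: weighted sum squashed through a sigmoid.
            out = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            # Backward pass: nudge the weights to reduce the error.
            err = out - target
            grad = err * out * (1 - out)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

weights = train_and_gate()
# After training, the output is high only when both inputs are 1.
print(round(predict(weights, 1, 1)))
print(round(predict(weights, 0, 1)))
```

Nobody hand-coded the rule for AND here; the weights emerge from repeated exposure to examples, which is why the final numbers inside a trained network can be hard for even its designers to interpret.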
The implications of letting artificially designed brains make critical decisions are profound. In addition to diagnosing cancer, AI algorithms could potentially be used to guide more comprehensive applications such as municipal project planning, provision of public services, and predictive crime models in urban areas. But, if you don't understand a machine's thought process, how can you trust its decisions in those domains?
AI still has a long way to go toward being fully trustworthy and free of bias. But the good news is that any trust you do invest in it isn't unfounded: this tech is truly capable of making our lives better. As your relationship with AI grows, keep the following in mind as you determine just how much you should trust the robots:
1. Rather than taking jobs, AI has improved them.
Mistrust was a natural response to the prospect of AI taking over jobs; after all, people's livelihoods were at stake. But we've since learned that automating jobs often leads to different, more advanced opportunities for human employees. Rather than making humans obsolete, implementing AI is paving the way for them to broaden their skills. In fact, a recent Gartner report predicts that while AI may eliminate 1.8 million jobs by 2020, it will create 2.3 million new positions.
So, remember: No matter how much potential AI holds, humans are the ones tapping into it. That means the more AI takes over, the more roles will open up for humans to optimize and maintain it. Or as Gartner's research director, Manjunath Bhat, puts it, "Robots are not here to take away our jobs, they're here to give us a promotion." He predicts that positive impact will continue to be the norm for AI's future.
2. Humans are still accountable for AI systems.
Building trust takes time, transparency, and accountability, especially for technology that's designed to think like humans. K.R. Sanjiv, CTO at Wipro Limited, emphasizes that until AI is fully explainable, humans will remain accountable. For instance, doctors interpret their patients' AI-derived pathology reports, and airplane autopilot systems alert human pilots to take over in emergencies. "In each of these cases," Sanjiv explains, "we allow humans to resolve the uncertainty."
In addition, keep in mind that AI algorithms are designed to think like humans. Much like your own brain, the appropriate algorithm will sift through vast stores of data and identify patterns it can use to make predictions. Many attempts are unsuccessful or unsatisfactory because the humans building the system feed it biased or inaccurate information. AI is smart, but like all technology, it's a tool: one that you can trust to do what people tell it to do.
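The point about biased input data can be shown with a deliberately simplified sketch. The "model" below just memorizes the most common past decision for each group, and the skewed hiring history is entirely hypothetical, but it illustrates how a system trained on biased records will faithfully reproduce that bias rather than correct it.

```python
from collections import Counter

def train_majority_model(records):
    """A toy 'model' that predicts the most common label seen per group.
    It learns whatever the training data contains, bias included."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Hypothetical, deliberately skewed training data: past decisions
# favored group "A" regardless of individual merit.
biased_history = (
    [("A", "hire")] * 9 + [("A", "reject")] * 1
    + [("B", "hire")] * 2 + [("B", "reject")] * 8
)

model = train_majority_model(biased_history)
print(model)  # The historical bias is reproduced, not corrected.
```

Real machine-learning systems are far more sophisticated than a frequency count, but the underlying lesson is the same: the quality of an AI system's decisions is bounded by the quality of the data people give it.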
3. People can come to trust robots by getting to know them.
Even with enough time and study, you may never know the specifics of how an AI algorithm grows and learns. That uncertainty is why many people still don't trust AI, and why some organizations have abandoned their attempts to adopt it rather than work out the kinks. But you can learn to trust the tech more by understanding on some level how AI works and by using it regularly.
Companies that utilize AI are working toward this goal by becoming more transparent about what decisions the technology makes for their customers, and how. Research backs up this approach; one study even found that people who were given the ability to modify an algorithm were more satisfied with the robot's resulting decisions and said they were more likely to use AI again.
Artificially intelligent personal assistants, diagnostic devices, and automobiles are normalizing AI enough that most of the initial fears of the technology are fading. As you get to know it better, AI may go from something you're not quite sure about to becoming your greatest asset.