2016 was a banner year for artificial intelligence. AlphaGo's victory over Lee Sedol was perhaps the most important moment, but we also saw advancements in self-driving cars, the continued embrace of bots and personal assistants for retail, adoption of and competition around in-home assistants like Amazon Echo, along with frequent, sometimes weekly, breakthroughs on the academic side, mainly relating to machine learning. With the biggest tech companies in the world--Google, Facebook, Amazon, Microsoft, and others--devoting more and more resources to AI, the momentum is only going to increase.
For those of us who've been in the field for a while, it's an incredibly exciting time. AI and machine learning have come to the fore in the recent past, but we believe that the next few years hold even more far-reaching successes.
We will start to accept that AI can sometimes be wrong
In 1994, a man named Thomas Nicely found a bug in one of Intel's processors that could produce incorrect results for certain division operations. And though Intel calculated that the average user would hit this bug once every 27,000 years, the press and consumers reacted very negatively, and Intel eventually had to recall the processors at a cost of roughly half a billion dollars.
As humans, we fundamentally expect machines to be flawless, and Intel's processor violated this basic belief. AI, by contrast, is inherently non-deterministic--a well-trained model is mostly right, but sometimes randomly wrong--and that non-determinism is actually AI's core strength. AI can often solve problems for which people don't know how to program a precise solution. The more "human-level" problems (voice recognition, image recognition) AI solves, the easier it will become for us to relate to machines making mistakes, since we make similar mistakes ourselves.
This will help us adapt to this new world where machines make mistakes, just like us! (Not to mention it won't cost $475 million when they're wrong once in a great while.) --Andreas Gal
Expect false negatives in the Turing Test
People ask me a lot about the Turing Test. And what I keep saying is that this question will simply not feel all that important as we start texting and chatting with bots at call centers or online. At some point, we will not even care who gets the job done, just so long as the job is done well. Meaning, if we're chatting with a customer service agent, it won't matter whether it's a person or a bot, so long as our requests are handled. And increasingly, we won't know whether we're talking to a person or a machine. I suspect this means that the errors in the Turing Test will not be false positives (labeling a computer as a person) but false negatives (interacting with a person while thinking they are actually a computer). --Nello Cristianini
Increasingly, AI will author itself
Recently, the biggest breakthroughs in AI have come about by virtue of AI experts and PhDs designing elaborate deep network architectures for tackling audacious problems. This trend will continue. But a new class of approaches will enlist AI itself to come up with the best architectures for tackling those problems. One approach showing early promise in this area is evolutionary computation (EC), which, interestingly, can also be used as a catalyst to massively distribute the process.
Keep in mind: EC has already been used to automatically generate regular computer programs (genetic programming), not just AI programs. This approach will continue gaining attention because the result of genetic programming is human-readable, which can help AI systems meet the transparency regulations recently introduced in Europe. --Babak Hodjat
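To make the idea concrete, here is a toy genetic-programming sketch (entirely illustrative, not any particular production system): a population of random expression trees is evolved by selection and mutation toward a hidden target function, and the surviving program is a human-readable expression.

```python
import random

random.seed(0)

OPS = ['+', '-', '*']
TERMINALS = ['x', 1, 2, 3]

def random_tree(depth=3):
    # Grow a random expression tree: either a terminal or (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def to_str(tree):
    # Genetic programming's output stays human-readable.
    if not isinstance(tree, tuple):
        return str(tree)
    op, left, right = tree
    return f"({to_str(left)} {op} {to_str(right)})"

def fitness(tree):
    # Sum of squared errors against the hidden target x*x + x (lower is better).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace one randomly chosen subtree with a fresh random one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

pop = [random_tree() for _ in range(200)]
for gen in range(50):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break
    survivors = pop[:50]  # truncation selection: keep the fittest quarter
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(pop, key=fitness)
print(to_str(best), fitness(best))
```

Because evaluation of each candidate is independent, the fitness loop is also trivially parallelizable, which is the "massively distribute" property mentioned above.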
AI will fundamentally alter the technology job market
For the last decade, computer programmer has been among the most in-demand and highest-paid job profiles in the digital economy. Every year, programmers write billions of lines of code, designing and implementing algorithms and programs to solve new problems. But AI will fundamentally change this. That's because AI models can be trained to solve arbitrary new problems without a single line of new code being written. What that means for computer programmers is a changed landscape, where some of their work shifts to AI systems or to data scientists. --Andreas Gal
AI-designed products will come to market
Imagine being able to buy the pair of shoes best suited to you simply by interacting visually with an existing inventory of shoes. Imagine every one of those interactions training a designer to know precisely what you like. And imagine those shoes being one of a kind, unique to your preferences and sensibilities. That's coming.
Today, AI has the ability to identify features of products in a brand's inventory. AI can also broker the interactions between consumers and that inventory, helping them discover the products they need. And, by tracking which feature sets are most popular among different user groups, AI can come up with designs for new products that have a high likelihood of being popular with the public at large. --Babak Hodjat
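The feature-tracking step described above can be sketched in a few lines (the purchase log and feature names here are invented for illustration): tally the most popular value of each feature across a user group, then assemble those winners into a candidate design.

```python
from collections import Counter

# Hypothetical purchase log: each entry is the feature set of a shoe
# that a customer in some user group bought.
purchases = [
    {'color': 'black', 'toe': 'round',   'sole': 'flat'},
    {'color': 'black', 'toe': 'pointed', 'sole': 'flat'},
    {'color': 'red',   'toe': 'round',   'sole': 'flat'},
    {'color': 'black', 'toe': 'round',   'sole': 'heel'},
]

def popular_design(purchases):
    # For each feature, count which value appears most often across the
    # group, then combine those winners into one candidate design.
    counts = {}
    for item in purchases:
        for feature, value in item.items():
            counts.setdefault(feature, Counter())[value] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

print(popular_design(purchases))
# → {'color': 'black', 'toe': 'round', 'sole': 'flat'}
```

A real system would of course learn feature representations from images and model preferences per user rather than simple counts, but the aggregation idea is the same.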
The interface between AI and the world--including us--will keep on improving
In addition to working on the AI itself, 2017 will be the year when we really start tackling the interface between AI and the world. This will include conversational interfaces, speech, vision, question answering, text understanding, and a whole lot more.
Take bots, for example. The technological underpinnings for bots are still improving, of course, but the interface is where we'll start seeing real differentiation. This means that "content" for bots (skills, capabilities) will become a strategic issue. Amazon has opened the Alexa platform to third parties, meaning they can create skills for Alexa, and Google has hired comedians to improve interactions with Assistant, which will also ship in Google Home and Allo. The race to introduce content into voice conversational assistants will result in more meaningful interfaces, since how useful an assistant is ultimately depends on what it can connect to. --Nello Cristianini
AI will enable devices to have digital "ears and eyes"
Humans perceive their environment principally with their eyes and ears. For machines to interact with us, to understand us, and to be more easily understood by us, they will start relying more and more on the same senses we use to understand our surroundings. As an example, today, fire detectors use a radioactive isotope to detect smoke particles in the air. In the future, the same fire detector will likely have several cameras backed by AI trained to detect fire visually. This will lead to earlier detection and a safer world for humans. --Andreas Gal
AI running on devices will actually reduce privacy concerns
Imagine if your security system could tell when you walk in and not force you to hurriedly switch the alarm off every time. Currently, to make this work, in-home systems need to send raw data (say, from cameras in the home) to the cloud in order to sense what is happening around them. But transmitting those images raises very legitimate privacy concerns.
By processing the raw data and making decisions at the point of need--instead of sending it to the cloud, where the data can be vulnerable--AI can address those concerns. And as the processing power and memory available on IoT devices now allow us to run AI directly on in-home devices, solutions like this will become more and more commonplace, as required response times for such decisions decrease and applications encapsulate the decision/action loop locally. --Babak Hodjat
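A minimal sketch of that local decision/action loop (all names here are hypothetical, and the simple feature comparison stands in for a real on-device model): the raw camera frame never leaves the device; at most a coarse event does.

```python
def looks_like_resident(frame, reference, threshold=10.0):
    # Stand-in for an on-device recognition model: compares a frame's
    # feature vector to a stored reference by mean absolute difference.
    # A real system would run a trained neural network here instead.
    distance = sum(abs(a - b) for a, b in zip(frame, reference)) / len(reference)
    return distance < threshold

def handle_frame(frame, reference, alarm):
    # The decision/action loop runs entirely on the device: the raw
    # frame is never transmitted, only a coarse event would leave home.
    if looks_like_resident(frame, reference):
        alarm['armed'] = False                  # disarm locally
        return {'event': 'resident_entered'}    # no image in the event
    return {'event': 'unknown_person'}

alarm = {'armed': True}
reference = [128, 64, 200, 50]          # stored feature vector
event = handle_frame([130, 60, 198, 52], reference, alarm)
print(event, alarm)
# → {'event': 'resident_entered'} {'armed': False}
```

The privacy win is structural: only `event`, not `frame`, would ever be sent upstream, so there is no raw footage to leak in transit or in cloud storage.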
Regulation of AI will start becoming a topic
As 2017 sees more and more applications of AI, expect governments to notice. After all, AI will start to affect employment. Task crowdsourcing will continue to grow, and so will the automation of call centers, logistics, and some aspects of retail. The concept shop developed by Amazon could be the wave of the future. Driverless cars will gain further acceptance. And, well, you get the idea.
As a result, regulators will start looking a bit more closely, and pressure groups may start campaigning for closer regulation of automated technologies--not just for their impact on employment, but for a range of issues. Another trigger for regulation might be the aftermath of the fake-news story: are we going to hold platforms like Facebook responsible for their content? And if so, what will their technological response be? Are we going to like a technology that enables them to remove content considered unsuitable? --Nello Cristianini