Theoretical physicist and author Stephen Hawking recently said that "artificial intelligence could be the greatest disaster in human history, unless humans learn to mitigate the risks posed." He joins other notable public figures who have warned of the dangers of artificial intelligence, with many fearing it may replace jobs and, at Hawking's extreme, wipe out the entire human race.
But this is a misleading scare tactic not grounded in fact. It is an emotional reaction to a misunderstanding of what artificial intelligence is, since most people have only a surface-level understanding of the technology. The robot takeover that Hollywood movies use to showcase artificial intelligence just isn't realistic.
Artificial intelligence has a very broad definition and two distinct forms. The first, Artificial Narrow Intelligence (ANI), is focused on specialized tasks and aims to improve human productivity and efficiency. For example, a startup called MintLabs uses ANI to visually recognize key features and quickly analyze large volumes of brain images, with accuracy as high as that of the most experienced neuroimaging doctors.
ANI most closely describes the current state of artificial intelligence as we know it today. The second form, Artificial General Intelligence (AGI), aims to replicate human-level intelligence and is still likely many years away. In contrast to ANI, which applies only to a specific task, AGI can be used for all cognitive tasks. For example, a child, even though its brain is still developing, can not only identify different objects and recognize spoken words, but can also learn to play hide-and-seek or ride a bike. We are not yet at the point where artificial intelligence can replicate this kind of general intelligence.
So let's not mistake the current state of artificial intelligence for anything more than what it really is. In its current state, artificial intelligence is not an actual threat to humans. Deep learning, the algorithmic approach behind most of the recent excitement in AI, poses little danger or risk to humans by the very nature of its system design.
This is because humans remain in charge of, and in control of, how artificial intelligence develops and advances. Those deeply involved in artificial intelligence know how far we are from developing an advanced deep-learning algorithm that can mimic human intelligence, but to the average person this isn't common knowledge. Instead, the Hollywood image of artificially intelligent robots with the power to take over the world and the human race prevails.
All of us in the artificial intelligence and technology industry have a responsibility to raise awareness about what artificial intelligence technologies are actually capable of, based on how they are built.
People often describe deep learning as an imitation of the brain, but the difference between current deep learning and actual human intelligence is huge.
To better understand this difference, I'll break a deep-learning system down into a few main components. First, a deep-learning system must have a target function, and that function must be learned. For self-driving cars, for example, the target function maps the surrounding environment to a choice of where to move. Second, training data must be prepared: a set of labeled data points that teach and reinforce the target function. When Google's self-driving cars are out on the road today but not yet available for purchase, it's because they are collecting labeled training data. That labeled training data then helps teach the car the right moves under various road conditions.
Third, the performance element, the component of the system that actually takes action, must be defined. For example, the car should stop immediately when a pedestrian is crossing the road. Finally, there is the hypothesis space, which defines the set of possible functions the system is able to learn.
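The four components above can be sketched in a few lines of code. This is a deliberately toy "should the car brake?" decision; the threshold rule, candidate values, and training examples are all illustrative assumptions, not how any real self-driving system works.

```python
# 1. Hypothesis space: the set of functions the system is allowed to learn.
#    Here, simple threshold rules on distance-to-obstacle (in meters).
def make_hypothesis(threshold):
    return lambda distance: distance < threshold  # True means "brake"

# 2. Training data: manually labeled examples of (distance, should_brake).
training_data = [(2.0, True), (5.0, True), (12.0, False), (30.0, False)]

# 3. Learning the target function: pick the hypothesis that best fits
#    the labeled data (an exhaustive search over candidate thresholds).
def learn(data):
    best, best_errors = None, None
    for threshold in [1, 5, 10, 20, 40]:
        h = make_hypothesis(threshold)
        errors = sum(h(d) != label for d, label in data)
        if best_errors is None or errors < best_errors:
            best, best_errors = h, errors
    return best

target_function = learn(training_data)

# 4. Performance element: the component that takes action using the
#    learned function.
def drive(distance):
    return "brake" if target_function(distance) else "continue"

print(drive(3.0))   # close obstacle  -> "brake"
print(drive(25.0))  # distant obstacle -> "continue"
```

Notice that every piece here — the candidate thresholds, the labeled examples, the action taken — was written by a human; the "learning" only selects among options the designers laid out in advance.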
Defining the Progression
Deep learning's current success lies mainly in supervised learning, where a large set of data is labeled for training. It's important to point out that all of the above elements are manually prepared; they sit outside the control of the "artificial intelligence program" and are strictly constrained by the system design. The program therefore cannot modify any component of the system, and it cannot learn a function outside its hypothesis space. For this reason, an artificial intelligence program such as MintLabs cannot autonomously learn how to analyze other types of health images.
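A toy classifier makes this constraint concrete: a supervised program can only ever output labels from its manually prepared training set. The feature vectors and label names below are invented for illustration.

```python
# A 1-nearest-neighbor classifier trained on two (made-up) brain-scan labels.
training_data = [
    ((0.9, 0.1), "healthy-brain-scan"),
    ((0.2, 0.8), "abnormal-brain-scan"),
]

def classify(features):
    # Return the label of the closest training example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: distance(ex[0], features))
    return label

# Even for an input that is nothing like a brain scan, the output is still
# drawn from the two labels the program was given - it cannot invent a
# "chest-x-ray" category on its own.
print(classify((0.5, 0.5)))
```

However sophisticated the model, the same principle holds: the label set, like the hypothesis space, is fixed by its human designers.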
Unsupervised learning, which more closely resembles human intelligence, does not require a lot of labeled data, and it is much harder to pull off. Under supervised learning, a robot needs to see millions of labeled cat pictures before understanding what a cat is; we, as humans, do not need to see millions of pictures of a cat to understand what it is. To date, there is no effective algorithm for this kind of unsupervised learning, and there really isn't a roadmap to develop one.
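The flavor of unsupervised learning that does work today is far more modest than human understanding. The sketch below groups unlabeled numbers with a tiny 1-D k-means (k = 2), using only the standard library; the data values are made up for illustration.

```python
def kmeans_1d(points, iterations=10):
    # Naive initialization: start the two centers at the extremes.
    centers = [min(points), max(points)]
    for _ in range(iterations):
        clusters = [[], []]
        for p in points:
            # Assign each point to its nearest center.
            nearest = min((0, 1), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
groups = kmeans_1d(data)
# The algorithm discovers two groups without ever being given a label -
# but unlike a child, it cannot say what either group *is*.
print(groups)
```

Clustering like this finds structure in unlabeled data, but it produces anonymous groups, not concepts; bridging that gap is the unsolved problem.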
The path to imitating true human intelligence is quite long. Computer scientists would need a significant breakthrough in the algorithms themselves, beyond adding more data points or more computational power. Robots aren't going to develop minds of their own any time soon, and they aren't going to be able to make decisions outside their 'learned' functions.
The general population needs to know that the current state of deep-learning algorithms and systems poses little danger of becoming "the greatest disaster in human history," as Stephen Hawking described. We're still far from the real danger depicted in science fiction, if that danger ever comes at all.