Science fiction futures generally come in two flavors -- utopian and dystopian. Will tech kill routine drudgery and elevate humanity à la Star Trek or The Jetsons? Or will innovation be turned against us in some 1984-style nightmare? Or, worse yet, will the robots themselves turn against us (as in the highly entertaining Robopocalypse)?
This isn't just a question for fans of futuristic fiction. Right now, two of tech's smartest minds -- Elon Musk and Mark Zuckerberg -- are locked in a war of words over whether artificial intelligence is more likely to improve our lives or destroy them.
Musk is the pessimist of the two, warning that proactive regulation is needed to keep doomsday scenarios featuring smarter-than-human A.I.s from becoming a reality. Zuckerberg imagines a rosier future, arguing that premature regulation of A.I. will hold back helpful tech progress.
Each has accused the other of ignorance. Who's right in this battle of the tech titans?
A.I. expert to Musk: Sorry, but you don't know what you're talking about
If you're looking for a referee, you could do a lot worse than roboticist Rodney Brooks. He is the founding director of MIT's Computer Science and Artificial Intelligence Lab, and the co-founder of iRobot and Rethink Robotics. In short, he's one of the top minds in the field. So what does he think of the whole Zuckerberg vs. Musk smackdown?
In a wide-ranging interview with TechCrunch, Brooks came down pretty firmly on the side of optimists like Zuckerberg:
There are quite a few people out there who've said that A.I. is an existential threat: Stephen Hawking, Astronomer Royal Martin Rees, who has written a book about it, and they share a common thread: they don't work in A.I. themselves. For those who do work in A.I., we know how hard it is to get anything to actually work through product level.
Here's the reason that people -- including Elon -- make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn't.] When people saw DeepMind's AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, 'Oh my god, this machine is so smart, it can do just about anything!' But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].
Brooks also argues against Musk's call for early regulation of A.I., saying it's unclear exactly what should be prohibited at this stage. In fact, the only form of A.I. he would like to see regulated is self-driving cars -- such as those being developed by Musk's Tesla -- which Brooks says present imminent, very real practical problems. (For example, should a 14-year-old be able to override and "drive" an obviously malfunctioning self-driving car?)
Are you more excited or worried about the future of artificial intelligence?