I'm about to seriously challenge your ability to envision the future. If you're up to it, read on.
If you've been following the work of Boston Dynamics (currently owned by Softbank) you've probably seen some of their four-legged and wheeled robots, which are able to navigate all sorts of obstacles and remain standing after being kicked, shoved, and pushed. Some of these robots, such as BigDog, WildCat, and Spot, have an amazing ability to mimic an animal's gait. Last year, however, the company introduced a two-legged anthropomorphic robot called Atlas, which was based on a more primitive biped called Petman.
When I first saw Atlas I was impressed by its (his?) ability to perform some basic human-like tasks, such as picking up objects and resisting a human's attempts to knock it over. Still, it most often looked as though it would have a tough time passing a field sobriety test when it attempted to traverse even moderately rough terrain.
Things are changing fast.
Boston Dynamics just released another video of Atlas in which it navigates elevated objects put in its way. If you don't feel just a bit creeped out watching this then you might at least feel somewhat inadequate--at least until the 50-second mark in the video; that will make you feel much better.
The first thing I thought of after seeing Atlas running and jumping was a Star Wars-like image of these things in droves on a battlefield. It's no surprise that the MIT spinoff, which was first acquired by Google X and then by Softbank, received much of its early funding from DARPA.
Slaughterbots: The New Arms Race
With the acceleration of AI, autonomous devices, and robots we're obviously entering a new arms race, and this one has no visible finish line, creating a sometimes frightening view of the future. So, what can we do? What should we do? It's a question that many people figure will sort itself out. I seriously doubt that, both because of the pace of innovation in these areas and because these technologies can do harm in ways that utterly ignore borders and perimeters of any sort, putting the capacity for massive harm within reach of even a very small group or individual.
In 1942 Isaac Asimov introduced us to his Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Since that time these laws have been referenced in thousands of works and have become a mantra of robot proponents. They seem simple enough on the surface. So, is this the answer? Most definitely not. In a post for the Brookings Institution, Peter Singer points out the already absurd fiction behind these laws: "You don't arm a Reaper drone with a Hellfire missile or put a machine gun on a MAARS (Modular Advanced Armed Robotic System) not to cause humans to come to harm. That is the very point!"
Consider how you would enforce Asimov's three laws today. If you built a robot with the intention of harming another human, is there some sort of fail-safe built into every silicon chip that will prevent the robot from causing harm? Do programming languages such as Python have a standard piece of code that every software application must run to ensure it isn't harming people? Of course not!
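To make that concrete, here is a purely hypothetical sketch of what a "three laws" gate might look like in Python. Every name and rule in it is my own invention for illustration, not any real safety API; the point is that nothing in the language, the hardware, or the toolchain compels any program to call such a check.

```python
# A hypothetical "three laws" gate -- purely illustrative; no real
# chip, language, or runtime enforces anything like this.

def violates_first_law(action: dict) -> bool:
    """Would this action injure a human being?

    In reality, answering this requires understanding the physical
    world and the consequences of an action -- a far harder problem
    than checking a flag, which is part of why the laws are fiction.
    """
    return action.get("harms_human", False)

def three_laws_check(action: dict) -> dict:
    """Refuse any action that violates the First Law."""
    if violates_first_law(action):
        raise PermissionError("First Law: action would harm a human")
    # The Second and Third Laws would be checked here -- but a weapons
    # system is built precisely to bypass a gate like this one.
    return action

# Nothing compels a developer to route actions through the gate:
approved = three_laws_check({"task": "pick up object", "harms_human": False})
```

Even if such a gate existed, it only constrains software that voluntarily calls it, which is exactly Singer's point about armed systems.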
This is a ridiculous approach. It may soothe our conscience or calm our anxiety to believe that a simple set of three, four, or five rules can govern the evolution of AI, but it would be an illusion. Human nature isn't going to change. The challenge isn't just finding ways to eliminate the harm AI and robots can do, but to leverage these same technologies to the greatest degree possible so that we do significantly more good than harm. Is that too fatalistic a viewpoint? I don't think so. In fact, it may be the only viable viewpoint. Anything else is just pacifying us into inaction.
That doesn't mean we should sit idly by and allow the development of AI-driven killer robots any more than we condone the use of chemical warfare and nuclear weapons. Earlier this week a group convened in Geneva to discuss banning fully autonomous weapons. The group is part of a larger coalition of 125 nations, which in 1980 formed the Convention on Conventional Weapons (CCW), "a framework treaty that prohibits or restricts certain weapons considered to cause unnecessary or unjustifiable suffering." I first found out about the group after watching the video below, "Slaughterbots."
Are slaughterbots part of the inevitable future of AI and robots? It would be naive to say "absolutely not." There's little doubt that any sufficiently advanced technology will be used to do harm. Our track record on that last point has been pretty consistent throughout history.
However, this is where I'd like to challenge your view of the future for just a moment.
Assume that AI and robots will result in injury and death for humans. In addition, let's assume that the human toll is one that could have been entirely avoided without the advent of AI. Still with me? Good. Now, here's my question to you. Can you envision a degree of benefit and enough human value to offset that cost? Your initial reaction will be "of course not; every life has value and is worth saving." Agreed! So, why do we allow automobiles to kill nearly 1.5 million people and injure another 50 million each year globally? Why do we put up with electricity, which kills about 400 people per year in the US alone? Why do we allow planes to fly when 30,000 people have been killed in nearly 2,000 aircraft incidents since 1959?
The answer is an easy one from where we stand today: because these are each essential technologies that have contributed to, saved, and improved the lives of billions more people. The math always makes sense in retrospect. However, I would venture to say that if you'd recited these same statistics to someone in 1917, it just wouldn't add up; they would have found more than enough reasons to ban cars, planes, and electricity.
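As a back-of-envelope sketch of that retrospective math, using only the figures quoted above (the 59-year span I use for "since 1959" is my own rough assumption):

```python
# Back-of-envelope comparison of the fatality figures quoted above.
road_deaths_per_year = 1_500_000   # global road deaths per year
aviation_deaths_total = 30_000     # aviation deaths since 1959
aviation_years = 59                # rough span of that record (assumption)

aviation_deaths_per_year = aviation_deaths_total / aviation_years
ratio = road_deaths_per_year / aviation_deaths_per_year

print(f"Aviation: roughly {aviation_deaths_per_year:.0f} deaths per year")
print(f"Road traffic kills about {ratio:.0f} times as many people per year")
```

Sixty years of aviation fatalities amount to only a few hundred deaths per year, yet we accept roads that kill thousands of times more, because in each case the benefits came to dwarf the toll.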
And that's precisely the challenge of envisioning the future. We constantly look at the future through the lens of the past, seeing only the reasons not to move forward. In short, it's much easier to project the threat of any technology over its benefits. As economist Paul Romer once said, "Every generation has perceived the limits to growth that finite resources and undesirable side effects would pose if no new recipes or ideas were discovered. And every generation has underestimated the potential for finding new recipes and ideas. We consistently fail to grasp how many ideas remain to be discovered. Possibilities do not add up. They multiply."
And it's in that multiplication, the strange and wonderful mathematics of progress, that the future always brings far greater benefits than we can possibly imagine.
Hard to envision, isn't it? Only if we fail to do the math correctly.