Ethics questions are as old as the first human who had to decide whether to club or welcome the neighbor from the next cave over who came visiting. But that doesn't mean each generation doesn't face its own unique ethical quandaries.
When planes were first developed, some wrung their hands that they would make warfare unimaginably uglier; a few even called for the destruction of all planes. Then we figured out how to split the atom and even more terrifying questions emerged. How do you balance the benefits and terrors of a technology that can literally raze cities in an instant?
The bigger the technological development, in other words, the thornier the ethical quandaries it often creates. That has been true since automobiles transformed our lifestyles while simultaneously killing thousands a day, and it's true in our current reality of mind-bending technological progress, with entrepreneurs aiming rockets at Mars and envisioning artificial intelligence that makes humans look about as smart as a house cat in comparison.
So what are some of the biggest ethical questions being raised by these new technologies? You need to look no further than the TED stage to hear about some of the thorniest. The TED Ideas blog recently rounded up talks on the subject, including these, which are guaranteed to make you think:
1. Should parents be able to edit their babies' genes?
"If you had a baby with a congenital heart defect and a doctor could remove the gene, would you do it to save your baby's life? Most people probably would. But take that another step further: Would you make your baby a little more intelligent? A little more beautiful?" asks Jennifer Doudna, co-inventor of the CRISPR gene editing technology, in this talk.
2. Should a driverless car kill its passenger to save five strangers?
Programming self-driving cars isn't just about teaching them to avoid obstacles and navigate intersections. It also involves teaching them how to make tough ethical choices.
"A driverless car is on a two-way road lined with trees when five kids suddenly step out into traffic. The car has three choices: to hit the kids, to hit oncoming traffic or to hit a tree. The first risks five lives, the second risks two, and the third risks one. What should the car be programmed to choose? Should it try to save its passenger, or should it save the most lives?" asks MIT computational social scientist Iyad Rahwan in this talk. And further: "Would you be willing to get in a car knowing it might choose to kill you?"
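To see why this is a programming question and not just a philosophy seminar, the dilemma Rahwan describes can be sketched as a few lines of hypothetical code: a purely utilitarian policy that always picks whichever option risks the fewest lives. The scenario data, option names, and function below are illustrative assumptions for this article, not how any real self-driving system works.

```python
# A hypothetical, purely utilitarian decision rule: minimize lives at risk.
# The options and counts mirror Rahwan's scenario; none of this reflects
# an actual autonomous-vehicle system.

def choose_action(options):
    """Return the option that puts the fewest lives at risk."""
    return min(options, key=lambda option: option["lives_at_risk"])

scenario = [
    {"action": "hit the kids", "lives_at_risk": 5},
    {"action": "hit oncoming traffic", "lives_at_risk": 2},
    {"action": "hit a tree", "lives_at_risk": 1},  # the passenger's life
]

choice = choose_action(scenario)
print(choice["action"])  # the utilitarian rule sacrifices the passenger
```

Notice what the sketch makes plain: once the rule is "save the most lives," the code mechanically selects the option that kills its own passenger, which is exactly the tension Rahwan asks his audience to sit with.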
3. What morals should we program into intelligent machines?
Superintelligent robots with no morals at all are clearly a bad idea, but then which morals do we program into them? "Who should decide which moral beliefs are the most 'right'? Should every country have to agree to a set of core values? Should the robot be able to change its own mind?" are among the tough questions considered by techno-sociologist Zeynep Tufekci.
Which technology makes you the most nervous when it comes to ethical concerns?