There's a great debate going on in Silicon Valley about artificial intelligence, and unfortunately the stakes are rather high: Will we accidentally build a super-smart A.I. that turns on us and kills or enslaves us all?

This might sound like the scenario of a summer disaster movie, but it has worried some pretty big names, from Elon Musk to the late Stephen Hawking.

"Let's say you create a self-improving A.I. to pick strawberries," Musk has said, explaining his fears, "and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever." Humans in the way of this strawberry-pacalypse would be an just an expendable irritant to the A.I.

But surely humans wouldn't be so silly as to accidentally design an A.I. driven to turn all of civilization into one giant berry farm? Perhaps not, but as Janelle Shane, a researcher who trains neural networks (a type of machine-learning algorithm), recently noted on her blog, A.I. Weirdness, it's entirely possible they could do it by mistake.

In fact, it would be far from the first time that humans have thought they were building robots for one task only to turn around and find the robots were gaming the system in ways they never intended. The fascinating post digs into the academic literature to share several examples of robots gone wild. They are funny, clever, and, taken together, more than a little creepy.

1. Who needs legs when you can tumble?

"A simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance," writes Shane.

2. A robot that can can-can.

"Another set of simulated robots were supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block so -- once again -- the robots evolved to be very tall," explains Shane. "The programmer tried to solve this by defining jumping height as the height of the block that was originally the 'lowest.' In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can."

3. Hide the test and you can't fail it.

"There was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted," Shane relates. 

4. Math errors beat jet fuel.

"In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness," says Shane. Hey, that's cheating!

5. An invincible (if destructive) tic-tac-toe strategy.

Once a group of "programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board," Shane notes. "One programmer, rather than designing their algorithm's strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm's strategy was to place its move very, very far away, so that when its opponent's computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game."
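The crash is easy to picture if you assume the opposing program stored the "infinite" board as a dense grid sized to the farthest move, which is one plausible (hypothetical) way to implement it:

```python
# Hypothetical sketch of why the opponent crashed; not the evolved algorithm itself.
def simulate_dense_board(farthest_coordinate):
    n = farthest_coordinate + 1
    # A dense n-by-n grid needs n**2 cells. One move placed a billion squares
    # away implies ~10**18 cells -- far more memory than any machine has.
    return [[0] * n for _ in range(n)]

# simulate_dense_board(1_000_000_000)  # raises MemoryError long before it finishes
```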

6. No useful game glitch goes unexploited.

"Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points," says Shane. 

7. Sorry, pilot.

This example is super high on the creepiness scale: "There was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a 'huge' force, it would overflow the program's memory and would register instead as a very 'small' force. The pilot would die but, hey, perfect score."
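The trick Shane describes is classic integer overflow: push a number past the largest value its storage can hold and it wraps around to something tiny. Here's a toy illustration using 32-bit unsigned wraparound; the original program's exact arithmetic isn't specified, so treat this as a sketch of the general mechanism.

```python
# Toy illustration of overflow wraparound; not the landing-force program itself.
def as_unsigned_32bit(value):
    """Store an integer the way a 32-bit unsigned variable would."""
    return value % 2**32

gentle_force = 1_000
huge_force = 2**32 + 1_000   # just past what 32 bits can hold

print(as_unsigned_32bit(gentle_force))  # 1000
print(as_unsigned_32bit(huge_force))    # 1000 -- the "huge" force registers as tiny
```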

So are we all doomed?

Taken together, these examples suggest that humans are pretty lousy at guessing how robots will solve the problems we set for them, or even how they'll define those problems. So does that mean Shane is just as worried about accidentally building homicidal A.I. overlords as Musk is? Not really, but not because she's sure human programmers have a great handle on the robots they're creating. Instead, she's banking on robot laziness to save us.

"As programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there's another, easier route toward solving a given problem, machine learning will likely find it," she observes. "Fortunately for us, 'kill all humans' is really really hard. If 'bake an unbelievably delicious cake' also solves the problem and is easier than 'kill all humans,' then machine learning will go with cake."

Published on: Jun 14, 2018