If you have ever marveled at the power of computers to think, learn, and process information, you owe a debt of gratitude to Marvin Minsky. The long-time MIT professor spent his life pioneering advances in what we now know as artificial intelligence. Minsky died in Boston on Sunday at age 88. His family told the New York Times that the cause was a cerebral hemorrhage.

His achievements in the field of A.I. research and elsewhere are so vast, it is difficult to summarize them. The short list, taken from stories written about Minsky in the New York Times, Popular Science, MIT Technology Review, and the New Yorker, includes:

  • In 1951, while studying mathematics at Princeton, he built the first learning machine. It was an artificial neural network built from vacuum tubes. He called it the Stochastic Neural Analog Reinforcement Calculator, or SNARC. It was, according to Popular Mechanics, "capable of machine learning at a time when most computers still ran on punchcards." 
  • In 1956, while at Harvard, he invented and built the first confocal scanning microscope, an instrument scientists still use to view detailed, clear images of microscopic samples. 
  • In 1959, he co-founded MIT's Artificial Intelligence Laboratory, which remains a force in A.I. research to this day. His co-founder, John McCarthy, coined the term "artificial intelligence." The MIT A.I. Laboratory, notes the Times, also "planted the seed for the idea that digital information should be shared freely, a notion that would shape the so-called open-source software movement. It was also part of the original ARPAnet, the forerunner to the Internet."

On the basis of these and other contributions, Minsky became a go-to expert on A.I., a thought leader before the term was fashionable. Famously, director Stanley Kubrick sought his advice before making his 1968 classic, 2001: A Space Odyssey.

Minsky also left quite a legacy as a teacher and mentor at MIT. His better-known students include Ray Kurzweil, the inventor and futurist, who in many ways has become the new go-to thought leader on artificial intelligence. There's also Danny Hillis, an inventor and entrepreneur whose company, Thinking Machines, was once the market leader in parallel supercomputers. It was profiled by Inc. back in 1995, one year after it had taken a turn for the worse and filed for Chapter 11. Gerald Sussman, an exceptionally decorated A.I. researcher in his own right and a professor of electrical engineering at MIT, was also a student of Minsky's.

So, what lessons can you draw from Minsky's successes?

1. Challenge the status quo.

One is a reminder of how crucial it is to challenge authority--and to directly contest what passes for accepted wisdom if you feel you know better. By all accounts, Minsky possessed this tendency from a young age and never lost it. The New Yorker profile of Minsky, written late in 1981, describes an incident from his childhood:

He recalls taking an intelligence test of some sort when he was about five. One of the questions was what the most economical strategy was for finding a ball lost in a field where the grass was so tall that the ball could not be seen immediately. The standard answer was to go to the center of the field and execute a spiral from the center until the ball was found. Minsky tried to explain to the tester that this was not the best solution, since, because you would have had to cross part of the field to get to the center in the first place, it would involve covering some of the area twice. One should start from the outside and spiral in. The memory of being unable to convince the tester of what appeared to Minsky to be an obvious logical point has never left him.
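Minsky's childhood objection is easy to check for yourself. What follows is a minimal Python sketch, not anything from the test or from Minsky himself: it assumes a square grid of grass cells that each need to be checked once, models the tester's answer as a walk to the center followed by an outward spiral, and models Minsky's answer as a single inward spiral from the edge. The grid model and the cell counting are illustrative assumptions.

```python
# Illustrative sketch only: a discrete-grid model of the "ball in tall grass"
# search problem. The grid abstraction is an assumption, not Minsky's own model.

def spiral_order(n):
    """Return the cells of an n-by-n grid in inward-spiral order,
    starting at the top-left corner and ending at (or near) the center."""
    cells = []
    top, bottom, left, right = 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        for c in range(left, right + 1):              # across the top edge
            cells.append((top, c))
        for r in range(top + 1, bottom + 1):          # down the right edge
            cells.append((r, right))
        if top < bottom and left < right:
            for c in range(right - 1, left - 1, -1):  # back along the bottom
                cells.append((bottom, c))
            for r in range(bottom - 1, top, -1):      # up the left edge
                cells.append((r, left))
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return cells

def compare(n=9):
    inward = spiral_order(n)           # Minsky's answer: spiral in from the outside
    outward = list(reversed(inward))   # the "standard" answer: spiral out from the center
    # To spiral out from the center you must first walk there from the edge,
    # crossing cells that the outward spiral will cover again.
    center = outward[0]
    walk_in = [(center[0], c) for c in range(center[1])]
    double_covered = len(set(walk_in) & set(outward))
    print(f"outside-in spiral: {len(inward)} cells visited, 0 revisits")
    print(f"center-first:      {len(walk_in) + len(outward)} cells visited, "
          f"{double_covered} cells covered twice")

compare()
```

On a 9-by-9 grid, the center-first route visits 85 cells, four of them twice, while the outside-in spiral visits all 81 cells exactly once--the redundancy the five-year-old Minsky was trying to point out.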

2. Simple doesn't mean simplistic. 

Another lesson from Minsky is how important it is for innovators to describe complex topics simply. Minsky could explain detailed concepts in plain English that anyone (who speaks English) could understand. If you want to read a great example of how he did this, look no further than a research proposal that he, McCarthy, and two of their colleagues wrote in 1955.

In unpretentious sentences, they explain why they need funding for a two-month, 10-person study of artificial intelligence. They spell out seven challenges in A.I. they are hoping to address, and they detail, down to the dollar, how they'd spend their grant money. 

For example, under a heading called "Automatic Computers," they wrote: "If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have."

That statement is no longer as true today as it was in 1955. And Minsky's life's work is a big reason why. 

Published on: Jan 26, 2016