
What Nate Silver's Bad World Cup Prediction Says About Big Data

Nate Silver completely blew his prediction for the winner of the Germany vs. Brazil game. It's time for some data realism.

Nate Silver became a popular data geek star during the 2012 presidential election. He got it all right. No one else came close. And when he left the New York Times and moved to ESPN, many expected the mantle of infallibility to follow him.

So when his blog FiveThirtyEight blew a major World Cup prediction, the reactions were strong and merciless. Some said that the competition was no place for big data, which can't understand the intrinsic issues and subtleties that real soccer fans see. Others claimed that Silver ignored some basic data issues.

But this isn't the end of big data in sports or in business. For many entrepreneurs, it could instead be the beginning of a better understanding of what predictive data does and does not do.

When theory and reality collide

The promise behind the big data hype is to see patterns and then make predictions and decisions based on the information. FiveThirtyEight has built predictive data models for a variety of subjects, soccer matches among them. The World Cup is big business, inspiring fan frenzy and a good amount of betting. Knowing what will happen before it does would make a lot of people happy.

In the semifinal match between Germany and Brazil, FiveThirtyEight's model predicted Brazil had a 65 percent chance of winning. The actual outcome not only had Germany on top, but by a score of 7 to 1. In the world of soccer, that's a blowout. According to Silver's model, the chance of Brazil losing by at least six goals to Germany was 0.025 percent.

Oops.
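
To see how a model can put the chance of a six-goal blowout at a few hundredths of a percent, consider a minimal sketch in Python. FiveThirtyEight's actual model (based on ESPN's Soccer Power Index) is far more elaborate; the expected-goals figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Draw one Poisson-distributed goal count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

TRIALS = 1_000_000
germany_xg = 1.3   # hypothetical expected goals for Germany
brazil_xg = 1.5    # hypothetical expected goals for Brazil

# Count simulated matches where Germany wins by six or more goals.
blowouts = sum(
    poisson(germany_xg) - poisson(brazil_xg) >= 6
    for _ in range(TRIALS)
)
print(f"Estimated P(Brazil loses by 6+): {blowouts / TRIALS:.4%}")
```

Under these assumed inputs, the simulation lands in the same tiny-fraction-of-a-percent territory Silver reported. The catch is that such tail estimates are only as good as the inputs, which is exactly where the critics say the model went wrong.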

Eric Goldschein at the blog SportsGrid said the result proved that "sports predictions, odds and data analysis are [BS]." Alex Massie at The New Republic claimed that sometimes crunching numbers can't offer insights.

Silver's "mistakes"

Zeynep Tufekci, an academic who studies the "interactions between technology and society," said that Silver's team made some basic mistakes:

  • They used a statistic that overestimated the strength of Brazil's defense because much of its success allegedly came from uncalled fouls.
  • There was no review by "substantive area experts" or "qualitative pull-outs" of sample data to recognize basic errors in how things were measured.
  • Silver and company underestimated the psychological effect that Brazil's loss of two good players had on those remaining.
  • Individuals or small groups of people don't behave as reliably as statistical models assume.

In short, previous bad refereeing in games involving Brazil masked the signs of weakness that should have informed the modeling. And even without that consideration, you can't assume that people will always behave the way you think.

Getting to a new data reality

Let's start with the utter naysayers. Massie's comparison to baseball was an interesting one. That sport has shown that data analysis can be helpful, as the book Moneyball documented. However, there is a limit to what you can predict. Statistics works over large numbers of events, not individual instances. The techniques that might help bring a team to the World Series wouldn't necessarily help in the handful of games that decide the final outcome of a season.
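
A quick simulation makes the point concrete. The 55 percent per-game edge below is an assumed figure, not any real team's; it shows how an advantage that dominates a 162-game season can still lose a short series a large share of the time.

```python
import random

random.seed(1)

def wins_best_of_seven(p):
    """Simulate one best-of-seven series for a team that wins
    each game with probability p; return True if it takes 4 games."""
    wins = losses = 0
    while wins < 4 and losses < 4:
        if random.random() < p:
            wins += 1
        else:
            losses += 1
    return wins == 4

TRIALS = 100_000
p_game = 0.55
series_wins = sum(wins_best_of_seven(p_game) for _ in range(TRIALS))
print(f"Wins {p_game:.0%} of games, but wins a best-of-7 "
      f"only {series_wins / TRIALS:.1%} of the time")
```

The better team here still drops nearly four series in ten. Over a long season the edge is decisive; over a handful of games, it isn't.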

Of course statistics in a game like soccer, or in a business endeavor, can be useful. You might see patterns in the overall game or market. Statistics on a team or even one player over time might uncover strengths and weaknesses, and go further by showing their factual, not presumed, importance to winning. Furthermore, a 65 percent chance of winning doesn't mean you necessarily win. It means that, if you could replay the same circumstances 100 times, roughly 65 of the outcomes would go one way. Any single match could just as easily be one of the other 35.
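
Here is a minimal sketch of that replay idea, using FiveThirtyEight's published 65 percent figure. Even the 65-of-100 split is only an average: each block of 100 hypothetical replays comes out a little differently.

```python
import random

random.seed(7)

def favorite_wins(p=0.65, n=100):
    """Count the favorite's wins across n independent replays."""
    return sum(random.random() < p for _ in range(n))

for block in range(5):
    print(f"Replay block {block + 1}: favorite wins {favorite_wins()} of 100")

# Any one real match is a single draw from this distribution, and it
# falls on the 35 percent side roughly one time in three.
```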

Tufekci is correct that predictions are no better than the quality of the data and the model you employ. The model is where she makes a possibly incorrect presumption: that FiveThirtyEight didn't use subject experts or analytic techniques to uncover potential problems. Maybe they did, and rationally, if incorrectly, concluded that the necessary correction was too large.

Put differently, if all the games up until a given point were badly refereed, is it really smart to assume that suddenly everything will change? Maybe it is, given the high-profile nature of an international competition. Then again, there were significant complaints about poor officiating throughout the tournament.

All this comes down to a single takeaway for entrepreneurs: data science involves a lot of art and does not guarantee what will happen. Big data and predictive techniques are supposed to inform smart decision making, not automate it. If you have the resources, by all means use data techniques. Just don't fetishize the results or elevate analysts to the status of high priests. In your company, you are responsible for the ultimate decision. Not a lieutenant or a machine.

Last updated: Jul 11, 2014

ERIK SHERMAN | Columnist

Erik Sherman's work has appeared in such publications as The Wall Street Journal, The New York Times Magazine, and Fortune. He also blogs for CBS MoneyWatch.

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.


