Challenging the Wisdom of Crowds
If you've always been a skeptic about the so-called wisdom of crowds, you've got some great new ammunition.
A recent paper in the journal Proceedings of the Royal Society (I'm not making up that title) by Iain Couzin, a professor in Princeton's Department of Ecology and Evolutionary Biology, and Albert Kao, his student, argues that "the conventional view of the wisdom of crowds may not be informative in complex and realistic environments, and that being in small groups can maximize decision accuracy across many contexts."
By "small groups," Couzin and Kao mean fewer than a dozen people. Specifically, Couzin told Drake Bennett in BloombergBusinessWeek that "there's a small optimal group size of eight to 12 individuals that tends to optimize decisions."
'Wisdom of Crowds' as a Contemporary Concept
As Bennett points out, the "wisdom of crowds" phrase comes from New Yorker writer James Surowiecki, whose book about the concept came out in 2004. Ten years later, it's still relevant to us, as Bennett explains:
Its thesis is nicely summed up in its opening, which describes the 19th-century English scientist Francis Galton's realization, while attending a county fair, that in a competition to guess the weight of an ox the average of all of the guesses people had submitted (787 in all) was almost exactly right: 1,197 pounds vs. the actual weight of 1,198 pounds, a degree of accuracy that no individual could attain on his own....
The implication is that the bigger the crowd, the greater the accuracy....The idea has a particular resonance at a time when online businesses from Amazon.com to Yelp rely on aggregated user reviews, and social networks such as Facebook sell ads that rely in part on showing you how many of your friends "like" something.
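Galton's anecdote is easy to reproduce with a toy simulation. The model below is our own assumption, not Galton's actual data: each of the 787 fairgoers guesses the true weight plus independent random noise. Averaging the guesses lands remarkably close to the truth, which is the statistical engine behind the "wisdom of crowds."

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # pounds: the ox's actual weight in Galton's anecdote

# Toy model (our assumption, not Galton's data): each of the 787 fairgoers
# guesses the true weight plus independent noise with a 150-pound spread.
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(787)]

# The crowd's estimate is just the average of all the guesses.
crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))
```

With 787 independent guesses, the typical error of the average shrinks to a few pounds even though individual guesses are off by 150 pounds or more.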
Couzin and Kao are not the first academics to challenge the principles of crowd wisdom, but they are doing it from a biological perspective. Previously, Sinan Aral, the David Austin Professor of Management and an associate professor of information technology and marketing at the MIT Sloan School of Management, challenged the merits of user reviews in a fascinating article called "The Problem With Online Ratings."
In the article, Aral points out that online ratings are prone to herding: They are often disproportionately positive, because in many cases reviewers tend to pile on each other's glowing ratings.
"The distributions of product ratings on Amazon.com include far more extreme positive (five-star) than negative (one-star or two-star) or generally positive (three-star or four-star) reviews," he writes. "Trends toward positivity have also been observed in restaurant ratings and movie and book reviews on a variety of different websites."
In short, when it comes to online ratings, Aral argues, the crowds are not really providing an accurate assessment of the book, restaurant, or movie that they are rating. There's positive inflation.
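The herding dynamic Aral describes can be sketched with a toy simulation. This is our own construction, not Aral's model: each reviewer blends a private opinion with the running average displayed on the page, and an early five-star review drags the whole distribution upward.

```python
import random

random.seed(0)

TRUE_QUALITY = 3.0   # assumed "honest" average opinion of the product
NOISE = 1.0          # spread of private opinions (assumed)
N_REVIEWERS = 1000

def mean_rating(herding_weight, first_rating):
    """Average rating when each reviewer blends a private opinion
    with the running average shown on the page.

    herding_weight=0 gives fully independent ratings; higher values
    mean reviewers lean more on what earlier reviewers said.
    """
    ratings = [first_rating]
    for _ in range(N_REVIEWERS - 1):
        private = random.gauss(TRUE_QUALITY, NOISE)
        shown = sum(ratings) / len(ratings)
        blended = (1 - herding_weight) * private + herding_weight * shown
        ratings.append(min(5.0, max(1.0, blended)))  # clamp to the 1-5 scale
    return sum(ratings) / len(ratings)

# Same early 5-star review, with and without herding.
independent = mean_rating(herding_weight=0.0, first_rating=5.0)
herded = mean_rating(herding_weight=0.8, first_rating=5.0)
print(round(independent, 2), round(herded, 2))
```

In the independent case, one glowing early review washes out; with herding, the final average stays inflated well above the product's "honest" quality, which is the positive skew Aral observes.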
The 4 Rules of Wise Crowds
As with most concepts that become popular, there's a gap between the actual concept and what most people believe is the concept. (Sometimes a decision is just a decision, not a "tipping point" or a "paradigm shift.")
Therefore, it's important to point out that Surowiecki's conception of crowd wisdom does not actually apply to the province of online ratings, or any group decision wherein the individuals would be influenced by what other individuals in the group are thinking.
"Wise crowds," notes the Publishers Weekly summary of Surowiecki's book, have four qualities:
(1) diversity of opinion;
(2) independence of members from one another;
(3) decentralization; and
(4) a good method for aggregating opinions.
Most online ratings systems, whether Amazon.com ratings, Yelp ratings, or Facebook likes, do not satisfy the second of those four qualities.
Yes, technically, every member is free to give that book, movie, or restaurant whatever rating she likes. But as Aral convincingly argues, there's something about seeing a preponderance of positive reviews or "likes" that robs most site visitors of true independence. The herding effect takes over. It is entirely different from casting a blind ballot, the way you might if you were guessing an ox's weight or how many jellybeans were in a jar.
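Why does independence matter so much? When every guess carries a shared error, a bias everyone picked up from the same cue, averaging stops helping: the shared error never cancels out, no matter how big the crowd grows. The simulation below is a toy model of our own devising (not Couzin and Kao's analysis), reusing the ox-weight setup to compare blind-ballot guessing with guessing under a shared bias.

```python
import random

random.seed(1)

TRUTH = 1198.0  # reusing the ox weight as the quantity being estimated

def crowd_rms_error(n, shared_sd, private_sd, trials=500):
    """Root-mean-square error of the crowd's average guess.

    shared_sd > 0 adds a bias common to every member -- a stand-in for
    herding or a shared misleading cue. shared_sd = 0 is the independent,
    blind-ballot case.
    """
    sq_errors = []
    for _ in range(trials):
        shared_bias = random.gauss(0, shared_sd)  # error everyone shares
        avg = sum(TRUTH + shared_bias + random.gauss(0, private_sd)
                  for _ in range(n)) / n
        sq_errors.append((avg - TRUTH) ** 2)
    return (sum(sq_errors) / trials) ** 0.5

for n in (10, 100, 1000):
    print(n,
          round(crowd_rms_error(n, shared_sd=0, private_sd=150), 1),
          round(crowd_rms_error(n, shared_sd=50, private_sd=150), 1))
```

In the independent case the error keeps shrinking as the crowd grows; with a shared bias it plateaus near the size of that bias, so most of the accuracy is already captured by a fairly small group.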
According to Aral, the takeaways here for leaders include: (1) take positive online ratings with a grain of salt; and (2) harness the herding effect for your own products: "Systematic policies to encourage satisfied consumers to rate early on could change the minds of future consumers to feel more positively toward the products or services they are rating."
The '2 Pizza Rule' for Crowds
There's one more implication of Couzin and Kao's finding that "there's a small optimal group size of eight to 12 individuals that tends to optimize decisions."
On some level, it's a validation of Jeff Bezos' well-known "two pizza" rule for the ideal team size, which states: "If it takes more than two pizzas to feed them, the team is too big."
Bezos is not the only leader who has championed small team sizes. As Sarah Miller Caldicott points out in Midnight Lunch, her book about Thomas Edison's theories of collaboration, there's a parallel between Bezos' beliefs and those of Ricardo Semler, CEO of Brazilian manufacturer Semco SA and author of The Seven-Day Weekend. Semler suggests that six to ten is the ideal team size. "Our units are always a size that permits people to know each other," he writes.
Why are these smaller teams more productive? Wharton management professor Jennifer Mueller agrees with Semler. She notes that "there are costs to collaborating. In larger teams, one of those costs is that people may not have the time and energy to form relationships that really help their ability to be productive."
In conclusion: Crowds have their place. But there's a growing body of academic literature supporting the decision-making power of small groups.