“I was surprised at how eerily accurate the crowd’s estimates were,” Severts says.
In his book about smart crowds, Surowiecki cites similar examples of otherwise ordinary people making extraordinary decisions. Take the quiz show Who Wants to Be a Millionaire? Contestants stumped by a question are given the option of telephoning an expert friend for advice or of polling the studio audience, whose votes are averaged by a computer. “Everything we know about intelligence suggests that the smart individual would offer the most help,” Surowiecki writes. “And in fact the ‘experts’ did okay, offering the right answer—under pressure—almost 65 percent of the time. But they paled in comparison to the audiences. Those random crowds of people with nothing better to do on a weekday afternoon than sit in a TV studio picked the right answer 91 percent of the time.”
Although Surowiecki readily admits that such stories by themselves don’t amount to scientific proof, they do raise a good question: If hundreds of bees can make reliable decisions together, why should it be so surprising that groups of people can too? “Most of us, whether as voters or investors or consumers or managers, believe that valuable knowledge is concentrated in a very few hands (or, rather, in a very few heads). We assume that the key to solving problems or making good decisions is finding that one right person who will have the answer,” Surowiecki writes. But often that’s a big mistake. “We should stop hunting and ask the crowd (which, of course, includes the geniuses as well as everyone else) instead. Chances are, it knows.”
Severts was so impressed by his first few efforts to harness collective wisdom at Best Buy that he and his team began experimenting with something called prediction markets, which represent a more sophisticated way of gathering forecasts about company performance from employees. In a prediction market, an employee uses play money to bid on the outcome of a question, such as “Will our first store in China open on time?” A correct bid pays $100, an incorrect bid pays nothing. If shares in the “yes” outcome (yes, the store will open on time) are currently trading at $80, for example, that means the group as a whole believes there’s an 80 percent chance the store will open on schedule. An employee who is more optimistic, believing there’s a 95 percent chance, might take the bet: at that probability a share bought for $80 pays off $95 on average, an expected gain of $15 per share. In the case of the new store, which had been scheduled to open in Shanghai in December 2006, the prediction market took a dive, falling from $80 a share to $50 eight weeks before the opening date—even though official company forecasts at the time were still positive. In the end, the store opened a month late.
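To make the arithmetic of that example explicit, here is a minimal sketch of a single trade. The $100 payout, $80 share price, and 95 percent personal estimate are the figures from the story; the function names are purely illustrative and not part of Best Buy’s actual system.

```python
# A minimal sketch of the arithmetic behind one prediction-market bid.
# Figures ($100 payout, $80 price, 95% personal estimate) come from the
# example above; the function names are illustrative only.

PAYOUT = 100.0  # a correct bid pays $100; an incorrect bid pays nothing


def implied_probability(share_price: float) -> float:
    """The crowd's collective estimate that the event will happen."""
    return share_price / PAYOUT


def expected_profit(share_price: float, personal_probability: float) -> float:
    """Average gain per share for a trader who buys at the current price."""
    return personal_probability * PAYOUT - share_price


print(implied_probability(80.0))     # 0.8  -> the market sees an 80% chance
print(expected_profit(80.0, 0.95))   # 15.0 -> the optimist expects $15 a share
```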
“That first drop was an early warning signal,” Severts says. “Some piece of new information came into the market that caused the traders to radically change their expectations.” What that new information might have been about, Severts never found out. But to him it didn’t really matter. The prediction market had proved its ability to overcome the many barriers to effective communication in a large company. If anyone was listening, the alarm bells were ringing loud and clear.
As this story suggests, there may be several good reasons for companies to pay attention to prediction markets, which are good at pulling together information that may be widely scattered throughout a corporation. For one thing, they’re likely to provide unbiased outlooks. Since bids are placed anonymously, markets may reflect the true opinions of employees, rather than what their bosses want them to say. For another thing, they tend to be relatively accurate, since the incentives for bidders to be correct—from T-shirts to cash prizes—encourage them to get it right, using whatever unique resources they might have.
Above and beyond these factors is the powerful way prediction markets leverage the simple mathematics of diversity of knowledge, which, when applied with a little care, can turn a crowd of otherwise unremarkable individuals into a comparative genius. “If you ask a large enough group of diverse, independent people to make a prediction or estimate a probability, and then average those estimates, the errors each of them makes in coming up with an answer will cancel themselves out,” Surowiecki explains. “Each person’s guess, you might say, has two components: information and error. Subtract the error, and you’re left with the information.”
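Surowiecki’s information-plus-error argument can be sketched in a few lines of code, under the idealized assumption that each guess equals the true value plus independent, unbiased noise; the numbers below are invented purely for illustration.

```python
# Toy illustration of error cancellation: each guess = true value + unbiased noise.
# The true value, noise level, and crowd size are arbitrary choices for the demo.
import random

random.seed(42)
TRUE_VALUE = 1000.0      # the quantity the crowd is estimating
NOISE = 200.0            # how widely individual guesses scatter

crowd = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(500)]
crowd_average = sum(crowd) / len(crowd)

typical_individual_error = sum(abs(g - TRUE_VALUE) for g in crowd) / len(crowd)
crowd_error = abs(crowd_average - TRUE_VALUE)

print(f"typical individual error: {typical_individual_error:.1f}")   # roughly 160
print(f"error of the crowd's average: {crowd_error:.1f}")            # far smaller
```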
The house-hunting bees demonstrate this math very clearly. When several scouts return to the swarm from checking out the same perfect tree hollow, for example, they frequently give it different scores—like opinionated judges at an Olympic ice-skating competition. One bee might show great enthusiasm for such a high-quality site, dancing fifty waggle runs for it. Another might dance only thirty runs for it, while a third might dance only ten, even though she, too, approves of the site.
Scouts returning from a less attractive site, meanwhile, like a hole in a stone wall, might be reporting their scores on the swarm cluster at the same time, and they could show just as much variation. Let’s say these three bees dance forty-five runs, twenty-five runs, and five runs, respectively, in support of this medium-quality site. “You might think, gosh, this thing looks like a mess. Why are they doing it this way?” Tom Seeley says. “If you were relying on just one bee reporting on each site, you’d have a real problem, because one of the bees that visited the excellent site danced only ten runs, while one of the bees that visited the medium site did forty-five.” That could easily mislead you.
Fortunately for the bees, their decision-making process, like that of the Olympic judges, doesn’t rely on the opinion of any single individual. Just as the scores given by the international judging committee are averaged after each skater’s performance, so the bees combine their assessments through competitive recruitment. “At the individual level, it looks very noisy, but if you say, well, what’s the total strength of all the bees from the excellent site, then the problem disappears,” Seeley explained. Add the three scores for the tree hollow—fifty, thirty, and ten—and you get a total of ninety waggle runs. Add the scores for the hole in the wall—forty-five, twenty-five, and five—and you get seventy-five runs. That’s a difference of fifteen runs, or 20 percent, between the two sites, which is more than enough for the swarm to choose wisely.
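Written out as a simple tally, the comparison looks like this. The waggle-run counts are the ones from the example above, and summing them per site is a stand-in for the way recruitment adds up the scouts’ individual scores.

```python
# The bees' tally from the example above, expressed as a simple aggregation.
# Summing each site's waggle runs is a simplification of the recruitment
# process, but it shows why individual noise washes out in the totals.

waggle_runs = {
    "tree hollow (excellent)": [50, 30, 10],
    "hole in stone wall (medium)": [45, 25, 5],
}

totals = {site: sum(runs) for site, runs in waggle_runs.items()}
print(totals)                        # tree hollow: 90, hole in wall: 75
print(max(totals, key=totals.get))   # the swarm's pick: the tree hollow
```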
“The analogy is really quite powerful,” Surowiecki says. “The bees are predicting which nest site will be best, and humans can do the same thing, even in the face of exceptionally complex decisions.”
The key to such calculations, as we saw earlier, is the diversity of knowledge that individuals bring to the table, whether they’re scout bees, astronauts, or members of a corporate board. The more diversity the better—meaning the more strategies for approaching problems, the better; the more sources of information about the likelihood of something taking place, the better. In fact, Scott Page, an economist at the University of Michigan, has demonstrated that, when it comes to groups solving problems or making predictions, being different is every bit as important as being smart.
“Ability and diversity enter the equation equally,” he states in his book, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. “This result is not a political statement but a mathematical one, like the Pythagorean Theorem.”
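The equation Page has in mind is usually given as his diversity prediction theorem; one standard formulation, not spelled out in the passage itself, is the identity below, in which the crowd’s error shrinks one-for-one as the diversity of its predictions grows.

```latex
% Diversity prediction theorem (one standard formulation):
% crowd error = average individual error - prediction diversity,
% where \theta is the true value, s_i the i-th prediction, \bar{s} their mean.
\[
\underbrace{(\bar{s}-\theta)^2}_{\text{crowd error}}
\;=\;
\underbrace{\frac{1}{n}\sum_{i=1}^{n}(s_i-\theta)^2}_{\text{average individual error}}
\;-\;
\underbrace{\frac{1}{n}\sum_{i=1}^{n}(s_i-\bar{s})^2}_{\text{prediction diversity}}
\]
```

Because the diversity term is subtracted in full, being different counts just as much as being individually accurate, which is the sense in which ability and diversity “enter the equation equally.”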
By diversity, Page means the many differences we each have in the way we approach the world—how we interpret situations and the tools we use to solve problems. Some of these differences come from our education and experience. Others come from our personal identity, such as our gender, age, cultural heritage, or race. But primarily he’s interested in our cognitive diversity—differences in the problem-solving tools we carry around in our heads. When a group is struggling with a difficult problem, it helps if each member brings a different mix of tools to the job. That’s why,