The Book Club

The Virtues of Group Decision-Making

Malcolm,

Actually, I don’t think that I oversold the virtues of distributed decision-making. (Of course, I would say that.) To be honest, I don’t know enough about the details of mammography to be sure of all the potential pitfalls of the plan to outsource the reading of X-rays to China. But assuming that there is (at least in theory) an objective answer to the question “Does this mammogram show evidence of cancerous or precancerous cells?” I have little doubt that if you averaged the judgments of a group of moderately trained people, you would end up with results as good as those produced by a board-certified radiologist. (The group wouldn’t have to be as big as 1,000 people, either.) NASA, for instance, recently ran an online experiment called “clickworkers” to test whether the collective judgment of ordinary people would be of any use in finding and classifying craters on Mars. You could go to the site, get trained (for a couple of hours, I think), and then click away. The result, in NASA’s words: “the automatically computed consensus of a large number of clickworkers is virtually indistinguishable from the inputs of a geologist with years of experience in identifying Mars craters.” And these people weren’t even being paid.

Why does this work? The key is that even though each person in the group is making mistakes (often, lots of them), as long as the group is large enough and diverse enough, the errors people make effectively cancel one another out. And what remains, remarkably often, is the information you’re looking for. This sounds—at least to some people—implausible, or pseudo-mystical, or both. But, as you mention, in my book there are myriad examples of this phenomenon at work, solving problems from the simple (guessing the number of jellybeans in a jar) to the mind-bogglingly complex (finding a lost submarine on the basis of a few fragments of information).
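For readers who want to see the arithmetic behind this, here is a minimal simulation of the jellybean case. Everything in it (the true count, the size and bias of each guesser’s error) is invented purely for illustration, not drawn from any real study:

```python
import random

# Toy model of the jellybean jar: 1,000 independent guessers,
# each noisy and a little biased. All numbers are assumptions
# chosen only to illustrate how errors cancel.
random.seed(42)
TRUE_COUNT = 850  # hypothetical number of beans in the jar

def individual_guess():
    # Each guess = truth + a shared bias (-40) + independent noise.
    return TRUE_COUNT + random.gauss(-40, 250)

guesses = [individual_guess() for _ in range(1000)]
group_estimate = sum(guesses) / len(guesses)
typical_miss = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"True count:              {TRUE_COUNT}")
print(f"Group average estimate:  {group_estimate:.0f}")
print(f"Typical individual miss: {typical_miss:.0f}")
```

Run it and the group’s average lands within a few dozen beans of the truth, while the typical individual misses by a couple of hundred. Notice, too, that the shared bias does not wash out; only the independent noise does. That is exactly why the group has to be diverse and independent, not merely large.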

You’re right, though, that “nonexpert groups” aren’t always better than expert individuals. In the first place, in some cases—like situations where decisions need to be made in a matter of seconds—collective decision-making is impractical. Other situations—like flying a plane or performing surgery—seem to be tailored pretty well to individuals. More important, there are problems where you need to know a lot just to understand the question you’re trying to answer. In those cases, relying on a group of laypeople may be futile.

What’s important, though, is that even in those situations where expert knowledge seems necessary, you’re better off relying on the judgment of a group of experts rather than a single expert, no matter how brilliant. The truth is that I don’t really think of The Wisdom of Crowds as a defense of laypeople against experts. (As I say in the book, I always assume that in most cases, “crowds” will include experts as well as amateurs.) I think of it as a defense of collective decision-making against our excessive faith in the single, individual decision-maker. Your book, for instance, opens with the story of a Greek kouros that the Getty Museum thought was real, but that in truth was a clever fake. If I were trying to decide whether or not the statue was real, I would put far more trust in the collective judgment of the many experts who came to see the statue than in the judgment of any one of them, even someone like Thomas Hoving, the former director of the Metropolitan Museum of Art, who’s one of the heroes of your book.

I think there are two big problems with relying on a single individual—and those problems exist whether that individual uses rapid cognition or a more deliberate style. The first is that true experts—that is, the real titans—are surprisingly hard to identify. Past performance obviously provides some clue, but you need a very long track record to be sure that someone’s performance is really the result of genuine superiority rather than chance. As Nassim Taleb puts it, it’s easy to be fooled by randomness. Paul Van Riper certainly sounds brilliant, but the Millennium Challenge is a sample of one.

The second, and more important, problem is that even brilliant experts have biases and blind spots, and so they make mistakes. And what’s troubling is that, in general, they don’t know when they’re making them. You suggest in Blink that experts have a better sense than laypeople of the unconscious processes that underlie their instantaneous reactions, and that sounds right to me. But study after study has shown that expert judgments are very poorly calibrated—which means that there’s little correlation between an expert’s confidence in a judgment and its accuracy. (The two great exceptions are weathermen and bridge players.) In other words, experts don’t know when they don’t know something.
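Since “calibration” is a slippery word, here is a small sketch of what those studies measure. The particular numbers (a 60 percent hit rate, confidence drawn between 70 and 100 percent) are invented for the example, not taken from the research:

```python
import random

# Toy model of an overconfident expert: the confidence the expert
# reports is unrelated to whether the underlying call is correct.
# Both distributions below are assumptions made for illustration.
random.seed(7)

def simulated_judgment():
    confidence = random.uniform(0.70, 1.00)  # what the expert says
    correct = random.random() < 0.60         # how often they're right
    return confidence, correct

judgments = [simulated_judgment() for _ in range(10_000)]
very_sure = [ok for conf, ok in judgments if conf >= 0.90]
less_sure = [ok for conf, ok in judgments if conf < 0.80]

print(f"Accuracy when at least 90% confident: {sum(very_sure)/len(very_sure):.2f}")
print(f"Accuracy when under 80% confident:    {sum(less_sure)/len(less_sure):.2f}")
```

A well-calibrated judge is right about 90 percent of the time when claiming 90 percent confidence. This simulated expert is right about 60 percent of the time at every confidence level, which is what “little correlation between confidence and accuracy” looks like in practice.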

To me, that’s one of the (and maybe the) great virtues of collective decision-making: It doesn’t matter when an individual makes a mistake. As long as the group is diverse and independent enough, the errors get corrected and you’re left with the knowledge. And here, oddly enough, Blink and The Wisdom of Crowds intersect quite nicely. A lot of your book, as I said, is about how biases and prejudices and inexperience can lead us astray when we rely on rapid cognition. My book suggests that in lots of cases, if you aggregate those flawed judgments, you can get rid of the flaws and keep the benefits of rapid cognition.

That leads into an issue that Blink wrestles with but that I’d like to hear a little more about: All through Blink, there are examples of very smart people relying on their instincts and making very bad judgments. Can we know in advance when that’s going to happen—when an expert’s intuitions are to be trusted and when they’re likely to be faulty? And is rapid cognition, as a decision-making mechanism, more susceptible to problems of bias and prejudice than deliberate reasoning is? I think one of the reasons why we instinctively (as it were) put more trust in deliberative cognition is that we assume that the more time you spend on something, the more likely you are to recognize your mistakes and correct them. (“Always check your work,” we were counseled in math class.) Is this a founding myth of the Standard Model?

Best,
Jim