Blink and The Wisdom of Crowds

The Biases and Delusions of Experts
New books dissected over email.
Jan. 11, 2005, 8:31 AM

Jim,

I cannot tell you how delighted I was to see you use the phrase "The Standard Model" to describe deliberate cognition. For some (largely childish) reason, I love giving ideas dorky labels and then hoping—praying—that someone else picks them up.

But on to your questions. Yes, experts aren't nearly as expert as they think they are. So, can we know beforehand when experts are likely to go astray? And is the Standard Model more bias-proof than Rapid Cognition? Maybe the best way to answer that is to tell a story from my book—the story of what happened in the classical music world when orchestras started using screens in auditions. Prior to the 1980s, auditions for top orchestras were open—that is, the committee sat and watched one musician after another come in and play. Under this system, the overwhelming majority of musicians hired by top orchestras were men—but no one thought much of this. It was simply assumed that men were better musicians. After all, what could be fairer than an open audition? And weren't the members of audition committees, "experts" in their field, capable of discerning good musicians from bad?

But then, for a number of reasons, orchestras in the 1980s started putting up screens in audition rooms, so that the committee could no longer see the person auditioning. And immediately—immediately!—orchestras started hiring women left and right. In fact, since the advent of screens, women have won the majority of auditions for top orchestras, meaning that now, if anything, the auditioning process supports the conclusion that women are better classical musicians than men. Clearly what was happening before was that, in ways no one quite realized, the act of seeing a given musician play was impairing the listener's ability to actually hear what a musician was playing. People's feelings about women, as a group, were interfering with their ability to evaluate music.

I like this example—and I spent a chapter on it in Blink—because an audition is a classic example of Rapid Cognition. If you talk to people who sit on audition committees, they'll tell you that they decide whether they like a musician in the first few seconds, sometimes even while the musician is warming up! So, what can we learn from this story? First, as you put it, it's proof that expert judgment is poorly calibrated. Until their biases were demonstrated to them by the screen, there wasn't a maestro in the world who thought his judgment of someone's musical ability was affected by that person's gender. In fact, I tell a hilarious story in which the maestro of the Munich Philharmonic listened to someone playing the French horn, shouted out "We want him!" and then, when a woman stepped out from behind the screen, nearly had a heart attack.

Point No. 2: Was the extent to which "seeing" biases "hearing" predictable? Actually, yes. I think it's fair to say that any information that is not central to the outcome of a decision represents an inherent bias. I tell numerous stories about this in the book. Rapid Cognition is, by definition, a frugal judgment, and it works best in situations where all extraneous data are removed. It's nice and comforting and familiar to see the person whom you are deciding to hire for a position in an orchestra. But it isn't necessary, is it? We have lots of evidence, similarly, that doctors treat heart disease very differently in white men than in women or African-Americans. Is it necessary to know someone's medical history and blood pressure and see their EKG and check for fluid in their lungs? Absolutely. But is it necessary to know whether the patient is white or black? Not at all. Doctors, in some cases, might be better "experts" in heart disease if they treated their patients behind the equivalent of a screen—if that source of bias (skin color) were simply removed from the equation. I was recently horrified to learn that selective colleges in this country (I grew up in Canada) sometimes require applicants to send in photographs of themselves. Can anyone give me any possible justification for that? I understand that you might want to know if an applicant was a member of an ethnic minority. But you don't need a picture for that. A picture simply opens the door for a whole heap of trouble. Not long ago, I was teaching a freshman seminar at an Ivy League college, and I looked around the room and everyone was good-looking, and I thought to myself: This is what happens when you muddy the waters of supposedly expert decision-making. You are trying to pick the best students. But you end up picking the prettiest best students (which is not the same thing).

In other words, part of what it means to protect judgments against corruption is to insist on at least some discipline in the gathering of information—and if there is anything that characterizes contemporary decision-making, it is, I think, an utter lack of discipline. So, to your second point: Is the Standard Model somehow better insulated against this problem? I don't think so, for the simple reason that it's very hard—even within the most formal decision-making structures—to separate out the influence of unconscious bias. All those audition committees in the pre-screen era, for example, made snap judgments about who they liked. But then, when all the applicants had played, they sat down and compared notes, and discussed pros and cons, and pored over résumés, and deliberately and formally reviewed the case for each musician—and they still chose only men. My survey of Fortune 500 CEOs, as you mentioned, revealed that, with very few exceptions, they are tall. Are CEOs chosen whimsically? Not at all. Committees spend weeks and months in deliberation. But at the end of the day they still end up overwhelmingly picking tall men. Deliberation makes us more confident in our decision. But I'm not sure it makes the decision itself more accurate or more free of bias.

So, I suppose I share with you a general skepticism about the "expert"—in the sense that I think that many of the trappings of expertise are exercises in self-delusion. And moving toward collective decision-making—as you propose—is certainly one way of trying to winnow out the delusions. But I guess my final point in this round would simply be that there are good ways of fixing the individual decision-maker as well. We can put up the equivalent of screens. We can find ways of editing out nonessential information.

Best,
Malcolm

James Surowiecki, a former Slate columnist, writes the "Financial Page" column for The New Yorker. Malcolm Gladwell is a writer for The New Yorker.