Science

Screen Test

Why we should start measuring bias.

“Everyone’s a little bit racist sometimes,” proclaims the Broadway musical Avenue Q. “Doesn’t mean we go/ Around committing hate crimes/ Look around and you will find/ No one’s really colorblind/ Maybe it’s a fact/ We all should face/ Everyone makes judgments/ Based on race.”

How do you test internal bias? You can try asking people, but since most of us don’t like to think of ourselves as biased, we won’t necessarily admit to it on a questionnaire, even anonymously. But there’s a test to detect the kind of bias people won’t admit to and may not even be aware of themselves—a test that works. The psychologists who devised it, however, are squeamish about real-world uses of it. They shouldn’t be. Though it shouldn’t be used as the basis for hiring decisions, the test has its place.

In 2003, Mahzarin Banaji, Anthony G. Greenwald, and Brian Nosek published a paper detailing an experimental methodology they had developed called the Implicit Association Test, or IAT. Rather than asking subjects what they thought about different races (or what they thought they thought), Banaji and her colleagues decided to time them as they paired words and images.

In the test’s most popular version, the Race IAT, subjects are shown a computer screen and asked to match positive words (love, wonderful, peace) or negative words (evil, horrible, failure) with faces of African-Americans or whites. Their responses are timed. If you tend to associate African-Americans with “bad” concepts, it will take you longer to group black faces with “good” concepts because you perceive them as incompatible. If you’re consistently quicker at connecting positive words with whites and slower at connecting positive words with blacks—or quicker at connecting negative words with blacks and slower at connecting negative words with whites—you have an implicit bias for white faces over those of African-Americans. In other words, the time it takes you to pair the faces and words yields an empirical measure of your attitudes.
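The scoring idea can be sketched in a few lines. The code below is a simplified illustration, not Banaji's actual scoring software: the function name and the sample latencies are invented, and the formula only loosely follows the spirit of the IAT's published scoring method, which compares mean response times on "compatible" and "incompatible" pairings and scales the difference by the variability of the responses.

```python
import statistics

def latency_bias_score(compatible_ms, incompatible_ms):
    """Toy latency-based bias score (hypothetical, for illustration).

    compatible_ms   -- response times (ms) when pairings match the
                       stereotype (e.g., white + good, black + bad)
    incompatible_ms -- response times (ms) on the reversed pairings

    A positive score means the subject was slower on the
    counter-stereotypic pairings, i.e., showed an implicit bias.
    The difference in mean latency is divided by the pooled
    standard deviation so scores are comparable across subjects
    with different overall speeds.
    """
    diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return diff / pooled_sd

# Invented example latencies in milliseconds
compatible = [620, 585, 640, 610, 595, 630]
incompatible = [810, 775, 840, 790, 825, 805]
print(round(latency_bias_score(compatible, incompatible), 2))
```

Because the score is built from reaction times rather than self-reports, there is nothing for the subject to misstate, which is the property the next paragraph describes.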

The elegance of Banaji’s test is that it doesn’t let you lie. What’s being measured is merely the speed of each response. You might hate the idea of having a bias against African-Americans, but if it takes you significantly longer to group black faces with good concepts, there’s no way you can hide it. You can’t pretend to connect words and images faster any more than a sprinter can pretend to run faster. And you won’t significantly change your score if you deliberately try to slow down your white = good and black = bad pairings.

Banaji, now a social psychologist at Harvard, has found that 88 percent of the white subjects who take her test show some bias against blacks. The majority of all subjects also test anti-gay, anti-elderly, and anti-Arab Muslim. Many people also exhibit bias against their own group: About half of blacks test anti-black; 36 percent of Arab Muslims test anti-Arab Muslim; and 38 percent of gays show an automatic preference for heterosexuals. (You can take the test yourself online; the results can be deeply humbling.)

The IAT, then, is an objective measure of bias. And research has shown that the test is powerfully predictive of behavior—as Banaji notes in refuting critics’ claims that the test measures not individual bias but awareness of bias within society. People with high racial bias scores are more likely to choose a white partner to work with and more willing to cut funding for minority student groups. They’re also more likely to judge minority suspects guilty in ambiguous situations and assign longer prison sentences to suspects with minority names.

Yet the test’s creators are extremely wary about unleashing the powerful tool they’ve created. Banaji has threatened to testify in court against efforts to use her test in real-world situations. Using the test to ferret out biased people, she argues, assumes that people who have high implicit bias scores will always behave in a biased way—which is not the case, since the tests don’t predict behavior with 100 percent accuracy. Banaji also points out that some highly motivated subjects may be able to beat the test by focusing on “counter-stereotypes,” for instance, by thinking about black heroes like Dr. Martin Luther King Jr. and Nelson Mandela just before taking the test.

Banaji is right: The test isn’t a perfect predictor, and it may be possible to beat it. Those are good reasons to limit the test’s uses. But they don’t justify never using it at all.

Consider juries. Since studies show that people with high bias scores judge minority defendants guilty more readily than white ones, people who test as highly biased against minorities shouldn’t serve on juries in cases involving minority defendants. It’s standard for judges to strike prospective jurors who exhibit clear prejudice against a defendant; at the federal corruption trial of former Atlanta Mayor Bill Campbell, one prospective juror was recently dismissed for writing on the questionnaire that he thought Campbell, who is African-American, should be “hung from the highest tree.” Other jurors, however, don’t volunteer their bias on questionnaires. Banaji’s test would tell us who they are. Sure, not everyone who tests high for bias will actually judge the case before them in a biased way. But given the high stakes for the defendant—and the relatively low ones for a prospective juror—isn’t it better to err on the side of keeping biased people out of the jury box?

A thornier question, though, is whether employers should use Banaji’s test. Here the stakes are high on both sides. In a lot of jobs (judge, police officer, welfare officer, hiring manager, among others), biased people can do real harm. On the other hand, if a test shows an applicant is biased, but you have no evidence that he has actually discriminated against anyone, would it really be fair not to hire him? This is where the distinction between implicit bias and actual discrimination becomes most important. Since the test does not perfectly predict behavior, the risk of a false positive here is real. If you screen somebody out of a job who would never have actually behaved in a discriminatory manner, you’ve done him wrong.

Using the implicit bias test for employment screening, then, goes too far (and it’s easy to imagine the legal challenges). But employers should be able to use the test to assess employees once they’ve been hired. Ideally, an employee’s individual result would be revealed only to him or her (employers could get aggregate reports so they could make better decisions about how to reduce bias in the workplace). One reason to encourage employers to give the test is that, as Berkeley psychologist Jack Glaser points out, just taking it may sometimes be enough to convince people they are prejudiced and should try to change. It’s called “unconsciousness raising”—if you know what your unconscious is doing, you may work to override it.