What John Tierney Gets Wrong About Women Scientists
Understanding a new study about discrimination.
In a much e-mailed New York Times article last week on liberal bias among psychologists, science columnist John Tierney suggested that a "taboo on discussing sex differences" has prevented frank discourse about the real reason why the ratio of male to female scientists is so skewed. He went on to cite a new paper by Stephen Ceci and Wendy Williams in the Proceedings of the National Academy of Sciences that, he claimed, contradicts the "assumption that female scientists [face] discrimination and various forms of unconscious bias." But, in fact, the paper's authors make a narrower argument, and some of the evidence they present suggests that female scientists almost certainly do face discrimination and various forms of unconscious bias.
Here's what Ceci and Williams show: that women with the same resources as men are just as likely to get their papers, grants, and job applications accepted. While this might appear to mean that women scientists don't face discrimination, it is in fact quite compatible with the strong experimental evidence that there is bias against women.
In order to understand why, we need to revisit some basic facts about the scientific method. The best scientific way to discover whether one factor influences another is to do a controlled experiment. For example, you can give people two identical résumés to evaluate, one with a woman's name and one with a man's name. If people rank the one with a man's name higher than the identical one with a woman's name, you know that they are discriminating on the basis of sex, and nothing else, since you've experimentally controlled all the other factors. These experiments, and others like them, have been done. They are described in the PNAS article and the results are clear. Even in fields that are traditionally considered friendly to women, such as psychology and sociology, a woman's name leads to a lower ranking. As Ceci and Williams say, it is extremely unlikely that this bias is limited to the specific fields that were studied in these experiments. If you want to answer the scientific question of whether there is unconscious bias and discrimination against women, these experimental studies are the gold standard.
But there is another, trickier question to ask. How does this kind of discrimination actually influence the success of women scientists? That's much harder to determine, because you can't experimentally control all the other factors that shape a person's career. Instead of doing an experiment, the best you can do is to analyze the correlations between different factors, and that's much more problematic.
Ceci and Williams try to answer this question by analyzing the correlational data, and they come to an interesting and important conclusion. You might think that the bias that shows up in the résumé experiments would also show up directly in the correlational data—that journals, granting agencies, and academic departments would simply reject women at higher rates, and that this would lead women to be less successful. Ceci and Williams show that while this may have been true in the past, nowadays the relationship among gender, bias, and success is more complicated and indirect. In particular, they argue that women fail today primarily because of the resources that are available to them and the choices they make (or are led or forced to make) early in their careers, rather than because of the way they are judged later on.
Correlational analyses are tricky, however. To start out, you might ask whether there is a correlation between sex and scientific success. In fact, there is: Overall, women are less likely to be successful scientists than men.
But the difficulty, as every first-year statistics course will tell you, is that correlation does not imply causation. Nicotine-stained fingers are correlated with lung cancer—people with yellow fingers are more likely to have cancer—but yellow fingers don't actually cause cancer. Also, just because you don't find a correlation between two factors, you can't conclude that there is no causal relation between them—it's possible that two causal factors cancel each other out. For example, you might fail to find a correlation between cholesterol and atherosclerosis because you lumped together two different kinds of cholesterol: LDL, which increases the problem, and HDL, which decreases it.
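The canceling-causes point is easy to see in a quick simulation. This is only a toy sketch: the variable names echo the cholesterol example, but every number and coefficient here is invented for illustration, not taken from any medical data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two cholesterol fractions with opposite effects (all numbers invented).
ldl = rng.normal(100, 15, n)  # raises the risk in this toy model
hdl = rng.normal(50, 15, n)   # lowers the risk in this toy model

# Risk goes up with LDL and down with HDL, plus unrelated noise.
risk = ldl - hdl + rng.normal(0, 10, n)

# "Lumping together" the two kinds into a single measurement.
total = ldl + hdl

print(np.corrcoef(ldl, risk)[0, 1])    # clearly positive
print(np.corrcoef(hdl, risk)[0, 1])    # clearly negative
print(np.corrcoef(total, risk)[0, 1])  # near zero: the two effects cancel
```

Each component is strongly correlated with the outcome, yet the lumped measurement shows essentially no correlation at all, which is exactly how a real causal relationship can hide in correlational data.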
One approach to these problems is to try to untangle confounding causes using various statistical methods. But this approach is also complicated. For example, suppose you discover that there is a correlation between poverty and ill health, but this correlation disappears when you factor in health care and nutrition. The few poor people with high-quality health care and nutrition are as healthy as rich people—it's just that hardly any poor people have these advantages. It would be wrong to conclude from this that poverty has no causal influence on health. The right conclusion would be that poverty causes bad health care and poor nutrition, which cause ill health.
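The poverty-and-health scenario can be simulated the same way. In this toy model (every probability is invented), poverty affects health only through access to care and nutrition, so the raw correlation is strong but disappears once you condition on resources:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy model (all probabilities invented): poverty influences health only
# through access to high-quality health care and nutrition.
poor = rng.random(n) < 0.3
# Few poor people get good resources; most rich people do.
good_resources = np.where(poor, rng.random(n) < 0.1, rng.random(n) < 0.9)
# Health depends on resources alone, not directly on poverty.
health = good_resources.astype(float) + rng.normal(0, 0.2, n)

# Raw correlation: poverty strongly predicts worse health.
print(np.corrcoef(poor, health)[0, 1])  # strongly negative

# Condition on resources: within the well-resourced group, poor and
# rich people are equally healthy; the gap "disappears."
print(health[poor & good_resources].mean())
print(health[~poor & good_resources].mean())
```

The vanishing correlation doesn't mean poverty is causally irrelevant; in this model, poverty is precisely what determines who gets the resources in the first place. The same logic applies to Ceci and Williams's resource-based account of women's careers.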