War Stories

Number Crunching

Taking another look at the Lancet’s Iraq study.

Let us take another look at the Lancet study. This is the report, issued by a team from Johns Hopkins University and published in the current issue of the British medical journal the Lancet, estimating that 655,000 Iraqis have died as a consequence of the U.S.-led invasion. It’s a shocking number. Is it true?

Initially, I decided to stay out of this controversy. I’d written the first critique of an earlier Lancet/Hopkins study, which estimated that 100,000 Iraqis had died in just the first year of the war. The study’s sample was too small, the data-gathering too slipshod, the range of uncertainty so wide as to render the estimate useless.

The new study looked better: a larger sample, more fastidious attention to data-gathering procedures, a narrower range of uncertainty. The number—655,000 deaths—seemed improbably high (that’s an average of more than 15,000 deaths a month since the war began), but so have a lot of savage statistics that turned out to be true; and there are many areas of Iraq these days where reporters and human rights groups dare not roam.

However, the more I read the study and the more I talked with statisticians, the flimsier this number appeared. The study might be as good an effort as anyone can manage in wartime. Certainly, the Iraqis who went door to door conducting the surveys are amazingly brave souls. But the study has two major flaws—the upshot of which is that it’s impossible to infer anything meaningful from it, except that a lot of Iraqis have died and the number is getting higher.

This point should be emphasized. Let’s say that the study is way off, off by a factor of 10 or five—in other words, that the right number isn’t 655,000 but something between 65,500 and 131,000. That is still a ghastly number—a number that, apart from all other considerations, renders this war a monumental mistake. Here’s the key question: Had it been known ahead of time that invading Iraq would result in the deaths of 100,000 Iraqis (or 50,000, or pick your own threshold number), would the president have made—would Congress have voted to authorize, would any editorial writer or public figure have endorsed—a decision to go to war?

Here lies the danger of studies that overstate a war’s death toll. The war’s supporters and apologists latch on to the inevitable debunkings and proclaim that really “only 100,000” or “only 200,000” people have died. It’s obscene—it sullies and coarsens the political culture—to place the word “only” in front of such numbers.

So, let’s look at this study’s numbers and why they’re almost certainly overstated.

The researchers reached this conclusion through a common technique known as “cluster sampling.” They randomly selected 47 neighborhoods in 18 of Iraq’s regions. Within those neighborhoods, they visited a total of 1,849 households, comprising 12,801 residents, and asked how many of their members had died before the invasion and since the invasion. The researchers then extrapolated from this sample to the entire Iraqi population of 27 million people—from which they concluded that since the war there have been about 655,000 “excess deaths,” of which 601,000 were caused by violence.
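To make the mechanics concrete, here is a minimal sketch of that two-stage cluster design, in Python. The sampling frame and counts below are invented for illustration; the actual study stratified clusters by governorate and weighted them by population.

    import random

    # A simplified two-stage cluster sample (illustrative only; the real
    # study stratified by governorate and weighted clusters by population).
    random.seed(1)

    # Stage 1: randomly select neighborhoods (clusters) from a frame.
    frame = [f"neighborhood_{i}" for i in range(1000)]  # hypothetical frame
    clusters = random.sample(frame, 47)                 # 47 clusters, as in the study

    # Stage 2: survey a run of adjacent households within each cluster.
    households_per_cluster = 1849 // 47                 # about 39 per cluster
    print(clusters[:3], households_per_cluster)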

This methodology is entirely proper as long as the sample is truly representative of the entire population—that is, as long as those households were genuinely selected at random. If they were not—if some bias crept into the sampling, even unintentionally—then it is improper, and wildly misleading, to extrapolate the findings to the population as a whole.

There are two reasons to suspect that the sample was not random, and one of those reasons suggests that the sample was biased in a way that exaggerates the death toll.

First, the Lancet study, like all such studies, estimates not how many people have died, but rather the difference between how many people died in a comparable period before the invasion and how many people have died since the invasion. As the study puts it, 655,000 is roughly the number of deaths “above the number that would be expected in a non-conflict situation.”

In any such study, it’s crucial that the baseline number—deaths before the invasion—be correct. The Lancet study’s baseline number is dubious.

Based on the household surveys, the report estimates that, just before the war, Iraq’s mortality rate was 5.5 per 1,000. (That is, for every 1,000 people, 5.5 die each year.) The results also show that, in the three and a half years since the war began, this rate has shot up to 13.3 per 1,000. So, the “excess deaths” amount to 7.8 (13.3 minus 5.5) per 1,000 per year. The researchers extrapolate from this figure to reach their estimate of 655,000 deaths.
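As a back-of-the-envelope check (my arithmetic, not the study’s person-year calculation), the extrapolation is easy to reproduce from these published rates:

    # Reproducing the extrapolation from the published rates (a crude
    # approximation; the study worked from observed person-years).
    pre_rate   = 5.5 / 1000    # deaths per person per year, study's baseline
    post_rate  = 13.3 / 1000   # deaths per person per year, post-invasion
    population = 27_000_000    # approximate Iraqi population

    excess_per_year = (post_rate - pre_rate) * population
    print(f"{excess_per_year:,.0f}")           # ~210,600 excess deaths a year
    print(f"{655_000 / excess_per_year:.1f}")  # ~3.1 years to reach 655,000

The implied exposure of roughly 3.1 years sits a bit under the full three and a half years since the invasion, which is plausible, since the study’s denominator is the person-time its households were actually observed over.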

However, according to data from the United Nations, based on surveys taken at the time, Iraq’s preinvasion mortality rate was 10 per 1,000. The difference between 13.3 and 10.0 is only 3.3, less than half of 7.8.
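For a sense of scale, here is the naive recalculation that swapping in the U.N. baseline would suggest. As the next paragraph explains, this plug-in isn’t actually legitimate; it is shown only to convey the size of the discrepancy.

    # Naive plug-in of the U.N. baseline (10 per 1,000). This is exactly
    # the move the text goes on to warn against; shown only for scale.
    lancet_excess = 13.3 - 5.5    # 7.8 per 1,000 per year
    un_excess     = 13.3 - 10.0   # 3.3 per 1,000 per year
    naive_revised = 655_000 * (un_excess / lancet_excess)
    print(f"{naive_revised:,.0f}")  # ~277,000, indeed less than half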

Does that mean that the post-invasion death toll is less than half of 655,000? Not necessarily. You can’t just take the data from one survey and plug them into another. Maybe the Hopkins survey understated post-invasion deaths by as much as it understated preinvasion deaths—in which case, the net effect on the excess is nil. Maybe not. Either way, it should have been clear to the data-crunchers that something was wrong: the preinvasion and post-invasion numbers were derived from the same survey, so doubt about one casts doubt on the other.

“When you get these large discrepancies between your own results and results that are already well-established, you recrunch your numbers or you send your survey team back into the field to widen your sample,” Beth Osborne Daponte, a demographer at Yale University who has worked on many studies of this sort, told me in a phone interview. “Obviously, they couldn’t do that here. It’s too dangerous. But that doesn’t change the point. You need to triangulate your data”—to make sure they match other data, or, if they don’t, to figure out why. “They didn’t do that.”

(If the Hopkins researchers want to claim that their estimate is more reliable than the United Nations’, they will have to prove the point. It is also noteworthy that, if Iraq’s preinvasion mortality rate really was 5.5 per 1,000, it was lower than that of almost every country in the Middle East, and many countries in Western Europe.)

This flaw—or discrepancy—doesn’t tell you whether 655,000 is too high, too low, or (serendipitously) just right. It just tells you that something about the number is almost certainly off.

The second flaw, however, suggests that the number is too high.

The second flaw was noticed by a joint research team led by physicists Sean Gourley and Neil Johnson of Oxford University and economist Michael Spagat of Royal Holloway, University of London. In a statement released Thursday (and reported in today’s issue of the journal Science), they charged that the Lancet study is “fundamentally flawed”—and in a way that systematically overstates the death toll.

The Lancet study, in its section on methodology, notes that the teams picked the houses they would survey from a “random selection of main streets,” defined as “major commercial streets and avenues.” They also chose from a “list of residential streets crossing” those main streets.

The Oxford-Holloway team calls this effect “main street bias.” They add:

Main street bias inflates casualty rates since conflict events such as car bombs, drive-by shootings, artillery strikes on insurgent positions, and marketplace explosions gravitate toward the same neighborhood types that the [Lancet] researchers surveyed. … In short, the closer you are to a main road, the more likely you are to die in violent activity. So if researchers only count people living close to a main road, then it comes as no surprise they will over-count the dead.

Whether the Hopkins researchers were aware of this flaw, or of its importance, is unclear. An exchange of e-mails with Gilbert Burnham, the study’s chief researcher, raises some disturbing questions about this matter.

It’s understandable why the surveyors limited their work to the main roads; they were in strange and dangerous places. But that doesn’t negate the Oxford-Holloway team’s point. By this measure alone, the Lancet study is not a random survey. In statistically proper random surveys, each household has the same probability of being chosen. Yet in the Lancet survey, if a household wasn’t on or near a main road, it had zero chance of being chosen. And “cluster samples” cannot be seen as representative of the entire population unless they are chosen randomly.
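A small simulation shows how much this can matter. The risk numbers below are invented (no one knows the real gradient between main streets and back streets), but they illustrate the mechanism: when violence concentrates near main roads and the sample does too, the estimated death rate inflates.

    import random

    # A toy model of main street bias (invented numbers, my illustration,
    # not the Oxford-Holloway team's model). Households near main streets
    # face a higher violent-death risk; sampling only them inflates the rate.
    random.seed(0)
    N = 100_000
    households = []
    for _ in range(N):
        near_main = random.random() < 0.3    # say 30% live near a main street
        risk = 0.02 if near_main else 0.005  # hypothetical death risks
        households.append((near_main, random.random() < risk))

    true_rate = sum(died for _, died in households) / N
    near_only = [died for near, died in households if near]
    biased_rate = sum(near_only) / len(near_only)

    print(f"whole population: {true_rate:.4f}")    # ~0.0095
    print(f"main-street only: {biased_rate:.4f}")  # ~0.0200, roughly double

Under these made-up numbers, the main-street-only sample roughly doubles the estimated rate. The direction of the bias, not its exact size, is the point.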

The Iraq war is a catastrophe in political, military, and—not least—human terms. How much so may be unfathomable as long as the streets of Iraq are still dangerous. In any event, it’s a question that the Lancet study doesn’t really answer.