This is a time-honored technique for many epidemiological studies, but those conducting them have to take great care that the way they select the neighborhoods is truly random (which, as poll-takers of any sort know, is difficult under the easiest of circumstances). There's a further complication when studying the results of war, especially a war fought mainly by precision bombs dropped from the air: The damage is not randomly distributed; it's very heavily concentrated in a few areas.
The Johns Hopkins team had to confront this problem. One of the 33 clusters they selected happened to be in Fallujah, one of the most heavily bombed and shelled cities in all Iraq. Was it legitimate to extrapolate from a sample that included such an extreme case? More awkward yet, it turned out that two-thirds of all the violent deaths the team recorded took place in the Fallujah cluster. They settled the dilemma by issuing two sets of figures—one with Fallujah, the other without. The estimate of 98,000 deaths is the extrapolation from the set that does not include Fallujah. What's the extrapolation for the set that does include Fallujah? They don't exactly say. Fallujah was nearly unique; it's impossible to figure out how to extrapolate from it. A question does arise, though: Is this difficulty a result of some peculiarity about the fighting in Fallujah? Or is it a result of some peculiarity in the survey's methodology?
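To see why a single extreme cluster matters so much, consider a back-of-the-envelope sketch of how a cluster sample gets scaled up to a national figure. The numbers below are invented for illustration and are not the survey's data or its actual statistical method; the point is only that one outlier cluster can swing the extrapolated total severalfold.

```python
# A minimal sketch of cluster-sample extrapolation with made-up numbers,
# not the Johns Hopkins team's actual data or estimation procedure.

# Hypothetical violent-death rates per 1,000 person-years in each sampled cluster.
cluster_rates = [0.2, 0.0, 0.5, 0.3, 0.1, 12.0]  # the last entry plays the "Fallujah" role

population = 24_000_000   # rough prewar Iraqi population (illustrative)
years_observed = 1.5      # illustrative observation window

def extrapolate(rates):
    """Scale the average sampled rate up to a national death estimate."""
    avg_rate_per_1000 = sum(rates) / len(rates)
    return avg_rate_per_1000 / 1000 * population * years_observed

print(f"With the outlier cluster:    {extrapolate(cluster_rates):,.0f} deaths")
print(f"Without the outlier cluster: {extrapolate(cluster_rates[:-1]):,.0f} deaths")
```

With these invented figures, keeping or dropping the one extreme cluster changes the national estimate by roughly a factor of 10, which is the dilemma the researchers faced.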
There were other problems. The survey team simply could not visit some of the randomly chosen clusters; the roads were blocked off, in some cases by coalition checkpoints. So the team picked other, more accessible areas that had received similar amounts of damage. But it's unclear how they made this calculation. In any case, the detour destroyed the survey's randomness; the results are inherently tainted. In other cases, the team didn't find enough people in a cluster to interview, so they expanded the survey to an adjoining cluster. Again, at that point, the survey was no longer random, and so the results are suspect.
Beth Osborne Daponte, senior research scholar at Yale University's Institution for Social and Policy Studies, put the point diplomatically after reading the Lancet article this morning and discussing it with me in a phone conversation: "It attests to the difficulty of doing this sort of survey work during a war. … No one can come up with any credible estimates yet, at least not through the sorts of methods used here."
The study, though, does have a fundamental flaw that has nothing to do with the limits imposed by wartime—and this flaw suggests that, within the study's wide range of possible casualty estimates, the real number tends more toward the lower end of the scale. In order to gauge the risk of death brought on by the war, the researchers first had to measure the risk of death in Iraq before the war. Based on their survey of how many people in the sampled households died before the war, they calculated that the mortality rate in prewar Iraq was 5 deaths per 1,000 people per year. The mortality rate after the war started—not including Fallujah—was 7.9 deaths per 1,000 people per year. In short, the risk of death in Iraq since the war is 58 percent higher (7.9 divided by 5 = 1.58) than it was before the war.
But there are two problems with this calculation. First, Daponte (who has studied Iraqi population figures for many years) questions the finding that prewar mortality was 5 deaths per 1,000. According to quite comprehensive data collected by the United Nations, Iraq's mortality rate from 1980-85 was 8.1 per 1,000. From 1985-90, the years leading up to the 1991 Gulf War, the rate declined to 6.8 per 1,000. After '91, the numbers are murkier, but clearly they went up. Whatever they were in 2002, they were almost certainly higher than 5 per 1,000. In other words, the wartime mortality rate—if it is 7.9 per 1,000—probably does not exceed the peacetime rate by as much as the Johns Hopkins team assumes.
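To make the baseline's importance concrete, the sketch below redoes the study's ratio with the two candidate prewar rates. Only the 5, 7.9, and 6.8 deaths per 1,000 figures come from the discussion above; everything else is illustrative.

```python
# Relative risk is simply the ratio of the postwar to the prewar mortality rate.
prewar_rate = 5.0    # deaths per 1,000 per year, the study's prewar estimate
postwar_rate = 7.9   # deaths per 1,000 per year, excluding Fallujah

increase = (postwar_rate / prewar_rate - 1) * 100
print(f"Using the study's baseline of 5.0: {increase:.0f}% higher")   # 58% higher

# If the prewar baseline was closer to the U.N.'s late-1980s figure of 6.8,
# the same postwar rate implies a much smaller jump.
increase_un = (postwar_rate / 6.8 - 1) * 100
print(f"Using a baseline of 6.8:           {increase_un:.0f}% higher")  # about 16% higher
```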
The second problem with the calculation goes back to the problem cited at the top of this article—the margin of error. Here is the relevant passage from the study: "The risk of death is 1.5-fold (1.1 – 2.3) higher after the invasion." Those mysterious numbers in the parentheses mean the authors are 95 percent confident that the risk of death now is between 1.1 and 2.3 times as high as it was before the invasion—in other words, as little as 10 percent higher or as much as 130 percent higher. Again, the math is too vague to be useful.
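Unpacking that parenthetical is just a matter of converting each bound of the confidence interval into a percentage increase. The sketch below uses only the 1.1, 1.5, and 2.3 figures quoted from the study.

```python
# Convert the study's relative-risk bounds into percentage increases in the death rate.
bounds = {"low end of 95% CI": 1.1, "point estimate": 1.5, "high end of 95% CI": 2.3}

for label, rr in bounds.items():
    print(f"{label:>18}: {rr:.1f} times the prewar rate = {(rr - 1) * 100:.0f}% higher")
```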
There is one group out there counting civilian casualties in a way that's tangible, specific, and very useful—a team of mainly British researchers, led by Hamit Dardagan and John Sloboda, called Iraq Body Count. They have kept a running total of civilian deaths, derived entirely from press reports. Their count is triple fact-checked; their database is itemized and fastidiously sourced; and they take great pains to separate civilian from combatant casualties (for instance, last Tuesday, the group released a report estimating that, of the 800 Iraqis killed in last April's siege of Fallujah, 572 to 616 were civilians, at least 308 of them women and children).
The IBC estimates that between 14,181 and 16,312 Iraqi civilians have died as a result of the war—about half of them since the battlefield phase of the war ended last May. The group also notes that these figures are probably on the low side, since some deaths must have taken place outside the media's purview.
So, let's call it 15,000 or—allowing for deaths that the press didn't report—20,000 or 25,000, maybe 30,000 Iraqi civilians killed in a pre-emptive war waged (according to the latest rationale) on their behalf. That's a number more solidly rooted in reality than the Hopkins figure—and, given that fact, no less shocking.