The World

The Problem with Death Tolls 

Pakistani office workers speak on their mobile phones on the street after an earthquake in Karachi on September 24, 2013. Photo by RIZWAN TABASSUM/AFP/Getty Images

Pakistani authorities say at least 45 people were killed by a 7.8 magnitude earthquake that struck the country’s southwestern Baluchistan province today. As my colleague Josh Voorhees writes, that number is likely to rise, and reporting by Gul Yusufzai of Reuters suggests the damage caused by the quake, which was so powerful it created a new island off the country’s southern coast and was felt as far away as New Delhi, is likely to be extensive:

Officials said scores of mud houses were destroyed by aftershocks in the thinly populated mountainous area near the quake epicenter in Baluchistan, a huge barren province of deserts and rugged mountains.

Abdul Qadoos, deputy speaker of the Baluchistan assembly, told Reuters that at least 30 percent of houses in the impoverished Awaran district had caved in.

The quake is a grim reminder that reporting on disasters like these is often dominated by death tolls, which are used as a kind of shorthand to diagnose the severity of an event. Putting aside the inherent problems involved in calculating death tolls and the geographical biases in disaster coverage, the number of dead can be a misleading way to think about the scale of a disaster like this one, particularly when it comes to decisions about funding for relief efforts. In a crowded international news environment, we may decide which disasters to pay attention to based on how many people were killed, but that’s not the best way to decide how much help is needed.

Ioannis Evangelidis and Bram Van den Bergh of Erasmus University discuss this problem in a recent paper for Psychological Science:

In December 2003, the Bam earthquake in Iran killed 26,796 people and affected 267,628 more. Private individuals responded by donating $10.7 million. In January 2000, the Yunnan earthquake in China killed 7 people and affected 1.8 million more. Donors contributed only $94,586. These observations suggest that donors may be sensitive to the number of fatalities and much less so to the number of persons affected. 

Looking at relief data from 381 disasters between 2000 and 2011, they found that “more than $9,000 was donated for each additional person killed in a disaster” but there was no significant correlation between the number of people affected and the donation amount.

In several independent studies, they also found that volunteers were likely to suggest more money should be donated to hypothetical disasters with higher casualties, rather than higher numbers of people in need. (Caveat: The study relies partially on test subjects recruited via Amazon’s Mechanical Turk, a method some scholars find problematic.)

There is some good news in the study. The authors found that subjects were willing to give more to a disaster when the numbers became more specific, e.g., 4,000 people “left homeless” as opposed to “affected” or “in need,” suggesting that donors may consider such numbers more reliable.

But overall, the research suggests that biases and misleading heuristics on the part of donors could be keeping money from reaching the people who need it. (I suspect a similar dynamic may come into play for the allocation of news coverage.)

Obviously this is just the tip of the iceberg when it comes to inefficiencies in disaster relief, but it’s still important to remember that while death may be shocking, it’s the living who need help.