Cross-Pollination

Random Acts

What happens when you approach global poverty as a science experiment?

Randomistas, proponents of randomized controlled trials, have recently been transforming the way we think about economic development and aid to poor countries.


If you want to help people start a business, it’s better to give them a loan than a handout. If you want to prevent the spread of HIV and unwanted pregnancies, teach people about safe sex and make condoms widely available. If you want students to learn, put them in smaller classes with new textbooks.

These are all reasonable assumptions based on what we know about economics and human nature. But none stand up—or at least, none are as simple as they seem—when subjected to the kind of randomized controlled trials that have recently been transforming the way we think about economic development and aid to poor countries. Is randomization as revolutionary as its proponents claim? Or is it just another illusory fix to the intractable problem of global poverty?

The idea behind randomized controlled trials should be familiar to any high school science student. In order to test the effect of a variable on a given subject, you need a control group where the variable is not present. In a medical trial, this could mean giving a new drug to one group of patients and a placebo to another. In development, it could mean giving loans and business training to the poorest women in one group of villages in a given area but not in another, then studying what happens in both groups over the following months.
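For concreteness, here is a minimal sketch of that logic in code. None of it comes from the studies discussed in this article; the number of villages, the income figures, and the size of the effect are all invented for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical setup: 100 villages, half randomly assigned to receive an
# intervention (say, loans plus business training), half serving as the
# control group. Every number here is invented for illustration.
villages = list(range(100))
random.shuffle(villages)
treatment, control = villages[:50], villages[50:]

def household_income(treated):
    # Baseline income varies from village to village; the assumed
    # intervention effect is a modest bump for treated villages.
    baseline = random.gauss(100, 15)
    return baseline + (5 if treated else 0)

treated_outcomes = [household_income(True) for _ in treatment]
control_outcomes = [household_income(False) for _ in control]

# Because assignment was random, the two groups differ only by chance
# and by the intervention, so the gap in means estimates its average effect.
gap = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated effect: {gap:.1f}")
```

In a real study the outcomes would be measured in the field rather than simulated, but the comparison at the end, treated mean versus control mean, is the core of the method.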

Nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—“randomistas,” as they’re sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they’re helping, or even not hurting.

Economist Michael Kremer, now the Gates Professor of Developing Societies at Harvard University, carried out one of the earliest randomized development experiments to gain widespread attention, in western Kenya in the early 1990s. Schools in the area had a chronic shortage of textbooks; there was a consensus that more were needed to improve educational outcomes. But when 25 schools out of 100 were randomly chosen to receive new textbooks, they showed little change in average test scores—the only students whose performance did seem to improve were those already at the top of their classes.

The problem likely had to do with language: School is taught in English in Kenya, but for most students it’s a third language behind Swahili and local languages. New textbooks might help those who already speak English well, but for the majority of students, they make little difference without new ways of teaching. Another study by Kremer and Edward Miguel found that providing students in Kenyan schools with deworming medicine not only improved health outcomes but also decreased rates of absenteeism by one quarter. 

The gospel of randomized controlled trials, or RCTs, has spread significantly since Kremer’s early studies. “I’m not suggesting that every organization in every situation should be doing this,” Kremer told Slate. “The vast majority of development projects are not subject to any evaluation of this type, but I’d argue the number should at least be greater than it is now.”

According to Abhijit Banerjee, the Ford Foundation International Professor of Economics at MIT, part of the reason NGOs and governments have been slow to adopt RCTs is practical: They’re usually slow and expensive to carry out, particularly when resources are scarce and humanitarian need is pressing. The idea of conducting “experiments” on the poor also makes people nervous, for obvious reasons. “When you use the word experiment, people have a vision that these are rats in cages and you’re feeding them some strange chemicals and seeing what happens,” Banerjee told Slate. “That perception has changed.”

“Every development program that’s being done is an experiment,” says Chris Blattman, a political scientist at Columbia University and author of a popular blog on international development. “The funny thing about most aid programs is how nontransparent they are.”

If Kremer was the pioneer of this type of research, Banerjee and his MIT colleague Esther Duflo, co-founders of the Abdul Latif Jameel Poverty Action Lab, have been its most effective evangelists. Their co-authored 2011 book Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty helped introduce the idea of RCTs to the wider public; Duflo has delivered a popular TED talk on the subject and has been profiled by The New Yorker.

Some of the examples in Poor Economics seem perfectly tailored for the Malcolm Gladwell–Freakonomics era of counterintuitive pop social science. One RCT carried out in Kenya by Duflo, Kremer, and Stanford University’s Pascaline Dupas found that providing teenage girls with school uniforms reduced rates of teen pregnancy to a greater extent than sex-ed programs. Experts had assumed that a lack of knowledge was the reason girls were having unprotected sex. But often girls with limited opportunities were choosing to get pregnant; giving them uniforms made it easier for them to stay in school instead.

The RCT scalpel has also targeted some of international development’s sacred cows. Take microfinance: the idea of giving small, low-interest loans to the world’s poorest people—particularly women—as a means of alleviating extreme poverty. Microfinance earned Bangladeshi economist Muhammad Yunus and his Grameen Bank the Nobel Peace Prize in 2006. But the randomistas have not been impressed; studies in India, Mongolia, Morocco, and the Philippines show that while such loans can help small businesses start up, they do little to reduce long-term poverty or to empower women. Similarly, a randomized study from Peru found that the much-touted “one laptop per child” program, which gave cheaply produced laptops to schoolchildren in poor countries, did little to improve students’ academic performance, as the kids were mostly using the machines for nonschool activities.

In 2011, Kenyan economist Bernadette Wanjala conducted a covert evaluation of Millennium Villages, the showcase communities provided with health, agriculture, and educational aid as part of an anti-poverty effort spearheaded by Columbia economist Jeffrey Sachs. When Wanjala and her collaborators surveyed households that had and had not received Millennium Village aid packages, they found that the aid did increase agricultural yields but also “caused less diversification of household economic activity into profitable non-farm employment, tending to decrease household income.” In other words, people in the villages may have been eating better, but they weren’t getting richer.

The randomistas insist that they’re simply adding more rigor to the kind of natural experiments that happen in any intervention. But any time well-funded western research institutions start interfering in the lives of the world’s poor, ethical concerns are certain to follow.

“Creating a controlled environment to answer these questions via a randomized clinical trial may remove the context from the research, leaving us knowing the true answer without knowing what the right answer is,” physician Paul Farmer wrote in 2012. “Stripping away context, both local and translocal, creates the illusion of equipoise in a world riven by poverty and social disparities.” Angus Deaton, a Princeton University development economist who has emerged as one of the most persistent and outspoken RCT critics, argues that it’s almost impossible to hold conditions entirely constant. What’s more, an intervention that works in, say, Kenya could have a completely different impact in Bangladesh or Haiti.

But the problem of generalizability is hardly confined to RCTs. “Let’s say there was an anti-crime program that worked in Philadelphia, and we were trying to decide if we were going to do it in Boston,” Kremer says. “Of course you have to think about the differences between the two programs, and the two social situations. But you have to think about those things whether it’s a randomized evaluation or a before-and-after study.” While no program evaluation is perfect, RCTs at least do more to eliminate potential confounding variables.
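A toy simulation, with all numbers invented, shows why. In the sketch below, a region-wide trend (imagine a good harvest year) inflates a before-and-after estimate, while a randomized comparison taken within a single year nets the trend out.

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is reproducible

# Invented numbers: a region-wide shock (a good harvest year, say) lifts
# everyone's outcome by 8 units, while the program itself adds only 2.
TREND, TRUE_EFFECT = 8.0, 2.0

def outcome(year, treated):
    effect = TRUE_EFFECT if treated and year == 1 else 0.0
    return random.gauss(50, 5) + TREND * year + effect

n = 500
# Before-and-after study: the same treated group, measured in year 0
# (before the program) and year 1 (after). The trend gets bundled in.
before = [outcome(0, True) for _ in range(n)]
after = [outcome(1, True) for _ in range(n)]
print(f"Before/after estimate: {statistics.mean(after) - statistics.mean(before):.1f}")  # ~10

# Randomized comparison: treated and control measured in the same year,
# so the region-wide trend cancels out of the difference in means.
treated = [outcome(1, True) for _ in range(n)]
control = [outcome(1, False) for _ in range(n)]
print(f"Randomized estimate: {statistics.mean(treated) - statistics.mean(control):.1f}")  # ~2
```

The before-and-after comparison reports roughly 10, five times the program’s true effect, because it cannot separate the program from the good year; the randomized comparison lands near 2.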

Part of the problem is that media and policymakers tend to overstate the conclusions of researchers conducting RCTs. Banerjee, for instance, feels that the case against microcredit can be overstated. “People often read it as saying that microcredit doesn’t work,” he said. “But the microcredit experiments basically show that if I take someone who has no business experience, [a loan] doesn’t help their income. That doesn’t mean that there aren’t experienced businesspeople who could be helped by a loan.”

Randomization may be good at evaluating small interventions at the level of one village, but if it’s really going to be useful in tackling poverty at a macro level, it needs to tell us something about national-level policies—things like taxation, trade policy, and distribution of natural resource profits.

Researchers are now beginning to apply RCTs to some previously untouched areas of public policy. One recent study, for instance, looked at how monitoring can reduce corruption in Indonesian road building projects. Another looked at how local elections impacted governance in Afghanistan. The success of studies like these may go a long way toward determining whether RCTs turn out to have an impact beyond academia.

Randomization has undoubtedly improved scholarship on development work, imbuing economic studies of interventions in the developing world with more methodological rigor. But whether it can actually contribute meaningfully to reductions in global poverty is another question. As the economist Lant Pritchett recently wrote, commemorating the 10th anniversary of the Poverty Action Lab, “The argument that RCTs would be more than a tiny component of the overall process of improving development outcomes seems, even now, 10 years in, at best not provable and at worst not very likely.”

That seems a bit pessimistic, but it’s a challenge the randomistas need to answer. For now, the data’s not quite in.