The new science of winning campaigns.
May 22, 2012, 12:37 PM

The Death of the Hunch

Campaigns used to guess which ads were most effective. Now they can prove it. How Obama’s embrace of empiricism could swing the 2012 race.

Experimenters then set out to identify the attributes that distinguished voters who had been moved by the negative message toward supporting Carnahan. Using census data, they learned that almost all the movement had come from voters in neighborhoods in the third socioeconomic quartile. Women living in precincts with an average annual household income between $37,500 and $45,000 had increased their support for Carnahan by more than 10 percentage points, while the other three quartiles barely moved at all. Other predictive characteristics were revealed as well: Those in areas with the densest populations moved most, as did those in the precincts with the highest concentration of single parents.
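In code, that kind of subgroup breakdown amounts to comparing support rates between treatment and control within each census-derived bucket. The Python below is a minimal, hypothetical sketch of the idea; the field names and numbers are invented for illustration, not EMILY's List's actual data or tooling.

```python
# Hypothetical sketch of the subgroup analysis described above: compare
# post-treatment support for Carnahan between voters who got the negative
# mail and those who did not, broken out by socioeconomic quartile.
from collections import defaultdict

# Each record: (socioeconomic quartile 1-4, got negative mail?, supports Carnahan?)
voters = [
    (3, True, True), (3, True, True), (3, False, False),
    (1, True, False), (2, True, True), (4, False, True),
    # ... in practice, thousands of rows from the post-test poll
]

tallies = defaultdict(lambda: {"treat": [0, 0], "control": [0, 0]})
for quartile, treated, supports in voters:
    arm = "treat" if treated else "control"
    tallies[quartile][arm][0] += int(supports)  # supporters
    tallies[quartile][arm][1] += 1              # respondents

for quartile in sorted(tallies):
    t_yes, t_n = tallies[quartile]["treat"]
    c_yes, c_n = tallies[quartile]["control"]
    if t_n and c_n:  # only compare where both arms have respondents
        lift = t_yes / t_n - c_yes / c_n
        print(f"Quartile {quartile}: lift of {lift:+.1%} from the negative mail")
```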

By Sept. 5, EMILY’s List not only knew that its negative mail would have more impact than the comparative material, but could begin trawling through a Missouri voter file to pluck the targets most likely to be persuaded by it: women in upper-middle-class towns crowded with single parents.

***

Four years ago, the Obama campaign used experimental methods to test nearly all of its online communications, randomizing the design of Web pages, the timing of text message blasts, and the language of email solicitations to measure their relative effectiveness. (Dan Siroker, who worked on online analytics for Obama in 2008 and now counts the re-election campaign as a client of his company Optimizely, describes the process known as A/B testing here.)
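The mechanics of such a test fit in a few lines. This is a hypothetical sketch in the spirit of what's described above, not the campaign's or Optimizely's actual code: each recipient is deterministically assigned to one of two email subject lines, and donation rates are compared. The subject lines and rates are invented.

```python
# A hypothetical A/B test: hash each recipient into one of two email
# variants, then compare donation rates between the two groups.
import hashlib
import random

VARIANTS = {"A": "Dinner with Barack", "B": "Deadline: midnight"}

def assign_variant(user_id: str) -> str:
    """Hash the user ID so the same person always sees the same variant."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Simulated sends; in a real test the outcome is whether the recipient donated.
rng = random.Random(42)
stats = {"A": [0, 0], "B": [0, 0]}  # variant -> [donations, sends]
for i in range(10_000):
    v = assign_variant(f"user{i}")
    donated = rng.random() < (0.021 if v == "A" else 0.017)  # invented rates
    stats[v][0] += int(donated)
    stats[v][1] += 1

for v, (wins, n) in stats.items():
    print(f'Variant {v} ("{VARIANTS[v]}"): {wins / n:.2%} of {n} sends donated')
```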

But that ethic never fully translated offline, where effects are much harder to measure than clicks are to tally. During the summer of 2008, Obama advisers had casual interactions with Analyst Institute officials and ultimately integrated many of the group’s best practices for get-out-the-vote tactics. The campaign briefly considered incorporating an experimental component into its otherwise robust data efforts, but the compressed period between the primaries and the general election offered little time to upend a national communications strategy for the sake of testing.

This campaign is a different story. The experimental ethic was embraced by campaign leadership at the outset of the re-election effort. The formal arrangement with the Analyst Institute, which appears (according to federal filings) to cover a $22,000 monthly retainer, marks the group’s most significant engagement ever with a candidate’s campaign. An institute analyst is now based at the Chicago headquarters.

The Obama campaign’s long reach and big budget should significantly expand the frontiers of experimental politics, which have been limited by a tax code that prevents academic and nonprofit researchers from disseminating partisan messages. A presidential campaign faces no such restriction, and political operatives familiar with testing methods believe it should be possible to randomize Obama’s messages not only by household (as in the EMILY’s List test) but by larger political units, like media markets or cable systems, to track the effects of mass media. (Rick Perry’s 2006 gubernatorial re-election campaign randomized its broadcast buys over a three-week period, but the goal of the project was to test the impact of advertising at different levels of intensity, not the effectiveness of specific messages.)
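Broadcast ads can't be randomized household by household, which is why operatives talk about randomizing larger units. A sketch of that cluster-randomized design might look like the following; the market names and seed are illustrative only.

```python
# A sketch of cluster randomization: instead of assigning individual
# households, assign whole media markets to treatment and control, so
# broadcast ads (which cannot target individuals) can still be tested.
import random

markets = ["Columbus", "Toledo", "Dayton", "Des Moines", "Cedar Rapids",
           "Tampa", "Orlando", "Richmond", "Norfolk", "Reno"]

rng = random.Random(2012)  # fixed seed so the assignment is reproducible
rng.shuffle(markets)
treatment = sorted(markets[: len(markets) // 2])  # these markets get the ad buy
control = sorted(markets[len(markets) // 2 :])    # these are held out

print("Air the ad in:", treatment)
print("Hold out:     ", control)
# The effect is post-wave poll support in treatment markets minus control
# markets, with uncertainty assessed at the market level, because the unit
# of randomization is the market, not the voter.
```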

Plenty of instinct and art remain in the Obama campaign’s approach to message development. The early stages of the process resemble the traditional model, with media strategists relying on massive amounts of conventional polling from outside firms to track the electorate’s mood and campaign dynamics, and on focus groups to add impressionistic texture and a venue to audition specific images and language. The ads and direct-mail brochures that emerge from this process can then be assigned to small groups of voters under experimental conditions, pitted against one another in various combinations and across different audiences.

That full testing cycle can take around two weeks. In the case of mail, that includes the time it takes to design, print, and mail a piece—and a window for polling before and after to see what impact it had on opinions. Then analysts can model the attributes of those who were moved by the mail. Is an ad about the auto bailout more likely to persuade upscale or downscale voters? Did younger voters respond differently than older ones to information about particular provisions of the health-care bill? Are attacks on Romney’s Bain record more salient with those leaning toward Obama or those leaning toward Romney?
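Stripped to its skeleton, the analysis step of that cycle is a difference-in-differences comparison: how much more did opinion move among voters who got the mail than among those who didn't, within each subgroup? The sketch below uses invented pre- and post-wave poll numbers; it is a schematic of the logic, not the campaign's code.

```python
# Schematic of the analysis behind the questions above (assumed data):
# compare pre-to-post opinion shifts between voters who received a mail
# piece and those who did not, by subgroup.
from collections import defaultdict

# Records: (subgroup, received the mail?, pre-wave support, post-wave support)
responses = [
    ("under_40", True, 0.48, 0.55), ("under_40", False, 0.49, 0.50),
    ("over_40", True, 0.51, 0.52), ("over_40", False, 0.50, 0.50),
    # ... real data would have one row per polled respondent or precinct
]

shifts = defaultdict(lambda: {"treat": [], "control": []})
for group, treated, pre, post in responses:
    shifts[group]["treat" if treated else "control"].append(post - pre)

def mean(xs):
    return sum(xs) / len(xs)

for group, arms in shifts.items():
    # Difference-in-differences: treatment shift minus control shift.
    effect = mean(arms["treat"]) - mean(arms["control"])
    print(f"{group}: mail moved support {effect:+.1%} beyond the control trend")
```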

Before making strategic adjustments based on the experimental findings, however, analysts have to consider whether the differences they find among voters really reflect the workings of the campaign’s messages and not just statistical noise. “The key issue when dealing with subgroup analysis is it gets very easy to keep looking until you find something—what statisticians call ‘data-dredging,’ ” says Feller. “I could go through each variable: Do women respond differently than men? Do 85-year-old people respond differently than 75-year-olds? Do cat owners respond differently than dog owners?”
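Feller's point is easy to demonstrate: simulate dozens of subgroups in which the true effect is zero, run a naive significance test on each, and some will look "significant" by chance alone. One standard guard, shown at the end of this generic illustration (not anything the campaign ran), is the Bonferroni correction, which tightens the per-test threshold.

```python
# Data-dredging in miniature: with a true effect of zero everywhere,
# naive testing across many subgroups still produces false positives.
import random
import statistics

rng = random.Random(0)
ALPHA, N_SUBGROUPS, N_PER_ARM = 0.05, 40, 200

false_positives = 0
for _ in range(N_SUBGROUPS):
    treat = [rng.gauss(0, 1) for _ in range(N_PER_ARM)]    # no real effect
    control = [rng.gauss(0, 1) for _ in range(N_PER_ARM)]
    # Crude two-sample z statistic; |z| > 1.96 corresponds to p < 0.05.
    se = (statistics.variance(treat) / N_PER_ARM
          + statistics.variance(control) / N_PER_ARM) ** 0.5
    z = (statistics.mean(treat) - statistics.mean(control)) / se
    if abs(z) > 1.96:
        false_positives += 1

print(f"{false_positives} of {N_SUBGROUPS} null subgroups look 'significant' at 5%")
print(f"Bonferroni threshold per test: {ALPHA / N_SUBGROUPS:.4f} instead of {ALPHA}")
```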

There is still, then, room at Obama’s Chicago headquarters for old-fashioned political intuition. What looks like a spring of experimentation will soon give way to a summer of analysis and strategic adjustments. Statisticians will find patterns, and political hands—relying in part on findings from other, more traditional methods—will discern whether those patterns can be exploited, and perhaps test them again. By fall, the hypotheses will outnumber the hunches.