
How Obama’s Embrace of Empiricism Could Swing the 2012 Race


The new science of winning campaigns.
May 22 2012 12:37 PM

The Death of the Hunch

Campaigns used to guess which ads were most effective. Now they can prove it. How Obama’s embrace of empiricism could swing the 2012 race.


From those qualitative and quantitative sources, media strategists would develop specific messages. To test them, pollsters would present voters with an argument or piece of information (the deficit has increased under Obama, Romney put a dog on the roof of his car) and ask if it made them “more or less likely” to support the candidate. In some cases, pollsters would ask respondents how they planned to vote, then read them descriptions of the candidates and ask the vote-choice question again to see who moved.

The messages that were most persuasive in polls typically became the stuff of television spots, candidate speeches, online ads, direct-mail pieces, and robocalls. In the heat of a race, campaigns of any significant size would run tracking polls, which allow strategists to spot daily movement they could attribute to campaign activity. But the polls lacked the ability to account for cause and effect. Did the candidate’s numbers move because of her new TV ad about the economy or her new mail piece about abortion—or despite them both?

The Analyst Institute convinced many of the left’s leading institutions that randomized-control trials could be adapted to answer such questions empirically. In March 2008, after John McCain had become his party’s nominee and as Democrats still struggled to pick theirs, the AFL-CIO wanted to determine how to most effectively define the Republican in the eyes of its membership. Working with the Analyst Institute, the AFL’s political department developed three different direct-mail attacks on McCain. One highlighted the senator’s economic-policy agenda, and another (called “McBush”) portrayed him as a clone of the unpopular incumbent. A third presented a testimonial from an old white union electrician and Navy veteran who conceded a McCain strength at the outset. “War hero? Absolutely,” the veteran says. “Voice for working families? No way.”


The AFL assigned Ohio union members to one of the three programs, and after the mailings went out conducted polling interviews with around 1,000 people in each group. Recipients of the “policy” and “McBush” mailers seemed unmoved by the messages they contained: around 38 percent of each universe supported McCain, almost indistinguishable from his support within a control group that had received no contact at all from the AFL. But the “testimonial” left its mark on the Republican candidate: only 32 percent of its recipients said they supported McCain, a drop of 5.6 percentage points against the control. The AFL made the testimonial a central part of its mail program nationwide.
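The logic of the AFL experiment can be sketched in a few lines of code: compare the share of McCain supporters in a treatment group against the control, and check whether the gap is larger than chance would produce. The exact interview counts below are assumptions reconstructed from the article’s rounded percentages and its “around 1,000 people in each group” figure, not the AFL’s actual data.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sample z-test for a difference in proportions.

    x1, x2 -- count of McCain supporters in each group
    n1, n2 -- number of polling interviews in each group
    Returns the difference in proportions and the z statistic.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled support rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1 - p2, (p1 - p2) / se

# Illustrative counts: ~38% support in the control, ~32% among
# "testimonial" recipients, ~1,000 interviews per group (assumed).
diff, z = two_prop_ztest(380, 1000, 324, 1000)
print(f"drop: {diff:.1%}, z = {z:.2f}")
```

With these assumed sample sizes, a 5.6-point gap clears the conventional threshold for statistical significance (|z| > 1.96), which is the kind of evidence that let the AFL treat the testimonial as a proven winner rather than a hunch.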

But the AFL was measuring only the average impact of each message across the entire swath that received it. What if certain types of people were more likely to respond to specific messages than others? Elsewhere, political statisticians had succeeded in developing new methods of disaggregating the electorate so that campaigns could target individual voters instead of entire precincts and media markets or broad demographic categories. Many settled on the statistical models known collectively as microtargeting: algorithms weighing as many as 1,000 different personal variables to generate probabilities predicting whether individual citizens would vote, whom they would support, and their views on specific issues.
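The microtargeting models described above reduce, at their core, to a logistic function that converts a weighted sum of individual voter attributes into a probability. A minimal sketch, with invented attribute names and coefficients (a real campaign model might weigh as many as 1,000 variables, as the article notes):

```python
import math

def support_probability(voter, weights, intercept):
    """Logistic model: map a weighted sum of a voter's attributes to a
    probability of supporting the candidate. Attribute names, weights,
    and the intercept here are hypothetical, for illustration only."""
    score = intercept + sum(weights[k] * voter.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical coefficients for a handful of variables.
weights = {"is_union_member": 0.8, "age_over_65": -0.3, "voted_2006": 0.4}
voter = {"is_union_member": 1, "age_over_65": 0, "voted_2006": 1}
prob = support_probability(voter, weights, intercept=-0.5)
```

Scoring every voter in a database this way is what lets a campaign target individuals rather than whole precincts or media markets.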

But many attending the Analyst Institute’s monthly lunch sessions were bothered by the fact that those models were still built on the spine of traditional polling, which relied on voters to describe how open-minded they would be to new arguments. They wondered whether it would be possible to fuse the real-world empiricism of experiments with the granular profiles made possible by microtargeting. What if campaigns tried out their messages on voters, then used their databases to identify the distinctive characteristics of the people whose minds changed?


In the summer of 2010, the Democratic women’s group EMILY’s List was eager to help state treasurer Robin Carnahan in her run for an open Missouri Senate seat, but wasn’t sure what types of arguments to make on her behalf. The group wanted to communicate with rural independent women in the state, but didn’t know what it ought to say. Would it be more effective to present voters with hard-edged attacks on Carnahan’s opponent, Congressman Roy Blunt, or a more balanced account of the candidates’ contrasting positions?

EMILY’s List had been one of the groups involved in the Analyst Institute’s launch, and the two collaborated again on an experiment-informed program to refine its pro-Carnahan tactics. The design was straightforward: EMILY’s List would have its consultant prepare two different direct-mail flights of four pieces each. One would be comparative (“Here’s where Congressman Roy Blunt and Robin Carnahan stand on working families”) and the other purely negative, focusing on the Republican’s known vulnerabilities (“Blunt has proven he’s not on our side” and “Blunt doesn’t know the difference between lobbying and legislating”). A sample of rural independent women voters would be randomly selected and assigned to receive one of the two flights of mail.
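The randomization step in a design like this is mechanically simple: shuffle the sampled voter file and deal it out into the experimental groups. A sketch of that assignment, with a third "control" group receiving no mail (group names and sizes here are illustrative, not EMILY’s List’s actual plan):

```python
import random

def assign_groups(voter_ids, group_names, seed=2010):
    """Randomly split a voter file into equally sized experimental groups.

    Shuffles a copy of the list, then deals IDs round-robin into one
    bucket per group name. A fixed seed makes the split reproducible."""
    rng = random.Random(seed)
    shuffled = voter_ids[:]
    rng.shuffle(shuffled)
    return {name: shuffled[i::len(group_names)]
            for i, name in enumerate(group_names)}

# Hypothetical sample of 9,000 voters split three ways.
groups = assign_groups(list(range(9000)),
                       ["comparative_mail", "negative_mail", "control"])
```

Because assignment is random, any later difference in candidate support between the groups can be attributed to the mail itself, which is exactly the cause-and-effect question tracking polls could not answer.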

On Aug. 31, after all four pieces of mail had been delivered, the Analyst Institute commissioned polling interviews with 5,912 voters in the state. Among those who had received the negative mail, 38.3 percent supported Carnahan—one point ahead of those who received the comparative message and three points ahead of a control group that received no mail at all from EMILY’s List.