Two weeks ago, top Obama campaign advisers Jim Messina and David Axelrod announced a $25 million national television buy, a figure rightly acknowledged with a sense of wonder, given that there were still six months to go before Election Day. But anyone waiting for coast-to-coast shock-and-awe must be disappointed. The ads have rolled out at a desultory trickle: a nine-state buy for a 60-second overview of Obama’s first-term successes; a Spanish-language health-care ad running in Florida and another in English about higher-education costs appearing there and in Nevada; and a long ad about Bain Capital that reportedly cost less than $100,000 to place in markets across five states. In other words, the Obama team has broken nearly every piece of received wisdom that media consultants like to offer about the intensity and duration necessary for television ads to be successful in the modern era.
But scattered, unsustained messaging has become the unlikely hallmark of the well-funded Chicago campaign. The strategy was put into play even before Romney emerged as the Republican nominee. There was the late-November advertising run on satellite systems that the campaign called “tiny,” and then silence until a brief January broadcast buy across six states focusing on energy, ethics, and the Koch brothers. An isolated flight of brochures about health-care legislation hit mailboxes in March, timed to Supreme Court arguments on the subject. In voluminous (if not easily audited by outsiders) online ads and targeted email blasts, the campaign has addressed seemingly every topic or theme imaginable: taxes paid by oil companies, the “war on women,” and a variety of local issues of interest in battleground states.
If these forays seem random, it’s because at least some of them almost certainly are. To those familiar with the campaign’s operations, such irregular efforts at paid communication are indicators of an experimental revolution underway at Obama’s Chicago headquarters. They reflect a commitment to using randomized trials, the result of a flowering partnership between Obama’s team and the Analyst Institute, a secret society of Democratic researchers committed to the practice, according to several people with knowledge of the arrangement. (Through a spokeswoman, Analyst Institute officials declined to comment on the group’s work with Obama and referred all questions to the campaign’s press office, which did not respond to an inquiry on the subject.)
The Obama campaign’s “experiment-informed programs”—known as EIP in the lefty tactical circles where they’ve become the vogue in recent years—are designed to track the impact of campaign messages as voters process them in the real world, instead of relying solely on artificial environments like focus groups and surveys. The method combines the two most exciting developments in electioneering practice over the last decade: randomized, controlled experiments able to isolate cause and effect in political activity, and microtargeting statistical models that can calculate the probability a voter will hold a particular view based on hundreds of variables.
Obama’s campaign has already begun rolling out messages to small test audiences. Analysts then rely on an extensive, ongoing microtargeting operation to discern which slivers of the electorate are most responsive, and to which messages. This cycle of trial and error offers empirically minded electioneers an upgrade over the current regime of approaching voters based on hunches.
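The cycle described above can be sketched in code. The following is an illustrative Python sketch of one such test, not the campaign’s actual system: voters are randomly split between seeing a test message and a held-out control, and the observed response rate is then compared within each segment produced by a microtargeting model. All names and data are invented for illustration.

```python
import random
from collections import defaultdict

def run_message_test(voters, respond):
    """Estimate per-segment lift from one randomized message test.

    voters:  list of dicts, each with a 'segment' key supplied by a
             (hypothetical) microtargeting model.
    respond: callable(voter, treated) -> bool, standing in for whatever
             response the campaign can observe.
    """
    random.seed(42)  # fixed seed so the illustration is reproducible
    # For each segment, track [count, responses] in each experimental arm.
    results = defaultdict(lambda: {"treated": [0, 0], "control": [0, 0]})
    for voter in voters:
        treated = random.random() < 0.5  # random assignment to an arm
        arm = results[voter["segment"]]["treated" if treated else "control"]
        arm[0] += 1
        arm[1] += int(respond(voter, treated))
    # Estimated lift per segment: treated response rate minus control rate.
    lifts = {}
    for segment, arms in results.items():
        t_n, t_r = arms["treated"]
        c_n, c_r = arms["control"]
        if t_n and c_n:
            lifts[segment] = t_r / t_n - c_r / c_n
    return lifts
```

Repeating this across many trials is what builds up the pattern of responsiveness Feller describes below: after enough randomized tests, the per-segment lift estimates start to separate signal from wild guesses.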
“In the first experiment you probably have no idea,” says Avi Feller, a Harvard graduate student and former Obama White House aide who has written about political experiments. “But by the 20th randomized trial you can start to say ‘we’ve seen this group be more responsive.’ You can start to do better than just wild guesses.”
The Analyst Institute was formed in 2007 to organize an expanding research portfolio produced by liberal consultants and institutions that were adopting techniques from medicine and the social sciences to better run campaigns. Many of the group’s early experiments focused on voter turnout, often tracking the impact of motivational techniques that were informed by behavioral psychology. Experimenters would randomly assign voters to different get-out-the-vote treatments and measure after an election whether one group turned out at a higher rate than the other. This was relatively straightforward and inexpensive—whether someone votes can be tracked on publicly available electoral rolls—and required only a campaign or institution willing to hold out a control sample for tests. But even this was too demanding a burden for many political players: While institutions like the AFL-CIO and Rock the Vote signed up, candidates were typically unwilling to make such a commitment for research that wouldn’t yield insights until after the election.
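The arithmetic behind a turnout experiment like those above is simple enough to sketch. In this hedged illustration (the numbers and function are invented, not taken from any real study), voters are randomly assigned to a get-out-the-vote treatment or a held-out control, and after the election the public voter file shows how many in each group actually cast a ballot:

```python
import math

def turnout_effect(treated_n, treated_voted, control_n, control_voted):
    """Return (estimated lift, standard error) for the difference in
    turnout rates between a treatment group and a control group."""
    p_t = treated_voted / treated_n
    p_c = control_voted / control_n
    # Standard error of a difference in two independent proportions.
    se = math.sqrt(p_t * (1 - p_t) / treated_n + p_c * (1 - p_c) / control_n)
    return p_t - p_c, se

# Invented example: 10,000 treated voters, 5,300 voted;
# 10,000 controls, 5,000 voted.
lift, se = turnout_effect(10_000, 5_300, 10_000, 5_000)
# lift of 0.03: the treatment is associated with a 3-point turnout increase
```

Because random assignment is the only systematic difference between the groups, any lift well outside the standard error can be credited to the treatment itself, which is what made these tests so attractive relative to surveys.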
While turnout experiments were good for isolating whether an individual phone call, door knock, or piece of mail could mobilize citizens, they couldn’t track how voters chose between candidates. For that, campaigns continued to rely on many of the same techniques they had used for measuring public opinion for a half-century. They would look at the issues that self-described “undecided” voters said mattered most to them, or what those people saw as the strengths and weaknesses of each of the candidates. Researchers would typically gather small samples of those voters for focus groups, to get a feel for the language and images that they would respond to.