
The new science of winning campaigns.
Dec. 15, 2011, 12:58 PM

“Likely Voters” Lie

Why private campaign polls get such different results from public media polls.

Voters at a polling station. A new paper suggests that asking voters whether they plan to vote is no better than flipping a coin.

Photograph by Digital Vision/Thinkstock.

After the 2008 election, the Democratic polling firm Greenberg Quinlan Rosner collected the names of 12,979 people who had, over the course of the year, described themselves to call-center operators as “almost certain to vote” or “will probably vote,” and checked to see whether they actually voted. Masa Aida, a Greenberg analyst, matched respondents’ names to a voter file maintained by the database vendor Catalist, and found that citizens had an unimpressive record of predicting their own behavior: Eighty-seven percent of those who described themselves as “almost certain to vote” had done so that November, compared with 74 percent of those who said they “probably” would.


Sasha Issenberg is the author of The Victory Lab, about the new science of political campaigns.

This wasn’t much of a surprise. Political scientists have found for years that citizens, aware of the socially correct answer, routinely lie to claim they voted when they had not. But Aida, along with Harvard public policy professor Todd Rogers, did something that past researchers hadn’t. They also looked up the records of those who had said they “will not vote,” an answer that prompts the operator to politely end the call and dial someone else. Greenberg Quinlan had excluded those people from their surveys, but Aida and Rogers found they were lying too, and at a higher rate than those who identified themselves as certain voters. Despite claiming they would not cast a ballot, 55 percent had. More than half the people whom Greenberg Quinlan call-center operators kicked off the line should not have been.
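
The tally behind those numbers is simple once survey answers have been matched to a turnout file. Here is a minimal sketch in Python of that kind of calculation; the records and field names are invented stand-ins, since neither Greenberg Quinlan’s survey data nor Catalist’s file layout is public.

```python
from collections import defaultdict

# Hypothetical records: each pairs a respondent's stated intention with the
# turnout flag found for them in a matched voter file (True = voted in Nov. 2008).
respondents = [
    {"stated_intent": "almost certain to vote", "voted": True},
    {"stated_intent": "will probably vote", "voted": False},
    {"stated_intent": "will not vote", "voted": True},
    # ... one record per matched respondent
]

# Tally actual turnout within each self-description.
counts = defaultdict(lambda: {"voted": 0, "total": 0})
for r in respondents:
    bucket = counts[r["stated_intent"]]
    bucket["total"] += 1
    if r["voted"]:
        bucket["voted"] += 1

for intent, c in counts.items():
    rate = 100.0 * c["voted"] / c["total"]
    print(f"{intent}: {rate:.0f}% actually voted ({c['voted']} of {c['total']})")
```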

That “likely-voter screen,” the battery of questions (Gallup’s contains seven of them) asking people how likely they are to vote, is a staple of just about every survey you see these days. “Public polls”—the ones that now pop up with metronomic regularity under the banner of news organizations and colleges—typically rely on such screens to filter likely voters from the much broader pool of people they get on the line by randomly dialing numbers. In their unpublished paper, Aida and Rogers poked big holes in the screen, suggesting that in some cases it was no better than flipping a coin at determining who would actually vote.


This is one reason why the likely-voter screen is becoming an afterthought in the parallel world of private polls—the ones commissioned from partisan firms to guide strategy, and rarely seen outside campaign war rooms. In fact, a large methodological gap has now opened between the surveys candidates and their strategists see and the ones you do. Campaigns are, in essence, relying less on voters’ honesty and self-awareness to determine who is a likely voter. Instead, they’re using their own statistical work to presume who is likely to vote, and surveying based on those statistical guesses.
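
No pollster’s exact model is laid out here, but the general approach is to fit a turnout model on voter-file records where the outcome is already known, then score everyone else. A minimal sketch, using nothing more than per-election turnout flags and an off-the-shelf logistic regression on invented data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical voter-file features: one row per registered voter, with flags for
# whether they turned out in each of four past elections (1 = voted, 0 = did not).
X_history = np.array([
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
])
# Whether each of those voters turned out in a more recent election with known results.
y_voted = np.array([1, 0, 0, 1, 0, 1])

# Fit a simple turnout model on the known election...
model = LogisticRegression().fit(X_history, y_voted)

# ...then score the registration list and survey only the high-probability voters.
scores = model.predict_proba(X_history)[:, 1]
likely_voters = scores >= 0.5
print(scores.round(2), likely_voters)
```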

“I certainly have more faith in even a Democratic poll than a media poll,” says Jon McHenry, a Republican pollster whose firm, Ayres McHenry, is working for the Our Destiny super PAC backing Jon Huntsman. “I trust that the Democratic firm is doing it the same way our firm is doing it.”

The difference between the public and private approaches has particular consequences in Iowa, where the record of who has turned out for past caucuses is a closely held secret unavailable to most media or academic pollsters. But there’s another reason for the public-private gap, which news organizations rarely mention when they put out new numbers and frantically rearrange daily coverage around them: Many of their polling strategies are not developed with the foremost goal of assessing the horse race. Instead they want stable datasets that allow them to track public-opinion trends on noncampaign issues, such as abortion or presidential approval, even in election off-years. 

“The main thing is we want to know what everybody thinks, not just voters,” says Washington Post polling manager Peyton Craighill, who worked for Greenberg Quinlan before making the jump to media. “The political pollsters all have their own special sauce.”

So how do campaign pollsters operate differently? Instead of randomly dialing digits in a given area code and then imposing a screen to sift out nonvoters—the public polling strategy—campaign pollsters usually begin with a list of registered voters available from state election authorities. Those lists include individual voting histories, which campaigns have long used to identify individual targets. It makes sense in a primary to send a canvasser first to the doors of those who regularly vote in primaries, or to give a phone-bank volunteer a list of people who have voted in two of the last four elections rather than registered voters who haven’t turned out during that time.
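
That kind of list-building amounts to a simple filter over voter-file records. A minimal sketch, with invented field names, of pulling a call list of people who voted in at least two of the last four elections:

```python
# Hypothetical registered-voter records with per-election turnout flags;
# the field names are made up for illustration.
voter_file = [
    {"name": "A. Smith", "vote_history": {"2010_general": True,  "2008_general": True,
                                          "2008_primary": False, "2006_general": False}},
    {"name": "B. Jones", "vote_history": {"2010_general": False, "2008_general": True,
                                          "2008_primary": False, "2006_general": False}},
]

def voted_in_at_least(record, k=2):
    """True if the voter turned out in at least k of the tracked elections."""
    return sum(record["vote_history"].values()) >= k

# A phone-bank list limited to people who voted in two of the last four elections.
call_list = [v["name"] for v in voter_file if voted_in_at_least(v, k=2)]
print(call_list)
```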