
The new science of winning campaigns.
Dec. 15, 2011, 12:58 PM

“Likely Voters” Lie

Why private campaign polls get such different results from public media polls.


Campaign pollsters usually purchase these lists from vendors who have compiled the local voter lists into national databases, then merged voter names with telephone numbers. (Catalist, the dominant Democratic data vendor, claims it has phone numbers for 88 percent of active registered voters.) Campaigns then feed what they learn from doorstep visits and phone-bank callers back into the databases, where it supplements individual demographic information. Over the last decade, campaigns have become deft at using microtargeting algorithms to analyze all that information and produce a unique score predicting how likely someone is to vote, a determination more nuanced and dynamic than simply counting how many of the last four elections that person voted in.
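For a concrete picture of what such a score looks like, here is a minimal sketch in Python of a turnout-likelihood model built from voter-file history plus canvass feedback. The field names, weights, and logistic form are illustrative assumptions, not Catalist's or any campaign's actual model.

```python
import math

def turnout_score(voter):
    """Combine voter-file history and canvass feedback into a 0-to-1 score.
    All field names and weights below are hypothetical, for illustration only."""
    weights = {
        "voted_2008_general": 1.6,   # past turnout is the strongest signal
        "voted_2010_general": 1.9,
        "voted_2010_primary": 1.2,
        "contacted_at_door":  0.4,   # field data fed back into the database
        "recently_moved":    -0.7,   # registration churn lowers the score
    }
    z = -1.5 + sum(w for name, w in weights.items() if voter.get(name))
    return 1 / (1 + math.exp(-z))    # squash through a logistic curve

# A habitual voter scores near 1; a new, never-contacted registrant scores low.
print(round(turnout_score({"voted_2008_general": 1, "voted_2010_general": 1,
                           "voted_2010_primary": 1}), 2))   # ~0.96
print(round(turnout_score({"recently_moved": 1}), 2))        # ~0.10
```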

Working from such lists, campaign pollsters are able to define the universe of potential interviewees, ruling out any chance of randomly dialing noncitizens, the unregistered, or wrong numbers. But it’s about more than cutting call-center costs. Campaigns already rely on voter files, and the scores that microtargeting algorithms pull from them, to set their vote goals. Now, by pulling their polling samples from the same pool of data they use to count votes, strategists have synchronized their assumptions about who is likely to turn out across different parts of the campaign.

The private poll filters are far more rigorous than anything public pollsters do. “We’re not experts in turnout,” says Des Moines-based pollster Ann Selzer, whose Iowa Poll for the Des Moines Register is widely regarded as the most reliable of the state’s surveys. For general elections, Selzer randomly dials Iowa numbers and asks to speak to a registered voter. During caucus season, she calls from a list of voters made available by the secretary of state, and discriminates by party. Now, she’s randomly dialing those registered as Republicans or independents to “get rid of any Democrats right off the bat.” Once she reaches a particular voter, Selzer relies on a screen to filter out unlikely voters.


Given the different methods, public polls are likely more volatile than the ones campaigns use to guide strategy and tactics, since their likely-voter screens rest only on a momentary declaration of interest. After all, an infrequent voter who gets ginned up by a Rick Perry ad right before the pollster calls will get past a likely-voter screen, while a campaign surveying past caucus-goers would never ring her in the first place. In a race like this one, where Republican primary voters have sent mixed messages about their enthusiasm for the choice they face, internal polls will sample a more stable electorate. Those candidates, like Ron Paul, who aim to enlist new caucus-goers probably look stronger in media polls than in those commissioned by campaigns—perhaps one reason his rivals might not take him so seriously.
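The contrast comes down to two filters. The sketch below, with made-up field names and an arbitrary score cutoff chosen for illustration, shows why that ginned-up infrequent voter counts in a media poll but never gets a call from a campaign.

```python
def passes_media_screen(respondent):
    # Public-poll style: reach a random adult, keep anyone registered
    # who says in the moment that they are likely to vote.
    return respondent["registered"] and respondent["says_will_vote"]

def in_campaign_sample(voter, cutoff=0.6):
    # Campaign style: dial only people whose voter-file score already
    # marks them as likely caucus-goers.
    return voter["turnout_score"] >= cutoff

newcomer = {"registered": True, "says_will_vote": True, "turnout_score": 0.15}
print(passes_media_screen(newcomer))   # True: counted by the media poll
print(in_campaign_sample(newcomer))    # False: never rung by the campaign
```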

Indeed, campaigns have stumbled in part because their internal polls have offered an insular view of the electorate. Selzer noted that in 2008 Hillary Clinton’s campaign predicted Iowa turnout based on past patterns, using costly rosters of prior caucus attendees controlled by the state party. Many of those who caucused for Barack Obama would not even have appeared in voter databases or caucus lists because they were not previously registered. “When anyone tells you they have a turnout model, you should be suspicious,” Selzer says. “The best predictor of the future is past behavior—until there’s change.” (Even Obama’s campaign dramatically underestimated turnout in 2008.)

In fact, when they checked Greenberg Quinlan’s self-identified likely voters against their eventual 2008 turnout, Aida and Rogers found that people most accurately described their future behavior when the prediction matched what they had done in the past. Among respondents who had voted in both of the previous two elections, 93 percent of those who said they would vote did so; only 24 percent of those who said they would not vote actually failed to vote. (A similar pattern held among those who had not voted in the past two elections.)

One possible reason that regular voters might consistently declare their lack of interest in voting, Aida and Rogers speculate, is “to convey disaffection toward the political process rather than a sincere lack of intention to vote.” The question of whether it’s better to include such people in a poll or just leave them out altogether remains open. “If I can’t trust them to be honest about whether they’re going to vote or not,” asks McHenry, “how can I trust them on all the other questions I want to ask them?”