Dialogues

Are Campaign Polls Sleazy?

Charles Cook is editor of the Cook Political Report and a political analyst for the National Journal and CNN. William Saletan is a Slate senior writer. Saletan penned this “Frame Game” arguing that “every campaign poll that asks about an opponent’s flaws is a push poll,” and that “real polls” can be just as invidious. In response, Cook posted this message in “The Fray,” Slate’s reader feedback forum. Slate has asked them to continue their discussion about the merits and perils of campaign polling in this “Dialogue.”

Dear Charlie,

I agree that we should banish the term “push polling” from political reporting, since these calls (I like your term “negative phone banks”) have nothing to do with polling. And I agree that the story of the 14-year-old in South Carolina is fishy. Any phone-bank staffer who spends his time badmouthing John McCain to a 14-year-old is too dumb to pose a threat to democracy. And I agree that opposition research about personal matters is more pernicious than anything in the polling industry.

But I don’t see you addressing the point of my column: that campaign pollsters are paid not “to figure out what people are thinking” but “to figure out how to make people think what the campaign wants them to think”—and that because negative campaigning is effective, “push” questions in “legitimate” polls are designed to test pre-spun attacks.

It’s true that your pollster often tests your opponent’s pre-spun attacks instead of your own. But they’re still pre-spun attacks. Your pollster isn’t actually delivering your opponent’s attacks, any more than he’s delivering your own. He’s just testing them. So I’m not blaming campaign pollsters for pre-spun attacks. What I’m criticizing is the pretense that they’re doing objective science—that they’re in the business, as Stu Rothenberg puts it, of “measuring” public opinion rather than “altering” it. What they’re actually measuring is how each pre-spun attack—yours or your opponent’s—alters public opinion.

Your own experience in a campaign illustrates the point. You recall, “We beat up our candidate badly, then built up our opponent, then asked the trial-heat question one final time. … We then were able to look at the political and demographic attributes of those that defected from us to either undecided or to our opponent, so we could see whom we should focus the rest of the campaign on.” Your push questions tested pre-spun attacks that were designed (hypothetically) to “beat up” your candidate. In a negative campaign environment, this is usually followed or accompanied by testing the pre-spun attacks one’s own candidate might “focus” on in order to win back the voters who prove susceptible to the opponent’s attacks. These questions aren’t designed to find out what the voters want the campaign to be about. They’re designed to find out which voters your opponent can drive away from you with various attacks, and how you can either win back those voters or at least drive them away from your opponent.

I agree with you that from the standpoint of precision and accuracy, campaign polls “conducted by respected professionals” are often “very high-quality survey instruments,” whereas many media polls suffer from “interviewer bias.” And I agree that pollsters “of the highest integrity” don’t put their names on memos or press releases “tainted” by misrepresentation or selective use of their data. But why are these the only standards to which campaign pollsters should be held? Why should they be “respected” just because they don’t fudge their numbers? And why should campaign polls be deemed “high-quality” rather than “tainted” just because they’re conducted by well-trained interviewers?

I’m arguing that campaign polls are tainted by a different, deeper kind of bias: the bias of loaded questions. The distortion of McCain’s tobacco and campaign reform legislation in Bush’s South Carolina poll is an excellent example. I’m not saying that Bush’s pollsters hired untrained college students to ask those questions. I’m not saying that they lied about or selectively reported the responses to those questions. What I’m saying is that 1) these standards completely overlook the substantive bias of the questions; 2) this substantive bias reflects the fact that campaign polling is at least as much about figuring out how to manipulate voters as about “learning” from them; and 3) the failure of political professionals and reporters to notice, understand, acknowledge, or care about this fact is part of the problem.

In short, I agree with you that in the business of cleaning up campaigns, there are other problems worth tackling before this one. The reason I’m drawing attention to this one is that unlike the others, it’s so deeply ingrained in the profession that nobody even recognizes it.

Will