Science, technology, and life.
Dec. 9, 2004, 5:46 PM

Let's Go to the Audiotape

Who nailed the election results? Automated pollsters.

A week before the election, Slate published a consumer's guide in which we disclosed each pollster's methods and how they might affect that survey's numbers relative to the election returns. Now the returns are in, for pollsters as well as the public. Which polls nailed the results, which blew it, and why? Our review suggests three factors were crucial.

1. Party identification. Democrats have consistently outnumbered Republicans in surveys since FDR. Several of the pollsters we examined in October assumed that the turnout in 2004 would be close to the average of the last three presidential elections, which was more Democratic than Republican by several percentage points.

The pollsters who ran the Battleground survey disagreed. They assumed that the electorate would be 42.3 percent Democratic and 42.3 percent Republican. That split, negotiated between the Republican and Democratic companies that conducted the poll, looked to us like an unscientific political compromise. The Pew survey looked even crazier for projecting that Republicans would outnumber Democrats, 37 percent to 35 percent.

But guess what? On Election Day, exit polls showed Republicans matching Democrats 37 percent to 37 percent. Pollsters who assumed that historical patterns would temper the Republican intensity in this year's surveys got it wrong. Those who bet on the data instead of the historical patterns got it right.
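The effect of a party-ID assumption on a poll's headline numbers can be sketched in a few lines. This is an illustration with hypothetical within-party support figures, not any pollster's actual data; only the turnout-mix percentages echo those cited above.

```python
# Illustrative sketch (hypothetical support numbers): how an assumed party-ID
# split shifts a poll's headline figure. Support for Bush within each party
# group is held fixed; only the assumed D/R/independent turnout mix changes.

# Hypothetical share of each group voting for Bush
bush_support = {"D": 0.11, "R": 0.93, "I": 0.48}

def weighted_share(support, mix):
    """Combine within-group support with an assumed turnout mix."""
    return sum(support[g] * mix[g] for g in mix)

# A turnout model with a Democratic edge, like the historical average...
dem_edge = {"D": 0.39, "R": 0.35, "I": 0.26}
# ...versus one assuming parity, as Battleground did and exit polls showed.
parity = {"D": 0.37, "R": 0.37, "I": 0.26}

print(round(weighted_share(bush_support, dem_edge) * 100, 1))  # Bush share, Dem-edge model
print(round(weighted_share(bush_support, parity) * 100, 1))    # Bush share, parity model
```

Even with identical raw interviews, the parity assumption moves Bush's number up by more than a point and a half, which is why the choice of turnout model mattered so much.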

2. Undecided voters. Historically, last-minute undecideds have broken decisively for the presidential challenger. Based on this pattern, Gallup allocated 90 percent of its undecideds to Kerry, lifting him into a tie with Bush at 49 percent. TIPP made a similar bet on the 4.4 percent of voters in its final survey who said they were still "not sure" whom to vote for. TIPP allocated 61 percent of this group to Kerry and only 34 percent to Bush.

Slate's Election Scorecard page went further. Alongside our projection of each state based on its final polls, which yielded a 269-269 Electoral College tie (we got Florida and Wisconsin wrong), we issued a separate "vote-share" projection that allocated undecideds as follows: 1) enough to third-party candidates to match the showing by those candidates in the same state in 2000; and 2) the remainder to Kerry. This model yielded a Kerry victory with somewhere between 276 and 291 electoral votes.

Oops! According to exit polls, Bush got 46 percent of those who made up their minds in the last week of the campaign and 44 percent of those who made up their minds in the final three days. TIPP got it wrong, Gallup got it very wrong, and Slate's vote-share formula got it very, very wrong. Who got it right? Pew again. In its final report, Pew predicted that undecideds "may break only slightly in Kerry's favor." With 6 percent of voters undecided in the week before the election, Pew added 3 percent to Bush's total and 3 percent to Kerry's. 
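The allocation rules described above reduce to a single parameter: what fraction of the undecideds you hand to the challenger. A minimal sketch, applied to a hypothetical final poll of Bush 47, Kerry 47, undecided 6 percent:

```python
# Sketch of the undecided-allocation rules discussed above, applied to a
# hypothetical final poll (Bush 47, Kerry 47, 6 percent undecided).
bush, kerry, undecided = 47.0, 47.0, 6.0

def allocate(bush, kerry, undecided, to_kerry):
    """Give fraction `to_kerry` of the undecided share to Kerry, rest to Bush."""
    return bush + undecided * (1 - to_kerry), kerry + undecided * to_kerry

gallup_style = allocate(bush, kerry, undecided, 0.90)  # 90% to the challenger
tipp_style = allocate(bush, kerry, undecided, 0.64)    # roughly TIPP's 61/34 split
pew_style = allocate(bush, kerry, undecided, 0.50)     # Pew's even split

print(gallup_style)  # Kerry gains 5.4 points, Bush 0.6
print(pew_style)     # each gains 3 points
```

Under the Gallup-style rule the hypothetical tie becomes a 5-point Kerry lead; under Pew's even split it stays a tie, which is why Pew's final numbers tracked the returns while the challenger-break models didn't.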

3. Automation. Before the election, we publicly doubted and privately derided Rasmussen and SurveyUSA, which used recorded voices to read their poll questions. We rolled our eyes when they touted the virtues of uniformity and when they complained that live interviewers "may not know how to read or speak the English language," could "chew gum," or might "just make up the answers to questions." It sounded to us like a rationalization for cutting costs.

Look who's laughing now. Rasmussen and SurveyUSA beat most of their human competitors in the battleground states, often by large margins.

Let's compare the automated surveys to the three biggest pollsters who used live interviewers in multiple battleground states. We'll grade each pollster on two measures: 1) how far its final numbers for Bush and Kerry varied from the official returns, and 2) how far the gap between its final numbers for Bush and Kerry varied from the gap shown in the official returns. For example, suppose a pollster had Bush winning a state 48 to 46 percent, but Bush actually won the state 50 to 47. By the first measure—let's call it the sum—the poll missed Bush's number by 2 and Kerry's by 1, for a total error of 3. By the second measure—let's call it the spread—the poll's 2-point lead for Bush missed the actual 3-point lead for Bush by a total error of 1.
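The two grading measures can be stated precisely as functions, checked here against the worked example in the paragraph above (a poll of Bush 48, Kerry 46 in a state Bush actually won 50 to 47):

```python
# The two error measures defined above: "sum" is the total absolute miss on
# each candidate's share; "spread" is the miss on the Bush-minus-Kerry margin.

def sum_error(poll, actual):
    """Total absolute error on each candidate's number."""
    return abs(poll[0] - actual[0]) + abs(poll[1] - actual[1])

def spread_error(poll, actual):
    """Absolute error on the margin between the candidates."""
    return abs((poll[0] - poll[1]) - (actual[0] - actual[1]))

poll = (48, 46)    # Bush, Kerry in the final poll
actual = (50, 47)  # official returns

print(sum_error(poll, actual))     # → 3 (missed Bush by 2, Kerry by 1)
print(spread_error(poll, actual))  # → 1 (2-point lead vs. actual 3-point lead)
```

Note that the two measures can disagree: a poll that understates both candidates equally scores badly on the sum but perfectly on the spread.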