
The Phony Science of Predicting Elections

Who’ll win in November? The experts’ guess is as good as yours. 

According to several esteemed political scientists, Al Gore already has the 2000 election in the bag. Friday’s Washington Post front page reported that these experts, “who have honed and polished the art of election forecasting … have a startlingly good record predicting election results months in advance.” On Meet the Press, Tim Russert reverently quoted one professor who told the Post the election is “not even going to be close.” This quadrennial number-crunching ritual doesn’t stand up to scrutiny. The principal art these forecasters have honed is the art of spin. And the only startlingly good record they’ve compiled is a record of dazzling the media. Here’s how they do it.

1. Predict the past. Since most election models are recent inventions, most of the elections against which their accuracy is measured predate them. One political scientist, James Campbell of the State University of New York at Buffalo, points out that these retrocasts are “not a very stringent test, since the expected votes used to evaluate the equation are used in estimating the equation.” In other words, forecasters fit their models to the quirks of a dozen or so elections and then congratulate themselves on how closely their models “predict” the outcomes they used to create the models in the first place. Through the magic of retrocasting, numerous unlikely variables can be correlated with election results.
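
To see how forgiving a test retrocasting is, here is a minimal sketch in Python, using invented numbers rather than any forecaster’s actual equation: a regression fed a dozen outcomes and predictors that are pure noise will still “retrocast” those outcomes better and better as variables are added, and perfectly once there are enough of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Twelve hypothetical "elections": the incumbent party's vote share here is
# pure noise, so no variable should genuinely predict it.
n_elections = 12
vote_share = 50 + rng.normal(scale=3, size=n_elections)

# Eleven "fundamentals" that are just random numbers, unrelated to the outcomes.
fundamentals = rng.normal(size=(n_elections, 11))

for n_predictors in (2, 5, 11):
    # Estimate the equation on all twelve elections, then "retrocast" those
    # same twelve elections with the fitted coefficients.
    X = np.column_stack([np.ones(n_elections), fundamentals[:, :n_predictors]])
    coefs, *_ = np.linalg.lstsq(X, vote_share, rcond=None)
    rms_error = np.sqrt(np.mean((X @ coefs - vote_share) ** 2))
    print(f"{n_predictors:2d} meaningless predictors -> "
          f"in-sample error of {rms_error:.2f} points")

# The in-sample fit can only tighten as variables are added, and with eleven it
# is essentially perfect: not because anything has been explained, but because
# the same elections are used both to estimate the equation and to "test" it.
```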

2. Predict the obvious. Even knowing the answers ahead of time, many models have an uneven record. University of Wisconsin-Milwaukee’s Thomas Holbrook successfully retrocasts the outcome of only 10 of 12 elections, choosing the wrong winner in 1960 and 1968. He says that “only two elections are called incorrectly.” But how hard are some of these to predict? The last 12 elections include two unsurprising landslides (1952 and 1980) and the easy re-election of five popular incumbents (in 1956, 1964, 1972, 1984, and 1996). Subtract those seven easy calls from his 10 correct retrocasts, and Holbrook picks the winner in only three of the five genuinely close elections, little better than a coin toss.

3. Duck the hard calls. The biggest upset of the century was Harry Truman’s re-election in 1948. As Campbell notes, 1948 is the only postwar election in which the leader in late-September polls did not win the election. Since most of the models use polling data as an independent variable, most forecasters begin their analyses with 1952.

4. Piggyback on polls. Nearly every model includes some measure of public opinion about the candidates or the incumbent administration. This boils down to predicting how people will vote by asking them ahead of time how they will vote. Through careful analysis, Campbell discovered, astoundingly, that September and October polls are more accurate than June and July polls, so his model incorporates trial-heat data from September Gallup Polls.

5. Cover your bets. Forecasts are generally based on economic and public opinion data that change throughout the year. So each time our experts are asked to make a prediction, they plug in different numbers and get a different result. The Post seems particularly impressed that the University of Houston’s Christopher Wlezien and the University of Iowa’s Michael Lewis-Beck (the professor who said this year’s contest is “not even going to be close”) separately predicted the outcome of the 1996 presidential election within fractions of a percentage point, “closer to the actual result than the national exit poll.” In the October 1996 edition of American Politics Quarterly, Lewis-Beck predicted Clinton would win 54.8 percent of the two-party vote, and Wlezien predicted 54.5 percent. When the votes were counted, Clinton’s share was 54.7 percent.

How did Wlezien and Lewis-Beck do it? They issued a series of predictions covering a five-point range. The Wlezien forecast touted by the Post used June 1996 data. But in the same journal, Wlezien recalculated with July data, projecting a 56 percent vote share for Clinton. In the fall 1996 Brookings Review, Wlezien pegged Clinton’s share at 55.6 percent, while Lewis-Beck pegged it at 53.3 percent. In May 1996, Wlezien predicted Clinton would get 53 percent, and Lewis-Beck put the number at 50.9 percent—four points off target. The Post overlooks the erroneous May 1996 predictions even though the Post itself published them. If at first you don’t succeed, keep guessing, because nobody remembers when you get it wrong.

6. Get lucky in your choice of data. Models can generate alternate projections by using data from different sources as well as different months. Wlezien’s model incorporates two independent variables: projected income growth and the incumbent’s “job approval” rating. For his July 1996 prediction, Wlezien chose a Gallup Poll that found 57 percent of Americans approved of Clinton’s job performance. This was the highest Clinton job approval number in any published poll that month. A CBS poll conducted within days of the Gallup Poll found only a 48 percent job approval rating. Factor in the polls’ margins of error, and you’ve got a range from 44 percent to 60 percent for this variable alone.
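
The spread is just sampling arithmetic. As a rough sketch (assuming, purely for illustration, that each poll interviewed about 1,000 adults; the article doesn’t report the actual sample sizes), the standard formula for a poll’s margin of error puts roughly three points of slack on each reading, which stretches the plausible job-approval figure from the mid-40s to about 60 percent:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes: roughly 1,000 respondents is typical for national
# polls, but the article does not report the actual figures.
gallup_approval, gallup_n = 0.57, 1000
cbs_approval, cbs_n = 0.48, 1000

low = cbs_approval - margin_of_error(cbs_approval, cbs_n)
high = gallup_approval + margin_of_error(gallup_approval, gallup_n)

print(f"Plausible range for the job-approval input: "
      f"{low * 100:.0f} to {high * 100:.0f} percent")
# With these assumptions, about 45 to 60 percent, close to the article's 44-60 span.
```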

7. Shrug off your errors. Lewis-Beck is clearly proud that one of his 1996 predictions was almost dead-on. Yet the model he used in Forecasting Elections predicted a Bush victory in 1992. Yale economist Ray Fair picked the wrong winner in 1996 and 1992, even though he’s been refining his model since at least 1976 (he got that one wrong, too). Reading about forecasters’ track records is like reading Money magazine’s stock and mutual fund picks. They remind you of their successes but seldom mention their failures.

8. Tweak the numbers. Behind the scenes, forecasters spend the four years between elections revising their equations. In some cases, they “respecify” their models by finding new independent variables to work with. In other cases, they simply “re-estimate” their equations, changing the weight they attach to each variable. All they’re doing is finding a formula that fits the curve of a few data points. If you’re allowed to adjust the shape of your curve each time you get a new data point, why should anyone think your formula has any predictive or explanatory value?
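
What re-estimation looks like can be sketched with a toy example (invented data again, not any published model): fit a curve to the elections on hand, issue a “forecast,” then refit once the new result is in and watch the weights move.

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight hypothetical (economy, vote share) pairs with a weak relationship plus
# noise: stand-ins for the dozen or so elections a forecaster works with.
economy = np.linspace(-2, 2, 8)
vote = 50 + 1.5 * economy + rng.normal(scale=2.5, size=economy.size)

def estimate(x, y, degree=3):
    """'Estimate the equation': fit a cubic to whatever data exist so far."""
    return np.polyfit(x, y, degree)

# Estimate on the first seven elections, then forecast the eighth.
old_coefs = estimate(economy[:7], vote[:7])
forecast = np.polyval(old_coefs, economy[7])
print(f"Forecast for the new election: {forecast:.1f}; actual: {vote[7]:.1f}")

# After the election, "re-estimate" with the new point included. The curve is
# reshaped to accommodate the result it was supposed to predict, and the
# weight attached to each term shifts.
new_coefs = estimate(economy, vote)
print("Coefficients before re-estimation:", np.round(old_coefs, 2))
print("Coefficients after re-estimation: ", np.round(new_coefs, 2))
```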

9. Add loopholes. Political scientists claim that econometric models can explain elections because voting follows the same scientific laws year after year. Yet a prior Holbrook model adds a special variable for the elections of 1964 and 1972 to account for the “extremist” ideologies of Barry Goldwater and George McGovern. Fair (who, unlike most, tries to explain elections all the way back to 1916) adds a special variable for the three elections he believes were strongly influenced by war: 1920, 1944, and 1948. But when an election-year war doesn’t fit the equation, as in 1968, Fair leaves that variable out.

10. Blame the lack of data. Holbrook told the Post that the 13 elections he analyzes are too small a sample, saying that with 30 cases he’d be much more confident in his model. This assumes that subsequent elections would clarify rather than complicate the range of data to be explained and the array of factors that might explain them. Analyzing elections 120 years apart using the same model is like trying to figure out whether Babe Ruth was a better hitter than Mark McGwire. They didn’t face the same pitchers, they played in different stadiums, and the balls are manufactured differently today than in Ruth’s day. Similarly, the transition from an industrial to a service economy, the change from one-earner to two-earner households, and the rise of the investor class make it a stretch to compare attitudes about the economy across generations.

At bottom, the models rest on three flaws. First, they assume what they’re supposed to prove. They exclude factors such as the strengths of each candidate and each campaign, simply because political scientists don’t know how to measure them. Campbell, for instance, decides not to incorporate the candidates’ positions on issues in his model, since this factor is too “subjective” and “extremely cumbersome to calculate.” In their efforts to provide “explanations” and “an understanding of what actually causes the vote on Election Day,” the forecasters delude themselves: They can’t predict or explain elections, because their models don’t comprehend any aspect of human behavior that can’t be quantified.

Second, the models boil down to truisms. They reduce elections to two independent variables: one measure of the health of the economy and one measure of incumbent or candidate popularity. The values and coefficients they attach to these variables don’t hold steady over time, but the principles do: People are inclined to vote with their pocketbooks, and popular candidates tend to get elected. Imagine that.

Third, the models separate objective conditions from the subjective advocates who present them to the electorate. As Russert put it to James Carville and Mary Matalin, econometric forecasts imply that what’s going on in the campaign now is “meaningless,” because “It’s the economy, stupid.” But that phrase, coined by Carville in 1992, made the opposite point. He wasn’t forecasting the election’s outcome. He was reminding the campaign staff that the economy was a winning message. The economy matters in part because candidates and campaigns make it matter, a subtlety lost on the number-crunchers.