The Phony Science of Predicting Elections

May 31, 2000, 3:00 AM

Who'll win in November? The experts' guess is as good as yours. 

According to several esteemed political scientists, Al Gore already has the 2000 election in the bag. Friday's Washington Post front page reported that these experts, "who have honed and polished the art of election forecasting … have a startlingly good record predicting election results months in advance." On Meet the Press, Tim Russert reverently quoted one professor who told the Post the election is "not even going to be close." This quadrennial number-crunching ritual doesn't stand up to scrutiny. The principal art these forecasters have honed is the art of spin. And the only startlingly good record they've compiled is a record of dazzling the media. Here's how they do it.

1. Predict the past. Since most election models are recent inventions, most of the elections against which their accuracy is measured predate them. One political scientist, James Campbell of the State University of New York at Buffalo, points out that these retrocasts are "not a very stringent test, since the expected votes used to evaluate the equation are used in estimating the equation." In other words, forecasters fit their models to the quirks of a dozen or so elections and then congratulate themselves on how closely their models "predict" the outcomes they used to create the models in the first place. Through the magic of retrocasting, numerous unlikely variables can be correlated with election results. For a comic illustration, click.
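
To see just how flattering a retrocast can be, here's a minimal sketch in Python, using synthetic data and junk predictors rather than any forecaster's actual variables. An equation fit to a dozen elections "explains" a good share of noise it was fit to; an honest leave-one-out test shows it predicts nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dozen past elections whose outcomes are pure noise around 50 percent,
# "explained" by eight predictors that are also pure noise.
n_elections, n_predictors = 12, 8
X = rng.normal(size=(n_elections, n_predictors))  # junk "fundamentals"
y = 50 + 5 * rng.normal(size=n_elections)         # two-party vote share

def fit_and_predict(X_train, y_train, X_test):
    """Ordinary least squares with an intercept term."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_test)), X_test]) @ coef

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

# The retrocast: grade the equation on the elections used to estimate it.
in_sample = r_squared(y, fit_and_predict(X, y, X))

# The honest test: predict each election from a model fit to the other 11.
held_out = np.array([
    fit_and_predict(np.delete(X, i, axis=0), np.delete(y, i), X[i:i + 1])[0]
    for i in range(n_elections)
])
out_of_sample = r_squared(y, held_out)

print(f"retrocast (in-sample) R^2: {in_sample:.2f}")      # typically high
print(f"leave-one-out R^2:         {out_of_sample:.2f}")  # typically below zero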

2. Predict the obvious. Even knowing the answers ahead of time, many models have an uneven record. University of Wisconsin-Milwaukee's Thomas Holbrook successfully retrocasts the outcome of only 10 of 12 elections, choosing the wrong winner in 1960 and 1968. He says that "only two elections are called incorrectly." But how hard are some of these to predict? The last 12 elections include two unsurprising landslides (1952 and 1980) and the easy re-election of five popular incumbents (in 1956, 1964, 1972, 1984, and 1996). This means Holbrook predicts the winner in only three out of five close elections—little better than a coin toss.
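
The arithmetic behind that claim, spelled out (the "easy" and "close" categories here are the article's framing, not Holbrook's):

```python
elections = 12                 # 1952 through 1996
wrong = 2                      # 1960 and 1968
easy = 2 + 5                   # two landslides plus five incumbent re-elections
close = elections - easy       # five genuinely hard calls

correct_close = (elections - wrong) - easy   # 10 right overall, 7 were gimmes
print(correct_close, "of", close)            # 3 of 5, i.e. 60 percent
# A coin toss gets 50 percent of close races right on average.
```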

3. Duck the hard calls. The biggest upset of the century was Harry Truman's re-election in 1948. As Campbell notes, 1948 is the only postwar election in which the leader in late-September polls did not win the election. Since most of the models use polling data as an independent variable, most forecasters begin their analyses with 1952.

4. Piggyback on polls. Nearly every model includes some measure of public opinion about the candidates or the incumbent administration. This boils down to predicting how people will vote by asking them ahead of time how they will vote. Through careful analysis, Campbell discovered, astoundingly, that September and October polls are more accurate than June and July polls, so his model incorporates trial-heat data from September Gallup Polls.
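
A toy version of the piggybacking, with invented poll and vote numbers rather than Campbell's actual data: regress past vote shares on September trial-heat shares, and the resulting "forecast" largely hands the poll number back.

```python
import numpy as np

# Invented (poll share, vote share) pairs for past elections --
# illustrative only, not Campbell's data or coefficients.
trial_heat = np.array([52.0, 47.5, 55.0, 49.0, 58.0, 46.0])
vote_share = np.array([51.5, 48.0, 54.0, 49.5, 57.0, 47.0])

# One-variable least squares: vote = a + b * poll.
b, a = np.polyfit(trial_heat, vote_share, 1)
print(f"vote_share ≈ {a:.1f} + {b:.2f} * september_poll")

# The "forecast" for a new election is little more than the poll itself.
print(f"poll says 53 -> model says {a + b * 53:.1f}")
```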

5. Cover your bets. Forecasts are generally based on economic and public opinion data that change throughout the year. So each time our experts are asked to make a prediction, they plug in different numbers and get a different result. The Post seems particularly impressed that the University of Houston's Christopher Wlezien and the University of Iowa's Michael Lewis-Beck (the professor who said this year's contest is "not even going to be close") separately predicted the outcome of the 1996 presidential election within fractions of a percentage point, "closer to the actual result than the national exit poll." In the October 1996 edition of American Politics Quarterly, Lewis-Beck predicted Clinton would win 54.8 percent of the two-party vote, and Wlezien predicted 54.5 percent. When the votes were counted, Clinton's share was 54.7 percent.

How did Wlezien and Lewis-Beck do it? They issued a series of predictions covering a five-point range. The Wlezien forecast touted by the Post used June 1996 data. But in the same journal, Wlezien recalculated with July data, projecting a 56 percent vote share for Clinton. In the fall 1996 Brookings Review, Wlezien pegged Clinton's share at 55.6 percent, while Lewis-Beck pegged it at 53.3 percent. In May 1996, Wlezien predicted Clinton would get 53 percent, and Lewis-Beck put the number at 50.9 percent—four points off target. The Post overlooks the erroneous May 1996 predictions even though the Post itself published them. If at first you don't succeed, keep guessing, because nobody remembers when you get it wrong.

6. Get lucky in your choice of data. Models can generate alternate projections by using data from different sources as well as different months. Wlezien's model incorporates two independent variables: projected income growth and the incumbent's "job approval" rating. For his July 1996 prediction, Wlezien chose a Gallup Poll that found 57 percent of Americans approved of Clinton's job performance. This was the highest Clinton job approval number in any published poll that month. A CBS poll conducted within days of the Gallup Poll found only a 48 percent job approval rating. Factor in the polls' margins of error, and you've got a range from 44 percent to 60 percent for this variable alone.
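
The endpoints follow from ordinary sampling margins. The article states only the resulting range, so the margins below are assumptions: roughly plus or minus 3 points for Gallup and 4 for the CBS poll.

```python
# Assumed margins of error -- the article gives only the 44-60 endpoints.
gallup, gallup_moe = 57, 3    # ~±3 assumed for a typical Gallup sample
cbs, cbs_moe = 48, 4          # ~±4 assumed for the CBS poll

low = min(gallup - gallup_moe, cbs - cbs_moe)    # 44
high = max(gallup + gallup_moe, cbs + cbs_moe)   # 60
print(f"plausible job-approval inputs: {low} to {high} percent")
```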

7. Shrug off your errors. Lewis-Beck is clearly proud that one of his 1996 predictions was almost dead-on. Yet the model he used in Forecasting Elections predicted a Bush victory in 1992. Yale economist Ray Fair picked the wrong winner in 1996 and 1992, even though he's been refining his model since at least 1976 (he got that one wrong, too). Reading about forecasters' track records is like reading Money magazine's stock and mutual fund picks. They remind you of their successes but seldom mention their failures.

8. Tweak the numbers. Behind the scenes, forecasters spend the four years between elections revising their equations. In some cases, they "respecify" their models by finding new independent variables to work with. In other cases, they simply "re-estimate" their equations, changing the weight they attach to each variable. All they're doing is finding a formula that fits the curve of a few data points. If you're allowed to adjust the shape of your curve each time you get a new data point, why should anyone think your formula has any predictive or explanatory value?
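
A few lines of Python make the curve-fitting complaint concrete (synthetic data; no real forecaster's model is being re-estimated here): give the curve a new bend for every new point, and it will always "explain" the past perfectly while its forecasts lurch around.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(6, dtype=float)          # six past "elections"
results = 50 + 5 * rng.normal(size=6)      # outcomes that are pure noise

# "Re-estimating" as the data come in: allow one more bend in the curve
# for each new point, so history is always fit exactly.
for n, deg in [(2, 1), (4, 3), (6, 5)]:
    coef = np.polyfit(years[:n], results[:n], deg)
    max_err = np.abs(np.polyval(coef, years[:n]) - results[:n]).max()
    forecast = np.polyval(coef, float(n))  # extrapolate one step ahead
    print(f"{n} points, degree {deg}: past fit error {max_err:.1e}, "
          f"next 'forecast' {forecast:.0f} percent")
# Each equation explains the past perfectly -- and the forecasts swing wildly.
```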
