Silver Medal

Obama’s big win does not mean Nate Silver is a towering electoral genius.

[Photo: Nate Silver. Credit: Randy Stewart from Seattle, WA, USA]

It’s well after midnight on the East Coast, and the results are in: Nate Silver has won the 2012 presidential election by a landslide. His magic formula for predictions, much maligned in some corners in recent weeks, appears to have hit the mark in every state—a perfect 50 green M&Ms for accuracy. Now my Twitter feed is blowing up with announcements of his coronation as the Emperor of Math and the ruler of the punditocracy. Wait—it was even more than that, they say: a victory for blogging, and also one for rational thought. He proved the haters wrong! He proved science right! Is this guy getting lucky tonight or what?

But all these stats triumphalists have it wrong. Nate Silver didn’t nail it; the pollsters did. The vaunted Silver “picks”—the ones that scored a perfect record on Election Day—were derived from averaged state-wide data. According to the final tallies from FiveThirtyEight, Obama led by 1.3 points in Virginia, 3.6 in Ohio, 3.6 in Nevada, and 1.9 in Colorado. He won all those states, just like he won every other state in which he’d led in averaged, state-wide polls. That doesn’t mean that Silver’s magic model works. It means that polling works, assuming that its methodology is sound, and that it’s done repeatedly.

To be fair, the art of averaging isn’t simple. You can’t just comb through Google News and add up all the numbers that you find; to produce a useful average, Silver judges which polls to include in his analysis and then weights them according to his perception of their quality. He may be very good at this, but other stat-head pundits do more or less the same, and their averages match up accordingly. Yes, Silver had Obama in the lead, but so did RealClearPolitics and Talking Points Memo; where he had Romney, so did they. “Our state-by-state forecasts are extremely similar to those issued by our competitors,” he wrote in a post two weeks ago, entitled “State Poll Averages Usually Call Election Right.” Those numbers nailed it in 50 out of 50 states, as they often do.
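
To make the mechanics concrete, here is a minimal sketch of a weighted polling average in Python. The margins and weights are invented for illustration; Silver’s actual ratings and adjustments are more involved.

```python
# A weighted polling average with invented numbers. The weights stand in
# for a judgment about each poll's quality; they are not Silver's ratings.
polls = [
    # (Obama's margin in points, weight reflecting perceived quality)
    (1.0, 1.0),
    (2.0, 0.6),
    (-1.0, 0.4),
]

total_weight = sum(weight for _, weight in polls)
weighted_margin = sum(margin * weight for margin, weight in polls) / total_weight
print(f"Weighted average margin: {weighted_margin:+.1f} points")  # +0.9 points
```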

So picking winners state by state was the easy part. Anyone who glanced at the numbers would have made the same projections. But Silver’s model promised more than that: He offered assessments of his confidence in each state’s results. The fact that Obama led in Ohio polls made it obvious that he should be the favorite, but what if those Ohio polls were wrong? How much risk was there in trusting state-wide averages? This was Silver’s nifty contribution: He assigned that risk a probability by looking at some other factors, such as polling trends and local demographics. Take the example of Virginia, where Obama led by 1.3 percentage points. Picking him to win the state was a no-brainer since he was leading in the polls, but Silver used his secret sauce to calculate the chances that those polls were wrong. According to his calculations, the risk was 21 percent, meaning that Obama’s odds of winning the state were roughly 4-to-1.
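
The odds arithmetic is easy to check: a 21 percent risk of the polls being wrong leaves a 79 percent chance of winning, and 79-to-21 works out to just under 4-to-1.

```python
# Converting the Virginia figures from the text into odds.
p_win = 0.79        # Silver's stated chance that Obama carries Virginia
p_lose = 1 - p_win  # the 21 percent risk that the polls are wrong
print(f"{p_win / p_lose:.2f}-to-1")  # 3.76-to-1, i.e. roughly 4-to-1
```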

What do the day’s returns tell us about the accuracy of Silver’s model? Nothing much. The fact that Obama won Virginia looks good for averaged polling—indeed, his margin appears to be a couple points, not far off from what was predicted—but we’ll never know about that other part. Did Obama really have a 79 percent chance of winning? To get a sense of that, we’d need to run yesterday’s election like a lab experiment, doing it 10,000 times to see how often Obama wins. Since that can’t happen, we’re left to scratch our heads.
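That thought experiment is easy enough to sketch, if you’re willing to assume the polling error looks like random noise. In the toy simulation below, the size of that error is reverse-engineered so the answer lands near Silver’s 79 percent; it’s an assumption for illustration, not a number from his model.

```python
import random

# Thought experiment: rerun the Virginia election 10,000 times, drawing a
# "true" margin around the 1.3-point polling average each time. POLL_ERROR
# is an assumed standard deviation of polling error, chosen so the answer
# lands near 79 percent; it is not a value from Silver's model.
random.seed(0)
TRIALS = 10_000
AVG_MARGIN = 1.3
POLL_ERROR = 1.6

wins = sum(random.gauss(AVG_MARGIN, POLL_ERROR) > 0 for _ in range(TRIALS))
print(f"Obama wins {wins / TRIALS:.0%} of simulated elections")  # roughly 79%
```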

You could even make a case that Silver’s estimates were off. His averages had Republican Senate candidate Rick Berg up by 3.9 in North Dakota, enough to give him a 92 percent chance of winning. Now it looks like he could lose to Democrat Heidi Heitkamp. In Nevada, Silver gave Republican Dean Heller an 83 percent chance of winning based on a 4.7-point lead in the averaged polls. As of now, Heller’s up by just 12,000 votes.

But Montana is the most telling case: According to Silver’s polling average, the Democratic candidate, Jon Tester, led by 1.4 points. Yet Silver’s model, which uses “state fundamentals” among other factors, guessed the polls were wrong and gave his opponent, Denny Rehberg, a 66 percent chance of winning. So far as I can tell, this was the only contest in which the magic model went against the averaged polls. Guess what? The projections might have been a little off: Tester has a 5-point lead.

It’s possible that Silver’s predictions were weaker than normal in North Dakota and Montana because polling in those states was sparse. Again, the polls are more important than the poll watcher—good polling yields good predictions. In the future, I’ll be curious to see how Silver’s model does in cases like Montana, where it picks the polling underdog. Does his secret sauce yield some unexpected scores—winners who surprise the pundits and the pollsters—or does it just distract us from the obvious?
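
There’s a conventional yardstick for that kind of comparison: the Brier score, the average squared gap between a stated probability and what actually happened, where lower is better. As a rough illustration (not a verdict on the model), here’s how it treats the three Senate races above.

```python
# Brier score over the three Senate forecasts above: the mean squared gap
# between the stated probability and the outcome (1 = favorite won). The
# outcomes reflect the returns as described here, not certified results.
forecasts = [
    # (race, favorite's win probability, did the favorite win?)
    ("ND Senate: Berg",    0.92, False),
    ("NV Senate: Heller",  0.83, True),
    ("MT Senate: Rehberg", 0.66, False),
]

brier = sum((p - won) ** 2 for _, p, won in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # about 0.437; always guessing 50-50 scores 0.250
```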

Silver lovers aren’t waiting for these comparisons. They’re riding high on victory, and giving credit to the bearer of good news. In doing so, they’ve made the same mistake that Silver’s critics made last week: They’ve confused his projected odds with hard-and-fast predictions, and underestimated the accuracy of polling. The fact that Obama won doesn’t make Nate Silver right, any more than a Romney win would have made him wrong.