
Nate Silver Versus Princeton Professor: Who Has the Right Model?

Nate Silver is skeptical of other people’s predictions. Photo by Astrid Stawiarz/Getty Images for the 2014 Tribeca Film Festival.

This article originally appeared in Business Insider.

FiveThirtyEight’s Nate Silver on Wednesday openly denounced a rival forecaster’s model, writing a lengthy piece describing Princeton University professor Sam Wang’s forecast as “wrong.”

Wang and Silver’s forecasts have diverged significantly on the odds of each party controlling the Senate in November. Wang’s model, which relies solely on available polls of the races, gives Democrats an 80 percent chance of retaining control of the chamber. It has been much more bullish for Democrats than other forecasters’ models, including Silver’s. Silver’s forecast, though, has shifted noticeably in Democrats’ favor over the past few days, and his model now gives Democrats a near-even chance of keeping the Senate.
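A headline number like Wang’s 80 percent or Silver’s near-even odds is built up from individual races. The sketch below is a deliberately simplified, hypothetical version of that roll-up: the per-race probabilities and the seat threshold are invented for illustration, the races are treated as independent, and both forecasters’ real models are considerably more elaborate.

```python
# Toy illustration with made-up numbers: turning race-level win probabilities into a
# single "chance of keeping the Senate." Real forecasts also model correlated polling
# error across states, which this simple version ignores.
from itertools import product

# Hypothetical per-race Democratic win probabilities
race_probs = [0.85, 0.70, 0.60, 0.55, 0.45, 0.40, 0.30]
seats_needed = 4  # hypothetical number of these races Democrats must win to keep control

p_control = 0.0
for outcome in product((0, 1), repeat=len(race_probs)):  # every win/loss combination
    p = 1.0
    for won, p_win in zip(outcome, race_probs):
        p *= p_win if won else (1 - p_win)
    if sum(outcome) >= seats_needed:
        p_control += p

print(f"P(Democrats keep the Senate) = {p_control:.2f}")
```

The disagreement between the two forecasters is less about this arithmetic than about how much uncertainty to attach to each race-level probability in the first place.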

In his post, Silver criticized Wang’s model for leaning entirely on polls while, he argues, substantially understating their uncertainty and thereby overestimating Democratic candidates’ chances of winning:

I don’t like to call out other forecasters by name unless I have something positive to say about them—and we think most of the other models out there are pretty great. But one is in so much perceived disagreement with FiveThirtyEight’s that it requires some attention. That’s the model put together by Sam Wang, an associate professor of molecular biology at Princeton.

That model is wrong—not necessarily because it shows Democrats ahead (ours barely shows any Republican advantage), but because it substantially underestimates the uncertainty associated with polling averages and thereby overestimates the win probabilities for candidates with small leads in the polls. This is because instead of estimating the uncertainty empirically—that is, by looking at how accurate polls or polling averages have been in the past—Wang makes several assumptions about how polls behave that don’t check out against the data.

Silver went on to list examples in which Wang’s model has been wrong: the 2010 Nevada Senate race between Democratic Majority Leader Harry Reid and Republican Sharron Angle, and the battle for control of the House that same year. In both cases, Silver wrote, Wang’s forecast diverged heavily from the actual result. Silver’s forecast also got Nevada wrong, but he argued that Wang’s model put Reid’s chances of winning at roughly 30,000-to-1 against, while his own model made Reid only about a 5-to-1 underdog.
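Silver’s complaint about underestimated uncertainty can be made concrete with a back-of-the-envelope calculation. The sketch below is not either forecaster’s actual code: it assumes the final polling average gave Angle roughly a 3-point lead and that the error in that average is normally distributed, and the two standard deviations shown are illustrative choices that happen to land near the 30,000-to-1 and 5-to-1 figures.

```python
# Toy illustration (not FiveThirtyEight's or the Princeton Election Consortium's
# actual code): how the assumed error on a polling average drives the win
# probability for a candidate with a modest lead.
from math import erf, sqrt

def leader_win_prob(lead_pct, sigma_pct):
    """P(the polling leader wins), assuming the error in the polling average
    is normally distributed with standard deviation sigma_pct."""
    return 0.5 * (1 + erf(lead_pct / (sigma_pct * sqrt(2))))

lead = 3.0  # assume Angle led Reid by roughly 3 points in the final average
for sigma in (0.75, 3.0):  # a very tight assumed error vs. a looser, history-based one
    p_trailing = 1 - leader_win_prob(lead, sigma)
    print(f"sigma = {sigma}: trailing candidate wins with p = {p_trailing:.5f} "
          f"(roughly {round((1 - p_trailing) / p_trailing):,}-to-1 against)")
```

With the tighter error assumption the trailing candidate’s chances all but vanish; with the looser, history-based one he is merely an underdog. That gap is the heart of Silver’s argument for estimating polling error empirically.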

In an email to Business Insider, Wang said there was some “confusion” in Silver’s article. “I do not want to turn this into a shouting match—it’s really unnecessary,” he said. “I would say that specifically in reference to PEC’s predictive model, the same methods when applied to presidential races gave a cliffhanger in 2004 and likely Obama wins in 2008 and 2012.” Wang also noted that both models ended up missing the Nevada Senate race. “And Senate polls are also good. When poll medians are applied to Senate races, they perform slightly better than polls-plus-fundamentals [models like Silver’s]. In 2010, both approaches worked—though they both got the Nevada Senate race wrong,” said Wang.

Ultimately, Wang said, the real divide is between polls-only models like his and those of the Huffington Post and Daily Kos, which rely solely on polling averages, and models like Silver’s and the New York Times’s, which have “said that at the start of 2014, conditions favored the GOP.”

“However, for most of the year, polls have shown that Republicans are slightly underperforming, relative to those expectations,” Wang said. “That’s the real story.”
