
Can We Believe the Polls?

The opinion polls tell us that the presidential race is “too close to call.” But over the last few weeks, they have also shown wide swings in each candidate’s standing. Journalists use these changes to write about “rallies,” “breakthroughs,” “stumbles,” and “critical turning points.” Maybe so; perhaps the swings can be explained by speeches, debates, and policies. But no one really knows whether these causes are real or imaginary. In fact, no one can even be certain that the race is too close to call.

In mid-September, a Newsweek poll showed Al Gore ahead by 14 points; a week later, the same poll showed him ahead by only 2 points. Did the voters really change that much in one week? I doubt it.

The reason for this uncertainty is that polling is as much art as science. Recently, VitalSTATS, the newsletter of the Statistical Assessment Service, a nonprofit organization that tries to help journalists make sense of numbers, issued a special report on the polls. (You can look up the newsletter at www.stats.org.)

The last survey taken by any of the major national opinion pollsters is usually fairly close to the actual election results, albeit with some big exceptions to be noted in a moment. But before that final week, the polls are often wrong by a wide margin.

One reason is that pollsters must choose between surveying registered voters and surveying likely voters. Studying the first group does not make much sense, since a large number of people who are registered don’t bother to vote. Studying the second group makes sense, provided you know who they are. On Oct. 19, 1996, the Gallup/CNN/USA Today poll showed President Clinton leading Bob Dole by 18 points. A week later, the same poll found Clinton leading by only 1 point.

The reason for the change is that the poll on Oct. 19 surveyed registered voters, while the one on Oct. 26 surveyed likely voters. But neither survey got the result right: in the election, Clinton beat Dole by 6 points. It is obvious why surveying registered voters makes little sense; many of them will never cast a ballot. But surveying likely voters is hard too, because no one knows until Election Day who those likely voters actually are.

Pollsters use a variety of screening questions to try to make a reasonable guess. These questions are often a proprietary secret, and there are hardly any published scientific studies that show which questions really work. But we do know a few things. One is that young and low-income voters often have very low turnouts. Perhaps the pollsters know how to take this into account.
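To make the idea concrete, here is a minimal sketch of how such a screen might work. The questions, the scoring, and the cutoff are invented for illustration; real pollsters’ screens are proprietary and surely more elaborate.

```python
# Hypothetical likely-voter screen. The indicators and the cutoff are
# invented for illustration; real screens are proprietary secrets.

def likely_voter_score(respondent):
    """Count how many (invented) turnout indicators a respondent hits."""
    indicators = ("voted_last_election", "knows_polling_place",
                  "follows_campaign_closely", "says_certain_to_vote")
    return sum(1 for key in indicators if respondent.get(key))

def is_likely_voter(respondent, cutoff=3):
    """Treat anyone at or above the (arbitrary) cutoff as a likely voter."""
    return likely_voter_score(respondent) >= cutoff

# A registered voter who skipped the last election and is not following
# the campaign scores 2 of 4 and is screened out.
respondent = {"voted_last_election": False,
              "knows_polling_place": True,
              "follows_campaign_closely": False,
              "says_certain_to_vote": True}
print(is_likely_voter(respondent))  # False
```

The hard part, of course, is whether indicators like these actually predict turnout; as noted above, almost no published studies tell us which questions work.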

But suppose they don’t. Then they run the risk of counting as likely voters people who will in fact stay home. And since the young and the low-income are more likely to vote Democratic than Republican, the polls will overestimate the Democratic vote and underestimate the Republican one. In 1980, the last Gallup Poll published just before the election showed Reagan with 47 percent of the vote. In fact he got 51 percent.
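The arithmetic behind that risk is simple. Suppose, with numbers invented purely for illustration, that actual voters split 50-50 but a tenth of the “likely voter” pool will stay home and favors the Democrat 60-40:

```python
# Invented numbers: how counting stay-at-homes as likely voters tilts
# a poll. Actual voters split 50-50; 10% of the "likely voter" pool
# will stay home and favors the Democrat 60-40.
voter_share, stay_home_share = 0.90, 0.10
dem_among_voters, dem_among_stay_home = 0.50, 0.60

poll_dem = (voter_share * dem_among_voters
            + stay_home_share * dem_among_stay_home)
print(f"Poll shows the Democrat at {poll_dem:.0%}")  # 51% in a dead-even race
```

A dead-even race thus polls as a 2-point Democratic lead, before any other source of error is counted.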

Matters were even worse with polls published in early October. In 1988, they showed the senior George Bush with 49 percent of the vote; on Election Day he won 54 percent. In 1992, the October polls showed Clinton with 52 percent of the vote; in November he received 43 percent. Same story in 1996: In October, the polls said Clinton had 55 percent of the vote; he actually got 49 percent.

No one knows whether the current polls also underestimate the Republican vote. But if they do, as has been common in the past, Bush leads Gore.

There is another, more recent problem with polling. Thirty or 40 years ago, there were only two big polling organizations, Gallup and Harris. And when they called you at home, it was a big deal. Today there are a dozen pollsters, and their calls are interspersed amid annoying telemarketing calls from fund-raisers, carpet washers, aluminum-siding salesmen, and guys who clean out your heating ducts. When I get any of these calls, I slam down the phone. So do millions of others.

As a result, the response rate for pollsters has fallen sharply. Today, 60 percent or more of the people called by a pollster refuse to answer the questions. This wouldn’t be a problem if those who refused were a random sample of those called, but they are not.

What should a pollster do when somebody refuses to talk? All anyone can do is call someone else. To get 2,000 answers, a pollster has to call 31,000 people, a response rate of about 6 percent. Are the 2,000 who answer as representative of the public as the 31,000 who were called? We don’t really know. But pollster John Zogby, who has done well at predicting elections, said in a recent radio interview that Republicans are more likely than Democrats to slam down the phone. If that is true, then a low response rate also overestimates the Democratic vote.
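A toy simulation shows how differential refusal skews a sample. The refusal rates below are invented; the mechanism, not the exact numbers, is the point.

```python
import random

random.seed(0)

# Invented rates: the electorate is split 50-50, but Republicans hang
# up more often than Democrats (70% vs. 55% refusal).
REFUSAL = {"D": 0.55, "R": 0.70}

def dial_until(n_answers, max_calls=100_000):
    """Dial random voters until n_answers of them agree to talk."""
    answers = []
    for _ in range(max_calls):
        voter = random.choice("DR")           # 50-50 electorate
        if random.random() > REFUSAL[voter]:  # voter agrees to answer
            answers.append(voter)
            if len(answers) == n_answers:
                break
    return answers

sample = dial_until(2_000)
dem_share = sample.count("D") / len(sample)
print(f"Democratic share of the sample: {dem_share:.1%}")  # ~60%
```

With those made-up rates, a dead-even electorate yields a sample that is about 60 percent Democratic, and no amount of honest tabulation afterward will fix it.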

Today the press always reports the “sampling error,” a figure meaning that in 95 cases out of 100, the poll’s answer is within (depending on the sample size) plus or minus about 3 percentage points of the true one. That is a useful fact, but only if the poll is based on a representative sample of Americans. For the reasons I have indicated, we don’t know that it is, at least not until just before Election Day, when the likely voter is easier to identify.
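For readers who want the formula: as a sketch, the 95 percent margin of error for a proportion estimated from a simple random sample of n respondents is about 1.96 times the square root of p(1-p)/n, which is where the familiar “plus or minus 3 points” for a 1,000-person poll comes from. Note what it assumes: a genuinely random sample.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    Textbook formula z * sqrt(p * (1 - p) / n); p = 0.5 is the worst
    case and is what poll write-ups conventionally report. The formula
    assumes the sample really is random.
    """
    return z * sqrt(p * (1 - p) / n)

for n in (600, 1_000, 2_000):
    print(f"n={n:>5}: +/- {margin_of_error(n):.1%}")
# n=  600: +/- 4.0%
# n= 1000: +/- 3.1%
# n= 2000: +/- 2.2%
```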

But the press doesn’t tell you that. Instead it tells you about “surges,” “stumbles,” and “critical turning points.” These things may exist, but you can’t prove it by the polls.