The hand counting of ballots in Florida has been hailed by supporters for its increased accuracy and criticized by opponents for its inaccuracies.
But aside from the handicaps of each recount method—the political vulnerabilities of chad and the rigidities of machines—the problem with a hand count may not be that it is inaccurate or accurate, but that it may be too accurate.
This is because of the nature of "digital sampling." We are most familiar with digital sampling from CD players. An analog, smooth wave of sound is "sampled" at regular intervals (44,100 times a second for CDs), and each sample is recorded as a short string of 1s and 0s encoding the height of the sound wave at that instant. This stream of 1s and 0s is meant to be an approximation of the wave.
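The idea can be made concrete with a short sketch: a smooth sine wave is read at discrete instants and each reading is rounded to the nearest representable integer level. The sample rate and 16-bit depth below match the CD standard; the 440 Hz tone and the helper names are invented for illustration, and this is only a minimal model of the process, not a real audio encoder.

```python
import math

SAMPLE_RATE = 44_100   # CD samples per second
BIT_DEPTH = 16         # each CD sample is stored as a 16-bit integer

def sample_wave(freq_hz, duration_s, rate=SAMPLE_RATE):
    """Read a smooth (analog) sine wave at discrete, regular intervals."""
    n = int(rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / rate) for t in range(n)]

def quantize(samples, bits=BIT_DEPTH):
    """Round each continuous sample to the nearest representable level,
    producing the integers whose binary form is the stream of 1s and 0s."""
    levels = 2 ** (bits - 1) - 1   # 32,767 levels on each side of zero
    return [round(s * levels) for s in samples]

wave = sample_wave(440, 0.001)   # 1 ms of a 440 Hz tone: 44 samples
digits = quantize(wave)          # the digital approximation of the wave
```

The smooth wave is gone; all that survives is the list of rounded readings, which is exactly the loss the analogy turns on.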
This is a lot like voting. Each of us carries around hundreds of judgments, opinions, prejudices, loyalties, many of them conflicting. But when we vote, we are being sampled. That jumble of sentiment is turned into a 1 or a 0—a Gore or a Bush (let's assume a two-party system).
The sampling then continues at a higher level. Those 1s and 0s are counted within districts, which may then yield another 1 or 0 for local issues. The Electoral College takes a statewide mass of 1s and 0s, counts them up, and turns them all into another 1 or 0 (I use 1 or 0 just to signify a Bush/Gore vote by the state; I am not weighting those results nationally). Nationally, of course, this happens as well. A single 1 or 0 is determined from sampling millions of pieces of digital data. That digit is our president.
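That chain of reductions, from a jumble of ballots down to one digit per district and then one digit per state, can be mimicked with a toy majority function. The district groupings and tallies below are invented purely for illustration; a real election adds authentication, weighting, and ties, none of which this sketch models.

```python
def winner(ones, zeros):
    """Collapse a tally into a single digit: 1 for one candidate, 0 for the other."""
    return 1 if ones > zeros else 0

# Hypothetical ballots grouped by district: each ballot is already a 1 or a 0.
districts = [
    [1, 1, 0],       # district A
    [0, 0, 1, 0],    # district B
    [1, 0, 1, 1],    # district C
]

# First reduction: each district's pile of ballots becomes one digit.
district_digits = [winner(d.count(1), d.count(0)) for d in districts]

# Second reduction: the district digits become a single statewide digit.
state_digit = winner(district_digits.count(1), district_digits.count(0))
```

Each level of the hierarchy discards everything about the vote except which side of the threshold it fell on, which is the sense in which a single digit ends up standing for millions of ballots.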
How do these digital translations take place? Though voting procedures are determined regionally, they must satisfy federal election law. So there is some uniformity from region to region. But there are also great differences. In some cases there are manual machines, in others paper ballots; at some point there will be Internet voting. But the implicit assumption is that every single one of these methods is fallible. Some are more fallible than others, perhaps, but there is a range of error that is considered acceptable.
Let's say the tolerable range of error is 2 percent. Errors in counting, authentication, absentee balloting, execution, enforcement, etc., may vary from near zero (very rare, perhaps even impossible) to about 2 percent. These various errors are considered commonplace. We generally assume that they can be ignored; they may even cancel each other out across the country. But if a particular district's errors increase, we see a vote as corrupt or invalid. If a district can only guarantee an accuracy of plus or minus 50 percent, for example, it is obviously doing something wrong.
In the case of Florida, there was no such impropriety, but the vote itself was close enough that the range of error made it plausible that the election could have gone the other way. So, a recount was justified just as recounts would have been justified in any other region with similar percentage differences.
But what happens as the scrutiny on a district increases past that recount, as is now taking place? It changes the principles of the election itself. It creates another level of accuracy, one that bears no relation to the rest of the country. It changes the digital sampling rate. It means that the test of accuracy has been changed from, say, an acceptable 2 percent error rate to an acceptable 0.2 percent error rate.
In a close election, this means that a manual recount (which can include holding each ballot up to the light and making minute judgments about chad and punctures) might materially alter the digital result in a single district. But a microscopic recount in a particular district should not swing an entire election unless we can assume that the same level of accuracy, applied uniformly in all districts, would not materially change the results elsewhere. There were so many close local elections this year that such microscopic recounts, uniformly applied across the country, would cause vast swings in results simply because small variations yield digital differences.
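A crude simulation can illustrate the point about error rates. Each ballot is misread with some probability; counted at an ordinary 2 percent error rate, a razor-thin margin is smaller than the counting noise, so repeated tallies can land on either side of it, while a careful 0.2 percent recount shrinks that noise tenfold. Every number below (the district size, the 10-vote margin, the error rates, the seed) is invented for illustration; this is a sketch of the statistical effect, not a model of any actual count.

```python
import random

def noisy_count(ballots, error_rate, rng):
    """Tally 1-votes, misreading each ballot with probability error_rate."""
    total = 0
    for b in ballots:
        if rng.random() < error_rate:
            b = 1 - b   # a misread chad, a jammed machine, a tired counter
        total += b
    return total

# A hypothetical razor-thin district: 5,005 votes for 1, 4,995 for 0.
ballots = [1] * 5005 + [0] * 4995
rng = random.Random(0)   # fixed seed so the sketch is reproducible

# One hundred tallies at the ordinary error rate: the 10-vote true margin
# is buried in counting noise, so the reported winner can go either way.
ordinary = [noisy_count(ballots, 0.02, rng) for _ in range(100)]

# One hundred tallies at the recount's tighter error rate: the tallies
# cluster much more closely around the true count of 5,005.
careful = [noisy_count(ballots, 0.002, rng) for _ in range(100)]
```

The careful tallies spread over a far narrower range than the ordinary ones, which is precisely why applying that extra accuracy to one district alone changes what the election is measuring there.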