Readme

Democracy Is Approximate: Live With It

If George W. Bush is sworn in as president even though Al Gore got more votes, Gore voters (including me) will be disappointed. But should we also feel aggrieved? Is it unjust? The truth is, not very.

The Electoral College used to drive me batty. It’s so irrational! No one seriously tries to defend it as an exercise in representative government, as if the voters were actually choosing someone else to choose the president for them. The modern case for the Electoral College is that it will lead to an unambiguous result. Even a very close popular vote is likely to produce a solid winner and loser after it’s put through this 18th-century contraption. But if arbitrary clarity is what you want, why not just flip a coin whenever the vote is close? Stripped of arcane folderol, that’s what the Electoral College amounts to. Democracy means majority rule, I thought. Any system that can hand victory to the popular-vote loser is perverse and indefensible.

This time around, the Electoral College delivered on its flaw but not its virtue. It spectacularly failed to banish ambiguity and may well crown the man who got fewer votes. And yet the strange events of the 2000 election make this eccentric arrangement easier to swallow. How? By demonstrating that any perfect measure of the people’s will is impossible. Democracy is an approximation, and the Electoral College is probably no more approximate than any other arrangement. Consider some of the flaws in voting itself.

To start, the basic concept of majority rule is arbitrary and unfair. The only perfectly fair voting system is one where all voters get their first choices. In a restaurant, we each simply order what we want for dinner, and there is no need to impose one person’s choice on anyone else. But we can’t each have our own president. Majority rule is a second-best solution: Fewer than half the voters are denied their preferences. But “rule by three-quarters” or “rule by everybody” (i.e., requiring unanimity) would thwart the wishes of even fewer people. The only advantage of majority rule over these higher standards of agreement is that a simple majority is easier to achieve. In other words, majority rule is a practical but imperfect mechanism for turning the multiple preferences of millions of people into a clear collective decision. Sort of like the Electoral College.

In practice, furthermore, so-called “majority rule” rarely gives a majority of voters their first choices. One reason is that what we really have is “plurality rule.” In the past three presidential elections, no candidate got a majority of the vote. Most voters’ preferences were frustrated. Either Gore or Bush will actually become president with more popular endorsement than Bill Clinton enjoyed. And Election Day is just the last stage of the selection process. As always, millions voted for Gore and Bush only after their real first-choice candidates lost in the primaries. There is a famous scholarly proof (Kenneth Arrow’s impossibility theorem) that no voting system can turn multiple individual preferences into a coherent collective choice without violating some basic standard of fairness, let alone give most people their first choices.

There was an interesting debate after Nov. 7 about letting contested counties or states vote again. The fairly easy consensus that this would be a bad idea exposes another inherent limit of electoral democracy: An election only reflects voters’ opinions at a particular, arbitrary moment. That moment is now gone. The result last week turned on an infinity of factors that cannot be duplicated—the weather, the last TV commercial people happened to see, what they had for lunch. It turned on their not knowing how important their vote would be. None of this can be recreated. And yet there is no inherent reason that voters’ preferences on Nov. 7 are more valid than their preferences a couple of weeks later. If anything, the opposite is true: The more recent the sounding, the more likely it is to reflect current opinions. The case against a new election is strong: There are rules, and there has got to be closure. But these necessary practical considerations make an election less accurate, not more so, as an expression of the popular will.

Finally, we’ve learned striking things in recent days about the limits of elections at even the most mundane level of accurately eliciting voters’ current preferences and adding them up. Do you suppose that the elderly citizens of Palm Beach County are the only ones who misunderstood the ballot or marked it incorrectly? Or that Florida is the only state where—with no evidence of fraud—the vote count is off by a few hundred or a few thousand? The sum total of these errors nationwide is surely greater than the official margin between Bush and Gore.

Both in theory and in practice, then, elections simply cannot measure the popular will as accurately as last week’s results seem to demand. One way or another, it comes down to a coin flip. The Electoral College is not a bad way to flip it. It’s here. It’s got a comforting patina of age. And at least it limits the second-guessing to one or two states.

And how unfair is it? No matter how this melodrama ends, the presidential preferences of some 47 million people will be thwarted. That’s the tragedy of democracy, and this built-in unfairness is true of all elections under any set of rules. The preferences of those who voted for the apparent popular-vote loser are just as valid as the preferences of those who voted for the apparent popular-vote winner. It’s simply that there are fewer of the former. So even if the Electoral College ends up satisfying the loser’s supporters and thwarting the winner’s, how much unfairness does that imply? In an election this close, the answer is “not much.” And in any election close enough for the Electoral College to change the result, no other process is likely to be any fairer.