The Red Sox, as the whole world knows, won the World Series in commanding fashion by dispatching the St. Louis Cardinals in four straight games. Few people would dispute that the Red Sox are the best team in baseball. But is the best-four-out-of-seven World Series really the best way to figure that out?
A mathematician views the World Series as an algorithm—that is, a formal procedure for solving a problem. In this case, the problem is to determine which of two baseball teams is better. The World Series algorithm goes like this: "Play until either the Red Sox or the Cardinals win four games. Once one of the teams wins four games, conclude that they are the better team."
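The quoted procedure is literally a loop. Here is a minimal Python sketch, assuming each game is an independent coin flip; the function name is ours, and the 0.55 default anticipates the per-game probability used in the computation later in this piece:

```python
import random

def world_series(p_red_sox_wins_game=0.55, wins_needed=4):
    """The World Series as an algorithm: play until one team has
    `wins_needed` wins, then declare that team the better one."""
    red_sox = cardinals = 0
    while red_sox < wins_needed and cardinals < wins_needed:
        if random.random() < p_red_sox_wins_game:
            red_sox += 1
        else:
            cardinals += 1
    return "Red Sox" if red_sox == wins_needed else "Cardinals"
```

Note what the algorithm does not do: it never asks how lopsided the games were, only who got to four wins first.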
If the World Series were "best 51 out of 101" rather than "best four out of seven," the better team would win almost all the time; that's the way probabilities shake out in the long run. But a hundred-game World Series would test the patience of even the most die-hard baseball junkies. Ideally, we'd like an algorithm that
- has a very high chance of picking the better team, and
- picks a winner in as short a time as possible.
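The first claim, that a best-51-of-101 series would almost always crown the better team, can be made exact. A sketch, assuming each game is an independent coin flip with a fixed probability p of the better team winning (the same simplification used in the computation later in this piece):

```python
from math import comb

def p_better_team_wins(p, wins_needed):
    """Exact chance that the team winning each game independently with
    probability p takes a race to `wins_needed` wins."""
    q = 1 - p
    # The winner clinches in game wins_needed + j after dropping
    # exactly j games, for j = 0 .. wins_needed - 1.
    return sum(comb(wins_needed - 1 + j, j) * p**wins_needed * q**j
               for j in range(wins_needed))

print(round(p_better_team_wins(0.55, 4), 3))   # best four of seven: 0.608
print(round(p_better_team_wins(0.55, 51), 3))  # best 51 of 101: much closer to 1
```

Lengthening the series buys accuracy, but only by playing many more games, which is precisely the trade-off at issue.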
This is the same fundamental trade-off that designers of clinical trials face. A study involving a huge number of patients is much more reliable than a smaller trial but also takes longer and costs more.
It's hard to meet both goals at once. Some sports, like football, prioritize the second criterion by holding a one-game-for-all-the-marbles championship. Others, like tennis, emphasize the first: The requirement that you have to win a game by two points makes it harder for the weaker player to luck out. The problem is that, in theory, a game can go on forever.
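To see why the win-by-two rule amplifies a small edge, condition on pairs of points starting from a tie: one player pulls two ahead with probability p², the other with q², and any split returns things to where they started. A sketch, with the 0.55 figure borrowed from this article's running example:

```python
def p_win_from_deuce(p):
    """Win-by-two, starting from a tie: play points in pairs. One player
    goes two up with probability p*p, the other with q*q; a split returns
    to the tie, so condition on the decisive pairs."""
    q = 1 - p
    return p * p / (p * p + q * q)

print(round(p_win_from_deuce(0.55), 3))  # 0.599: a 55 percent edge grows
```

Since each pair produces another tie with probability 2pq, there is no fixed bound on how long play lasts, which is exactly the can-go-on-forever problem.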
The World Series falls somewhere in between the football model and the tennis model. But is it possible to do even better than the current World Series algorithm? That is, can we construct a procedure that both picks the better team more often and, on average, crowns a winner in fewer games?
Yes, we can. Suppose we play an "Alternate World Series" that ends when a team is up 3-0, 4-1, 4-2, 5-3, or 5-4. The AWS turns out to be shorter, on average, than the standard World Series; the chance of the series going eight or nine games is balanced out by the possibility of a 3-0 skunking. What's more, the AWS has a slightly better chance of selecting the stronger team than does the real World Series. That makes sense: with apologies to the 2004 Red Sox, a series in which one team goes up three games to none is pretty much a blowout, and we lose little by pulling the plug. For every comeback like the 2004 ALCS, there are many more series like the 2004 World Series, in which the team that takes a 3-0 lead finishes the job. On the other hand, the AWS lets us see at least two more games if the series reaches 3-3, a circumstance where we might really benefit from a few more results.
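Both claims about the AWS can be checked exactly with a short recursion over scores, rather than by listing every case. A sketch under the same fixed-p assumption; `analyze` is a hypothetical helper of ours, not anything official:

```python
from functools import lru_cache

def analyze(stops, p=0.55):
    """Exact (win probability for the better team, expected series length)
    for a series that ends at any score in `stops`; scores are given as
    (leader, trailer) pairs, and mirror scores are added automatically."""
    stop_set = set(stops) | {(b, a) for (a, b) in stops}

    @lru_cache(maxsize=None)
    def visit(a, b):
        # From score a-b: (chance the better team ends up winning,
        # expected number of games still to be played).
        if (a, b) in stop_set:
            return (1.0 if a > b else 0.0, 0.0)
        win_w, win_g = visit(a + 1, b)    # better team takes the next game
        lose_w, lose_g = visit(a, b + 1)  # better team drops the next game
        return (p * win_w + (1 - p) * lose_w,
                1 + p * win_g + (1 - p) * lose_g)

    return visit(0, 0)

standard = analyze([(4, 0), (4, 1), (4, 2), (4, 3)])       # real World Series
aws = analyze([(3, 0), (4, 1), (4, 2), (5, 3), (5, 4)])    # Alternate World Series
print(standard)
print(aws)
```

Running it lets you compare the two stopping rules' accuracy and average length directly, at p = 0.55 or any other value.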
Now, let's see a bit of the computation hiding behind the statements we just made. Suppose the chance that the Red Sox will beat the Cardinals in any given game is p. (For the moment, we ignore the difference between Pedro Martinez and Tim Wakefield and take p to be the same for each game.) If the Red Sox are better than the Cardinals, p will be bigger than 1/2; otherwise, p is smaller than 1/2. Let's suppose p = .55, which is to say the Red Sox are quite a bit better than the Cardinals. After one real World Series game, there's a 55 percent chance the Red Sox are up 1-0 and a 45 percent chance the Cardinals are up 1-0. The chance the Red Sox will sweep the first two games is .55 x .55 = .3025, while the chance the Cardinals go up 2-0 is .45 x .45 = .2025. The Series will be 1-1 if the Red Sox go up 1-0 and the Cardinals win Game 2 (probability .55 x .45), or the Cardinals go up 1-0 and the Red Sox respond with a win (probability .45 x .55). So, we get .55 x .45 + .45 x .55 = .495 as the probability that the series stands at 1-1 after two games.
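The two-game arithmetic above can be checked mechanically in a few lines (the variable names are ours):

```python
p, q = 0.55, 0.45   # per-game win chances for the Red Sox and the Cardinals

p_sox_up_2_0   = p * p          # .55 x .55 = 0.3025
p_cards_up_2_0 = q * q          # .45 x .45 = 0.2025
p_tied_1_1     = p * q + q * p  # the two orders of a split = 0.495

# The three outcomes are exhaustive, so the probabilities sum to 1.
assert abs(p_sox_up_2_0 + p_cards_up_2_0 + p_tied_1_1 - 1.0) < 1e-12
```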
Continuing this type of analysis, we find that the probabilities break down as follows: