
Boycott the BCS!

A statistical analyst takes a stand against college football's perverse, irrational Bowl Championship Series.


This is reflected in the fact that the rankings are routinely described as "computer" rankings. Computers, like automobiles and airplanes, do only what people tell them to do. If you're driving to Cleveland and you get lost and wind up in Youngstown, you don't blame your car. If you're doing a ranking system and you wind up with Murray State in western Kentucky as the national football champion, you don't blame the computer.

There are several things that a ranking system could do. It could rank teams based on their accomplishments over the course of the season—whom they played and whom they beat—or it could rank them based on the probability that they would win against a given opponent. It could rank teams based on how they have played over the course of the season, including perhaps in some early-season games against teams that were not quite sure who their quarterback was, or it could rank them based on how strong they are at the end of the season. It could rank the teams based on consistency, or it could rank them based on dominance.


Which of these is the goal of the BCS system?

Nobody has any idea. It's never been debated. There is a perception among the people who are in charge of this monkey that if you just turn the rankings over to a computer, the computer will figure those things out. The reality is that it can't. It is very difficult to objectively measure anything if you don't know what it is you are measuring.

2. There is no genuine interest here in using statistical analysis to figure out how the teams compare with one another. The real purpose is to create some gobbledygook math to endorse the coaches' and sportswriters' vote.

Throughout the 11 years of the BCS, whenever the "computer" rankings have diverged markedly from the polls, the consensus reaction has been, "We have to do something about those computers." And they have: whenever the computer rankings don't jibe with the "human polls," they fix the computers. In 2000, the computers didn't pick Miami as one of the top two teams. The coaches and sportswriters thought Miami should have been there, so they changed the computer system.

In 2001, according to Stern, "the BCS selected once-beaten Nebraska over once-beaten Oregon despite the fact that Nebraska had lost badly in their last regular season game. Popular perception this time was that the computer ratings paid too much attention to the large margin of victory in Nebraska's early season triumphs while not putting enough value on Oregon's steady but unspectacular performances." What did they do? Fix the computers. In 2003, the computer rankings once more disagreed with the coaches' and the fans' and the writers' perceptions, and so, once more, the computer rankings were fixed to prevent a recurrence of whatever the problem was.

3. The ground rules of the calculations are irrational and prevent the statisticians from making any meaningful contribution.

One of the polls used in the BCS system is the Peter Wolfe rankings—no Prokofiev jokes, please. According to Wolfe's Web site, "A significant but hard-to-measure factor in comparing teams is sportsmanship. Running up the score is generally looked on as evidence of bad sportsmanship, behavior which should not be encouraged or rewarded. With this in mind, the BCS has chosen computer systems that use only won/loss data (and not scoring margin) to compute ratings. We have developed such a system that provides reasonable results."

I don't question that Wolfe is a good man doing the best he can within the BCS strictures, but this is childish pablum. The prohibition against using point differentials to rank teams, of course, dates from the Nebraska-in-2001 experience, when those dirty Cornhuskers beat Troy State, Rice, Missouri, Iowa State, Baylor, and Kansas all by 28 points or more. The BCS reacted to this by requiring the computer rankings to treat a 56-7 victory the same as a 20-17 contest.

This is very much like a situation in which a surgeon leaves a scalpel in a patient, and the hospital reacts by prohibiting surgeons from using scalpels. I understand that the point of the game is to win, not to score as many points as possible, and I certainly can understand football coaches saying, "We want a system that emphasizes winning and diminishes the importance of the score." That's reasonable.
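Here is a crude sketch of the distinction, with made-up scores and a cap I picked arbitrarily; it is not any formula the BCS actually uses. Under the won/loss-only rule, a 49-point rout and a 3-point escape earn identical credit, while a capped-margin rating rewards the win itself without paying anything extra for running up the score past a few touchdowns.

```python
# A minimal sketch (not any actual BCS formula) of what the
# won/loss-only constraint throws away. Scores are hypothetical.

games = [
    ("Nebraska", 56, "Troy State", 7),   # a 49-point rout
    ("Oregon",   20, "Stanford",  17),   # a 3-point squeaker
]

def wl_credit(pts_for, pts_against):
    """BCS-style: only the result counts, so 56-7 equals 20-17."""
    return 1.0 if pts_for > pts_against else 0.0

def margin_credit(pts_for, pts_against, cap=21):
    """Margin-aware: credit scales with point differential, capped
    so margins beyond `cap` points earn nothing extra."""
    margin = max(min(pts_for - pts_against, cap), -cap)
    return 0.5 + 0.5 * margin / cap

for team, pf, opp, pa in games:
    print(f"{team} over {opp}, {pf}-{pa}: "
          f"won/loss credit = {wl_credit(pf, pa):.2f}, "
          f"capped-margin credit = {margin_credit(pf, pa):.2f}")
```

The 21-point cap is an arbitrary illustrative choice; the point is that ignoring the score entirely is not the only way, or even a sensible way, to keep a ranking from rewarding bad sportsmanship.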

But saying, "We're not going to pay any attention to the score of the game, and, by the way, you can't pay any attention to whether it is a home game or a road game, either"—that's just stupid, Gomer. For football coaches to impose a rule like that on the statistical analysts is very much like the AD, frustrated by seeing long passes intercepted, telling the football coach he can't throw passes longer than 10 yards.

Look, guys, none of us are claiming that the statistical analysts understand the game of football as well as the football coaches do, or that our analysis should take precedence over the informed opinions of experts. I'm not saying that at all.

But at the same time, statistical analysts are professional people. Heck, some of us are almost as smart as football coaches—high-school football coaches, anyway. There is no point in our participating in the process if you're going to tell us how to do the analysis based on your ignorant, backward-looking prejudices. Run your own damned computers.
