Polls and Bowls
Why college football still can't get it right.
It used to be so simple. Beginning in 1936, the Associated Press would poll a group of leading sportswriters and broadcasters on who were the best teams in college football; United Press International, starting in 1950, would ask the same of the coaches. At the end of the season, the team with the most points was the national champion—that is, the unofficial national champion, since there was no provision by the NCAA for a champion in college football. The bowl games weren't set up to pick a champion; for the most part, they were simply considered postseason rewards. It wasn't until the 1970s that both AP and UPI finally agreed to postpone their final votes until after the bowls.
The whole system—to the degree that you could call it a system—didn't exist to resolve controversy but to cause it. As AP guru Alan J. Gould said in an interview, "It was a case of thinking up ideas to develop interest and controversy between football Saturdays. Papers wanted material to fill space between games. That's all I had in mind, something to keep the pot going. Sports then was living off controversy, opinion, whatever. This was just another exercise in hoopla." And one that worked very well. That the season sometimes ended in controversy, with fans of schools in different parts of the country all arguing about who should be No. 1, didn't detract from the interest. It enhanced it.
Now we have the Bowl Championship Series, the inevitable result of, in the words of former coach Bill Curry, "Trying to fix something that wasn't broken in the first place."
Real college football fans never complained about the lack of a playoff system. They might have complained now and then when their team didn't win, but that's a different matter. The ones who harped on the need for a college football playoff were mostly in the northeastern media, writers who saw college football as an appendage to the pro game and couldn't understand why all the hicks in Norman, Tuscaloosa, South Bend, and College Station couldn't see it that way, too.
So, now we have the BCS, whose major contribution to college football has been to erode interest in all bowl games except the one that has the championship, while much of the old controversy remains intact. Last year, the BCS screwed up by devising a system that kept perhaps the best team in the country, the University of Southern California Trojans, out of the title game. Southern Cal had finished the regular season with a 10-2 record against one of the toughest schedules in college football history.
Last year I ranked teams for my column in the Wall Street Journal with Professor George Ignatin of the University of Alabama at Birmingham and our computer, Mad Max. Going into the bowl games, Max gave the Trojans an edge of nearly 10 points over eventual national champion Ohio State, even though Ohio State was unbeaten. Why? Simple: Southern Cal had faced nine teams whose collective won-lost record, when not playing Southern Cal, was 75-26. The Trojans lost to two of them, both on the road—to Kansas State by seven points and to Washington State by three in overtime. Southern Cal beat the other seven bowl-worthy teams by a staggering total of 175 points—25 points per game.
Yet the Trojans got no credit from the BCS for their margin of victory, an absolutely absurd proposition when you consider that Team A and Team B could play the same schedule with the same won-lost record, Team A winning each game by one point and Team B winning each game by 21 points, and, by BCS logic, earn exactly the same ranking.
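To make the absurdity concrete, here's a toy version of that two-team thought experiment (the teams, records, and margins are hypothetical, obviously):

```python
# Two hypothetical teams, both 10-0 against the same schedule:
# Team A wins every game by 1 point, Team B wins every game by 21.
team_a = [1] * 10
team_b = [21] * 10

def record_only_rating(margins):
    """How the BCS sees it: a win is a win, margin of victory ignored."""
    return sum(1 for m in margins if m > 0)

def margin_aware_rating(margins):
    """How a power rating sees it: average scoring margin matters."""
    return sum(margins) / len(margins)

print(record_only_rating(team_a), record_only_rating(team_b))    # 10 10
print(margin_aware_rating(team_a), margin_aware_rating(team_b))  # 1.0 21.0
```

By the record-only measure the two teams are indistinguishable; any margin-aware measure says Team B is far stronger.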
This season, the Trojans have come close to getting shafted again. USC isn't as good this year as last—no one is disputing that undefeated Oklahoma deserves to be ranked No. 1 going into the bowl games—but they are still very, very good, and by nearly any solid power rankings I've seen (including George Ignatin and Mad Max), they are clearly the No. 2 team in the nation. In fact, they were clearly No. 2 before Saturday's games, though the defending champion Ohio State Buckeyes had snuck into the spot in the BCS. Ohio State earned the ranking by squeaking out a 16-13 win over conference opponent Purdue, the third game this season in which the Buckeyes had failed to score a touchdown on offense. Southern Cal was punished for beating conference opponent Arizona 45-0.
If the Buckeyes had defeated archrival Michigan last Saturday, they probably would have played Oklahoma for the national championship in the Sugar Bowl on January 4. They didn't win, of course: They got clobbered 35-21 by Michigan while Southern Cal thrashed UCLA 47-22, a victory that, as we go to press, looks to put Southern Cal back in the BCS championship picture.
The question, though, is why an Oklahoma-Southern Cal title match was dependent on Saturday's games. There wasn't an odds-maker in the country who wouldn't have favored the Trojans over the Buckeyes if they had met head-to-head. For that matter, there wasn't an odds-maker in the country who didn't think the Wolverines would beat the Buckeyes by at least seven points, which raises the obvious question of why the BCS, the New York Times, USA Today's Jeff Sagarin, and others whose ranking systems are included in the BCS ratings would go to such elaborate lengths to elevate Ohio State.
The answer is that the Times and Sagarin and the others did not actually create those ranking systems on their own terms; they're following the guidelines given to them by the BCS through the NCAA, and those guidelines are riddled with compromises and logical absurdities. For instance, try to figure out the BCS's formula for "Quality Win Component": "The quality win component will reward to varying degrees teams that defeat opponents ranked among the Top 10 in the weekly standings. The bonus point scale will range from a high of 1.0 points for a win over the top-ranked team to a low of 0.1 for a victory over the 10th-ranked BCS team. The final BCS standings will determine final quality win points. Quality win points are based on the standings determined by the subtotal. The final standings are reconfigured to reflect the quality win point deduction."
Got that? Personally, I think any team with a player who can figure out what that means deserves the national championship right there.
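For the record, the bonus scale buried in that paragraph reduces to one line of arithmetic. A sketch, assuming the 0.1-per-rank step is linear all the way down, which the BCS text implies but never quite says:

```python
def quality_win_bonus(opponent_rank):
    """Bonus points for beating a team ranked in the BCS Top 10.

    Per the BCS's stated scale: 1.0 for a win over the No. 1 team,
    declining (assumed linearly, 0.1 per rank) to 0.1 for a win over
    No. 10. Beating anyone ranked 11th or worse earns nothing.
    """
    if 1 <= opponent_rank <= 10:
        return round(1.1 - 0.1 * opponent_rank, 1)
    return 0.0

print(quality_win_bonus(1))   # 1.0
print(quality_win_bonus(10))  # 0.1
```

How those bonus points then feed back into "reconfigured" standings is, as the quote demonstrates, anyone's guess.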
Two years ago, the BCS eliminated margin of victory as a component in its rankings. The reason was obvious: It wanted to prevent powerful teams from running up the score on weaklings just to increase their standing in the BCS rankings. This was an admirable sentiment, but the BCS's solution to the problem was entirely unnecessary, as computer programmers long ago figured out a method for making sense of runaway scores. "It's called 'collapsing,' " says Ignatin. "If a team is favored to win by, say, 20 points, and they exceed expectations and win by, say, 30, they deserve some credit for that. But if they pile it on and run up the score by, say, 60 points, it's not a true reflection of their real strength, so programmers 'collapse' the score after it reaches about 10 points above the anticipated margin of victory. That way no team can get too much credit for one or two lopsided wins." Southern Cal, though, got no credit for six consecutive wipeouts in which they scored 43 or more points.
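Ignatin's "collapsing" is simple enough to sketch in code. A minimal version, following his description (the function name and the 10-point cushion default are my assumptions, not his program):

```python
def collapsed_margin(expected_margin, actual_margin, cushion=10):
    """Cap the credit a team gets for a blowout.

    A team earns full credit for beating expectations by up to
    `cushion` points; anything beyond that is 'collapsed' (discarded),
    so one or two runaway scores can't inflate a power rating.
    """
    cap = expected_margin + cushion
    return min(actual_margin, cap)

# A 20-point favorite that wins by 30 gets full credit...
print(collapsed_margin(20, 30))  # 30
# ...but winning by 60 is collapsed back to the cap.
print(collapsed_margin(20, 60))  # 30
```

The point isn't the particular cutoff; it's that a cap like this solves the run-up-the-score problem without throwing out margin of victory altogether, which is what the BCS did.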
Through no fault of the Trojans, Southern Cal's schedule this year is weaker than last year's, weaker, in fact, than Ohio State's. But only a fool, an ideologue, or an NCAA executive thinks that how much you beat a team by doesn't tell you something.