This year, according to U.S. News & World Report, Princeton is the best university in the country and Caltech is No. 4. This represents a pretty big switcheroo—last year, Caltech was the best and Princeton the fourth.
Of course, it's not as though Caltech degenerated or Princeton improved over the past 12 months. As Bruce Gottlieb explained last year in Slate, changes like this come about mainly because U.S. News fiddles with the rules. Caltech catapulted up in 1999 because U.S. News changed the way it compares per-student spending; Caltech dropped back this year because the magazine decided to pretty much undo what it did last year.
But I think Gottlieb wasn't quite right when he said that U.S. News makes changes in its formula just so that colleges will bounce around and give the annual rankings some phony drama. The magazine's motives are more devious than that. U.S. News changed the scores last year because a new team of editors and statisticians decided that the books had been cooked to ensure that Harvard, Yale, or Princeton (HYP) ended up on top. U.S. News changed the rankings back because those editors and statisticians are now gone and the magazine wanted HYP back on top. Just before the latest scores came out, I wrote an article in the Washington Monthly suggesting that this might happen. Even so, the fancy footwork was a little shocking.
The story of how the rankings were cooked goes back to 1987, when the magazine's first attempt at a formula put a school in first place that longtime editor Mel Elfin says he can't even remember, except that it wasn't HYP. So Elfin threw away that formula and brought in a statistician named Robert Morse, who produced a new one. This one put HYP on top, and Elfin frankly defends using that result to vindicate the process. He told me, "When you're picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it's not right."
For the next decade, Elfin and Morse essentially ran the rankings as their own fiefdom, and no one else at the magazine really knew how the numbers worked. But during a series of recent leadership changes, Morse and Elfin moved out of their leadership roles and a new team came in. What they found, they say, was a bizarre statistical measure that discounted major differences in spending, for what seemed to be the sole purpose of keeping HYP at the top. So, last year, as U.S. News itself wrote, the magazine "brought [its] methodology into line with standard statistical procedure." With these new rankings, Caltech shot up and HYP was displaced for the first time ever.
But the credibility of rankings like these depends on two semiconflicting rules. First, the system must be complicated enough to seem scientific. And second, the results must match, more or less, people's nonscientific prejudices. Last year's rankings failed the second test. There aren't many Techie graduates in the top ranks of U.S. News, and I'd be surprised if The New Yorker has published a story written by a Caltech grad, or even by someone married to one, in the last five years. Go out on the streets of Georgetown by the U.S. News offices and ask someone about the best college in the country. She probably won't start to talk about those hallowed labs in Pasadena.
So, Morse was given back his job as director of data research, and the formula was juiced to put HYP back on top. According to the magazine: "[W]e adjusted each school's research spending according to the ratio of its undergraduates to graduate students ... [and] we applied a logarithmic adjuster to all spending values." If you're not up on your logarithms, here's a translation: If a school spends tons and tons of money building machines for its students, it gets only a little credit. It got lots of credit last year, but that, apparently, was a mistake. Amazingly, the only categories where U.S. News applies this logarithmic adjuster are also the only categories where Caltech has a huge lead over HYP.
The fact that the formulas had to be rearranged to get HYP back on top doesn't mean that those three aren't the best schools in the country, whatever that means. After all, who knows whether last year's methodology was better than this year's? Is a school's quality more accurately measured by multiplying its spending per student by 0.15 or by taking a logarithmic adjuster to that value? A case could also be made for taking the square root.
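For the curious, here is how much the choice of transform matters. The spending figures below are hypothetical round numbers of my own invention, not U.S. News data; the point is only to show how each adjuster shrinks a big spender's lead:

```python
import math

# Hypothetical per-student spending figures (illustrative only).
big_spender = 200_000   # a Caltech-like school with heavy lab spending
typical_elite = 60_000  # an HYP-like figure

transforms = [
    ("linear (x * 0.15)", lambda x: x * 0.15),
    ("square root", math.sqrt),
    ("logarithmic", math.log),
]

for name, f in transforms:
    # How big the spending advantage looks after the transform is applied.
    ratio = f(big_spender) / f(typical_elite)
    print(f"{name:20s} advantage: {ratio:.2f} to 1")
```

Under the linear rule the big spender's advantage is about 3.3 to 1; a square root shrinks it to roughly 1.8 to 1; the logarithm crushes it to about 1.1 to 1, which is to say, nearly nothing.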
But the logical flaw in U.S. News' methodology should be obvious—at least to any Caltech graduate. If the test of a mathematical formula's validity is how closely the results it produces accord with pre-existing prejudices, then the formula adds nothing to the validity of the prejudice. It's just for show. And if you fiddle constantly with the formula to produce the result you want, it's not even good for that.
U.S. News really only has one justification for its rankings: They must be right because the schools we know are the best come out on top. Last year, that logic fell apart. This year, the magazine has straightened it all out and HYP's back in charge—with the help of a logarithmic adjuster.