Crapshoot

Cooking the School Books

How U.S. News cheats in picking its “best American colleges.”


According to the annual “America’s Best Colleges” issue of U.S. News & World Report, published Aug. 30, the best college in the United States is the California Institute of Technology. This was dramatic, since Caltech, while highly regarded, is not normally thought of as No. 1. Last year Caltech was rated No. 9, while the top spot was an uninteresting three-way tie among Harvard, Yale, and Princeton.

“Why does U.S. News rank colleges?” asks U.S. News. The “simple answer,” the magazine says, is, “We do it to help you make one of the most important decisions of your life.” Perhaps. Another simple answer is that the annual college rankings (and similar rankings of graduate schools and hospitals) are lucrative and influential unlike anything else the No. 3 newsmag does. Newsstand sales are almost double those of a normal issue, and a paperback-book version sells a million copies. Colleges brag or complain loudly about their scores, enhancing the ’Snooze either way.

Whatever their validity as measures of academic excellence, the annual rankings are a brilliant gimmick for U.S. News. But there’s a problem. A successful feature like this requires surprise, which means volatility. Nobody’s going to pay much attention if it’s Harvard, Yale, and Princeton again and again, year after year. Yet the relative merits of America’s top universities surely change slowly, if at all. Naturally, U.S. News does not just make up its ratings. It uses a weighted average of 16 numerical factors such as average class size, acceptance rate (fraction of applicants who are admitted), and amount of alumni giving. Trouble is, any combination of these factors just isn’t going to change enough from year to year to keep things interesting.
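To make the arithmetic concrete, here is a minimal sketch in Python of the kind of weighted-average scoring the magazine describes. The factor names, weights, and figures are invented for illustration; this is not U.S. News’ actual formula.

```python
# Illustrative only: a toy weighted-average ranking in the spirit of the
# U.S. News approach. Factors, weights, and scores are all made up.

# Each factor is assumed to be pre-normalized to a 0-100 scale,
# with higher always better (so acceptance rate is already inverted).
WEIGHTS = {
    "class_size": 0.30,
    "acceptance_rate": 0.30,
    "alumni_giving": 0.40,
}

schools = {
    "Alpha U": {"class_size": 92, "acceptance_rate": 95, "alumni_giving": 88},
    "Beta U":  {"class_size": 90, "acceptance_rate": 96, "alumni_giving": 90},
}

def composite(scores):
    """Weighted average of the normalized factor scores."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

for name in sorted(schools, key=lambda s: composite(schools[s]), reverse=True):
    print(f"{name}: {composite(schools[name]):.1f}")
```

With inputs this stable, each school’s composite barely moves from one year to the next, which is exactly the volatility problem.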

So how on earth can U.S. News explain Caltech’s one-year rise?

The magazine tries to deny that there’s anything odd about a college improving so quickly. The “best colleges” story argues: “Caltech has always been within striking distance of the top of the chart. In 1989, Caltech was the No. 3 school, ahead of Harvard. … Last year, Caltech had the fourth-highest score among national universities.” The first assertion is irrelevant: We’re not interested in the 10-year rise from third but rather in the one-year rise from ninth. The second assertion is technically true but practically dishonest: Caltech had the “fourth-highest score” last year only because there were two three-way ties and one two-way tie among the eight schools that beat it.

But the real reason Caltech jumped eight spaces this year is that the editors at U.S. News fiddled with the rules. The lead story of the “best colleges” package says that a change in “methodology … helped” make Caltech No. 1. Buried in a sidebar is the flat-out concession that “[t]he effect of the change … was to move [Caltech] into first place.” No “helped” about it. In other words, Caltech didn’t improve this year, and Harvard, Yale, and Princeton didn’t get any worse. If the rules hadn’t changed, HYP would still be ahead. If the rules had changed last year, Caltech would have been on top a year earlier.

(In fact, if the U.S. News criteria are taken seriously, and if they held steady, Caltech may actually have slipped in quality this past year. Most indicators did not change compared to last year. But graduation rate, number of classes with fewer than 20 students, and percentage of faculty members who work full-time actually declined. Only two indicators showed small improvements: percentage of accepted students in top 10 percent of their high-school classes went from 99 percent to 100 percent [big deal!], and Caltech’s acceptance rate fell from 23 percent to 18 percent.)

U.S. News denies that it changes the rules–as it does every year–simply to change the results. Robert Morse, U.S. News’ statistical guru, explained to me that this year’s ranking procedures are an “improvement” over last year’s. Doesn’t that imply, I said, that last year’s rankings were inferior? And shouldn’t U.S. News apologize to anyone who made “one of the most important decisions of your life”–possibly turning down Caltech for Princeton–based on rankings the magazine itself now regards as inaccurate? Morse replied that he hadn’t said the earlier ratings were inferior. But if something improves, I pressed him, doesn’t that mean that it was less excellent before the improvement? Morse grudgingly allowed that I was free to make that inference.

I can’t prove that U.S. News keeps changing the rules simply in order to change the results. But if not, U.S. News ought to shy away from horse-race headlines such as “Caltech Comes out on Top.” A more honest summary might be “We Finally Realize That Caltech Is Tops.” Or “Caltech on Top (Until We Fiddle With Rules Again).”

So, how did Caltech come out on top? Well, one variable in a school’s ranking has long been educational expenditures per student, and Caltech has traditionally been tops in this category. But until this year, U.S. News considered only a school’s ranking in this category–first, second, etc.–rather than how much it spent relative to other schools. It didn’t matter whether Caltech beat Harvard by $1 or by $100,000. Two other schools that rose in their rankings this year were MIT (from fourth to third) and Johns Hopkins (from 14th to seventh). All three have high per-student expenditures and all three are especially strong in the hard sciences. Universities are allowed to count their research budgets in their per-student expenditures, though students get no direct benefit from costly research their professors are doing outside of class.
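To see why that change matters, here is a small Python sketch contrasting the old rank-only treatment of per-student spending with a treatment that credits the actual amounts. The schools and dollar figures are invented; only the mechanism is drawn from the article.

```python
# Illustrative only: how counting the *size* of per-student spending, rather
# than just its rank, can reshuffle an overall ranking. Numbers are invented.

spending = {"Tech Institute": 120_000, "Ivy A": 60_000, "Ivy B": 58_000}

# Old-style treatment: only the rank order matters (1st, 2nd, 3rd ...).
by_rank = sorted(spending, key=spending.get, reverse=True)
rank_points = {s: len(spending) - i for i, s in enumerate(by_rank)}

# New-style treatment: credit is proportional to the actual amount,
# rescaled so the top spender still gets the maximum number of points.
top = max(spending.values())
amount_points = {s: len(spending) * v / top for s, v in spending.items()}

for s in spending:
    print(f"{s}: rank-only {rank_points[s]} pts, "
          f"amount-based {amount_points[s]:.2f} pts")

# Under the rank-only rule, Tech Institute beats Ivy A by a single point;
# under the amount-based rule, its $60,000 margin counts in full.
```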

In its “best colleges” issue two years ago, U.S. News made precisely this point, saying it considered only the rank ordering of per-student expenditures, rather than the actual amounts, on the grounds that “expenditures at institutions with large research programs and medical schools are substantially higher than those at the rest of the schools in the category.” In other words, just two years ago, the magazine felt it unfair to give Caltech, MIT, and Johns Hopkins credit for having lots of fancy laboratories that don’t actually improve undergraduate education.

Each of U.S. News’ criteria can generate a quibble like this one. But there is a larger philosophical flaw in the “best colleges” rankings. Consider this analogy: Suppose you wanted to rank baseball teams. You might choose some plausible criteria such as players’ lifetime batting averages and salaries, the coaches’ years of professional experience, and so on. To decide whether these criteria were valid, and what relative weights to give them, you would look at the figures for winning and losing teams of the past. Because you know which teams are successful before you begin your analysis–those that win–you can use mathematics to identify similarities among those winning teams.
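In the baseball case the calibration is straightforward, because the win-loss records supply an answer key. A minimal sketch, with invented data, of fitting the weights so the criteria best reproduce the known outcomes:

```python
# Illustrative only: when you already know which teams win, you can fit the
# weights to the outcomes instead of guessing. All data here is invented.
import numpy as np

# Columns: batting average, payroll, coach's experience (each rescaled 0-1).
features = np.array([
    [0.9, 0.8, 0.7],
    [0.6, 0.9, 0.5],
    [0.4, 0.3, 0.9],
    [0.7, 0.5, 0.4],
])
win_pct = np.array([0.62, 0.55, 0.41, 0.50])  # the known "ground truth"

# Ordinary least squares: choose weights that best reproduce the win records.
weights, *_ = np.linalg.lstsq(features, win_pct, rcond=None)
print("fitted weights:", weights.round(3))

# With colleges there is no win_pct column, so the weights can only be guessed.
```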

But with the U.S. News rankings there is no objective way to know which schools are winners before you begin your analysis. In fact, determining the winners is the point of the exercise. So you sit around and brainstorm about whether faculty resources (class size, faculty salaries, etc.) or student graduation rates, for instance, are more important to educational “quality.” Right now, U.S. News gives the two characteristics equal weight, which seems reasonable. But if you told me that faculty resources are twice as important as student graduation rates, that would seem reasonable too.
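That arbitrariness is easy to demonstrate. In the sketch below (invented scores again), giving faculty resources and graduation rate equal weight crowns one school, while weighting faculty resources twice as heavily crowns the other, and both weightings sound perfectly reasonable.

```python
# Illustrative only: two equally "reasonable" weightings of the same data
# can produce opposite rankings. All figures are invented.

schools = {
    # (faculty_resources_score, graduation_rate_score), both on a 0-100 scale
    "College X": (95, 80),
    "College Y": (85, 92),
}

def score(faculty, grad, w_faculty, w_grad):
    return w_faculty * faculty + w_grad * grad

for label, (wf, wg) in {"equal weights": (0.5, 0.5),
                        "faculty counts double": (2/3, 1/3)}.items():
    winner = max(schools, key=lambda s: score(*schools[s], wf, wg))
    print(f"{label}: {winner} comes out on top")
```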

Mel Elfin, the retired U.S. News editor who more or less created the current college rankings, explained to Lingua Franca: “We’ve come up with a list that underscores intuitive judgments. We did not set out to underscore [those] judgments; we set out with a methodology. That it wound up this way is to me both a justification and a discovery that we’re on the right track.” This is a masterpiece of circular logic. Elfin is saying: 1) We trust our methodology because it confirms our intuition; and 2) we are confirmed in our intuition because it is supported by our methodology.

And the truth is that the rankings’ success actually depends on confounding most people’s intuition. For example, by declaring that Caltech is superior to Harvard, Yale, Stanford, Princeton, MIT, and so on. And why should people find that so hard to believe? Maybe because you told them the opposite just last year.