I've pretty well finished reverse engineering the way that U.S. News & World Report (USN&WR) ranks law schools, a project I've been working on for quite some time. I'll spare you the gruesome details about exactly how I built my model and instead simply show you a snapshot of the results: a comparison between the scores that USN&WR gave to each of the top 102 law schools in its latest rankings and the scores generated by my model.
As you can see, my recalculated scores generally came within a few percentage points of those assigned by USN&WR. I attribute that minor drift to the effects of the one bit of data that I could not recreate: the cost-of-living adjustment that USN&WR applies to some of the figures it tracks under the heading of "expenditures per student." I worked up a proxy, but it evidently doesn't track USN&WR's cost-of-living adjustments perfectly.
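To make the proxy idea concrete, here is a minimal sketch of how a cost-of-living adjustment to per-student expenditures might work. The function name, the index values, and the deflation formula are all my assumptions for illustration; USN&WR does not publish its actual adjustment, which is exactly why my proxy drifts from it.

```python
# Hypothetical sketch of a cost-of-living adjustment to per-student
# expenditures. The index values below are illustrative only;
# USN&WR's actual adjustment is not public.

def col_adjusted(expenditure_per_student: float, col_index: float) -> float:
    """Deflate nominal spending by a regional cost-of-living index
    (1.0 = national average) -- a stand-in for USN&WR's adjustment."""
    return expenditure_per_student / col_index

# Two schools with identical nominal spending but different regions:
print(col_adjusted(80_000, 1.25))  # high-cost city -> 64000.0
print(col_adjusted(80_000, 0.80))  # lower-cost region -> 100000.0
```

The point of the sketch: two schools spending the same nominal dollars per student can end up with very different adjusted figures, so any imperfection in the index I substituted shows up as the small drift described above.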
There remains just one glaring mismatch between USN&WR's scores and mine. See that dip near the upper left end of the chart? It shows that USN&WR assigns a notably higher score—9% higher—to the University of Pennsylvania School of Law (Penn) than my model suggests it should. The cost-of-living adjustment cannot explain that much of a difference. I have another, more troubling explanation.
If you'll allow me to gloss over some gory details, it boils down to this: Although Penn spends about as much as its neighbors in the rankings, it apparently spends its money rather differently from them. So, at least, it told the ABA, whence I got the data for my model. So what? Well, how a school classifies its expenditures makes a big difference in the USN&WR rankings. Hence my model's comparatively low score for Penn.
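To see why classification alone can move a score, consider this toy sketch. The category names and weights are purely my assumptions (the real USN&WR weighting scheme is not mine to reveal here); the sketch only shows the mechanism: if different expenditure categories count at different weights, reclassifying the same total spending changes the score.

```python
# Hypothetical sketch of why expenditure classification matters.
# The weights below are assumptions for illustration, not USN&WR's
# actual figures: suppose "direct" (instructional) spending counts
# fully toward the expenditure component, while "indirect"
# (overhead) spending counts at only a fraction.

DIRECT_WEIGHT = 1.0    # assumed weight for instructional spending
INDIRECT_WEIGHT = 0.2  # assumed weight for overhead spending

def expenditure_score(direct: float, indirect: float) -> float:
    """Weighted per-student expenditure under the assumed weights."""
    return DIRECT_WEIGHT * direct + INDIRECT_WEIGHT * indirect

# Same $90k total per student, classified two different ways:
school_a = expenditure_score(direct=70_000, indirect=20_000)  # 74000.0
school_b = expenditure_score(direct=40_000, indirect=50_000)  # 50000.0
```

Under these assumed weights, two schools with identical total spending land 24,000 weighted dollars apart solely because of how they sorted their ledgers, which is the shape of the discrepancy I see in Penn's numbers.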
Why did USN&WR give Penn such a high ranking, then? That's the puzzle. I surmise that the classification of Penn's expenditures somehow got "fixed" for the USN&WR rankings. How might that have happened? I don't know. I will observe, however, that it does not look as if Penn simply made an error in reporting its expenditures to the ABA and then got USN&WR to correct that mistake, since Penn has stuck to the same basic classification scheme for at least two years running.
I don't want to make too big a fuss about this puzzle. Penn seems like a quality school, and goodness knows I'm not going to present the USN&WR as the gospel truth in assessing law schools. But it perhaps bears noting that my model would drop Penn from its current slot, at 7, down to about 15. And, interestingly, that roughly corresponds to the ranking that Brian Leiter gives the school in his assessment of scholarly impact and his assessment of law school teaching.
Of course, a lot of people would be interested if some sort of hanky-panky explained this puzzle about how Penn did so much better in the USN&WR rankings than it seems it should have. I've already pointed out the discrepancy to USN&WR. I didn't ask for a response, however, and have not gotten one. I'm not a muckraker, and I repeat that I don't know why Penn appears higher in the USN&WR rankings than it does in my (apparently pretty accurate) model. I present this merely as an intriguing puzzle, one that probably illustrates the vagaries of law school ranking methodologies more than anything else.