I say, "try to describe," because I face a bit of a quandary. I am not at liberty to disclose the median LSATs and GPAs that schools reported to the American Bar Association, which the ABA makes available subject only to certain restrictions. Unhelpfully—and inexplicably—USN&WR does not report the median LSATs and GPAs that schools reported to it. Instead, it reports only their 25th and 75th percentile LSATs and GPAs.
Comparing the ABA, USN&WR, and Law School Admission Council data (made available via the ABA-LSAC Official Guide to ABA-Approved Law Schools), discrepancies soon become apparent. I have not dwelled on these puzzles, since in each case I've simply stuck with the medians that schools reported to the ABA. As a consequence, however, my model evidently assigns a few schools different median LSATs than those used by USN&WR for its rankings. (GPAs didn't seem to suffer the same problems.) I'll illustrate that effect in a later post, when I compare the USN&WR rankings with those generated by my model. For present purposes, I simply want to observe that I've excluded three schools from the following analysis.
Which of the remaining schools had the largest differences between the median LSATs that they reported to the ABA and USN&WR—the numbers actually used in the 2007 rankings—and the median LSATs that USN&WR would have used in the rankings had it stuck to the methodology used the prior year? Southern Methodist University Law School gained the most by dint of that methodological change. CUNY–Queens College Law School lost the most due to it. A similar analysis of the impact wrought by the change to the methodology USN&WR applies to GPAs shows that it most benefited the University of Toledo School of Law and most harmed the University of Louisville (Brandeis) Law School.
That merely describes the differences that USN&WR's change from calculated to reported median LSATs and GPAs worked on the data that go into the rankings—not how those differences affected the rankings themselves. To get a rough estimate of that effect (the one that, truth be told, most law schools care more about), we might plug the "old-fashioned," calculated median LSAT and GPA numbers of any of the above schools into a model of the USN&WR rankings, generate a new ranking score for that school, and compare the result with the school's score in the extant rankings. Herewith some select results:
| School | Change in 2007 |
| --- | --- |
| Southern Methodist U. (TX) | +0.44 |
| CUNY–Queens Coll. | −1.75 |
| U. of Toledo | +2.90 |
| U. of Louisville (Brandeis) | −1.57 |
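The back-of-the-envelope procedure behind those figures can be sketched in a few lines of code. To be clear, the weights below are illustrative placeholders, not the actual (proprietary) USN&WR formula or the weights in my own model; the sketch only shows the arithmetic of swapping calculated medians for reported ones and measuring the change in score.

```python
# Hypothetical sketch of the re-scoring procedure described above.
# LSAT_WEIGHT and GPA_WEIGHT are assumed placeholder values, NOT the
# real USN&WR weights, which are not public in this form.

LSAT_WEIGHT = 0.125  # assumed contribution of median LSAT to the score
GPA_WEIGHT = 0.10    # assumed contribution of median GPA to the score

def score_delta(reported_lsat, calculated_lsat, reported_gpa, calculated_gpa):
    """Estimate the change in a school's overall ranking score when
    calculated medians are substituted for the reported ones."""
    lsat_change = (calculated_lsat - reported_lsat) * LSAT_WEIGHT
    gpa_change = (calculated_gpa - reported_gpa) * GPA_WEIGHT
    return lsat_change + gpa_change

# Example: a school whose calculated median LSAT runs one point below
# its reported median, with the GPA unchanged, loses 0.125 points.
print(score_delta(162, 161, 3.5, 3.5))
```

In practice the differences ripple through the standardization and re-scaling steps of the full rankings model, so a simple weighted difference like this only approximates the effect.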
As with the similar calculations I performed earlier to illustrate the impact of the change in the Stu/Fac methodology, I caution that the above figures are mere estimates. Most notably, they rely on the accuracy of my model of the USN&WR rankings. In a forthcoming post, I will describe how I created that model and document its accuracy.
Earlier posts about the 2007 USN&WR law school rankings: