As in years past, this year I again built a model of the most recent edition of U.S. News & World Report's law school rankings. Apart from the intrinsic charms of the exercise (not inconsiderable to a rankings geek), it offers useful insights into how USN&WR assesses law schools. Last year, for instance, comparing the model's results to USN&WR's published scores uncovered some troubling errors. What did reverse engineering the 2008 rankings uncover?
This year found me especially eager to find out how well my model would track USN&WR's law school rankings. After many sleepless hours massaging the data, I boarded a flight, squeezed in between two beefy road warriors, downed some coffee, and began muttering over my laptop. It doubtless startled my neighbors when, some time later, I thumped the return key, leaned back in my seat, raised my fists, and exclaimed, "YES!"
The above chart, which compares USN&WR's scores to those generated by my model, explains my triumphant glee. As you can see from comparing similar charts from 2005 and 2006, this year's model proved the most accurate yet. That alone sufficed to put me in geek heaven.
That close fit between the published rankings and the model also heralds good news more generally. It indicates that USN&WR ranked the top two tiers of law schools using data not grossly different from the data collected by the American Bar Association, which is the data my model uses. The congruence between the two sets of scores thus means we have less reason to worry this year than in years past that a law school gamed the rankings by telling USN&WR something different from what it told the ABA, or that USN&WR somehow mishandled the data.
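For readers curious what that kind of comparison involves, here is a minimal sketch of measuring how closely two sets of scores track each other. This is not the code behind the chart above; the file name and column names ("model_score", "usnwr_score") are hypothetical.

```python
import csv
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of floats."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return cov / (sx * sy)

model, published = [], []
with open("scores_2008.csv", newline="") as f:   # hypothetical data file
    for row in csv.DictReader(f):
        model.append(float(row["model_score"]))
        published.append(float(row["usnwr_score"]))

print(f"Correlation between model and published scores: {pearson(model, published):.3f}")
```

A correlation near 1.0 would correspond to the tight fit shown in the chart.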
If you like the USN&WR rankings, that should make you happy. And even if you don't much like them, you surely want the rankings to stick to the facts. (Or perhaps I should say, given doubts about many of the measurements that USN&WR uses, you surely don't want its rankings to reflect non-systematic errors.)
[Crossposted to MoneyLaw.]
Wednesday, July 18, 2007
4 comments:
Is it true that median LSAT and median GPA are biased toward smaller schools? While it's true that smaller schools need to attract fewer above-X students to boost their median above X, it seems that smaller schools also probably have fewer applicants, have less room for below-X students, etc. It seems like these measures are biased toward more selective schools -- which is surely the intent -- rather than specifically toward smaller ones.
(To clarify my previous comment: that's in response to a claim made in the leiterrankings.com page you linked to near the end of your post.)
So, does this mean you are going to post the third and fourth tier in order?
Ran: I have not tried to confirm Leiter's claim and, like you, can imagine alternative theories about the relationship between school size, LSAT, and GPA. It would be interesting to run some regressions--and not that hard, either, I suppose.
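A minimal sketch of the sort of regression I have in mind, purely illustrative: the data file and column names ("class_size", "median_lsat") are hypothetical.

```python
import csv
import statistics

sizes, lsats = [], []
with open("aba_data.csv", newline="") as f:      # hypothetical ABA data extract
    for row in csv.DictReader(f):
        sizes.append(float(row["class_size"]))
        lsats.append(float(row["median_lsat"]))

# Simple one-variable least-squares fit: median_lsat ~ slope * class_size + intercept
slope, intercept = statistics.linear_regression(sizes, lsats)
r = statistics.correlation(sizes, lsats)
print(f"slope={slope:.4f}, intercept={intercept:.2f}, r={r:.3f}")
```

A slope near zero (or a weak correlation) would cut against the claim that the median measures favor smaller schools as such.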
Anon: I plan to convey the same sort of data I did last year: z-scores for the top two tiers and a graph of the overall distribution for all schools. I'm not comfortable publishing my scores for the third and fourth tiers--leastwise, not in a way that allows people to identify those schools.
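For anyone unfamiliar with the term, here is a minimal illustration of what z-scores are: each score standardized against the mean and standard deviation of the group. The school names and scores below are made up, not drawn from my data.

```python
import statistics

scores = {"School A": 78.0, "School B": 65.5, "School C": 54.0}   # hypothetical
mean = statistics.mean(scores.values())
stdev = statistics.pstdev(scores.values())

z_scores = {name: (s - mean) / stdev for name, s in scores.items()}
for name, z in sorted(z_scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: z = {z:+.2f}")
```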