I earlier reported that the score Arizona State University School of Law won in the USN&WR rankings varied significantly from the score my model generated for the school. That caught the attention of Prof. Douglas Sylvester, a member of ASU's faculty and a fellow USN&WR rankings geek. Doug and I shared notes extensively last year, when we were both trying to model the rankings. His sharp eye helped me correct my model then, and this year he has come to my aid again.
Puzzled by my model's results, Doug requested from me, received, and reviewed the data I'd plugged in for ASU. He found a typo. Due to a scanning error, my model had attributed to ASU a full-time enrollment of 850 students in 2005. It should have been 650. Once I'd corrected that number and made a few other adjustments (see below), the model ended up giving ASU a score of 46.61—only 2.90% below the score of 48 that USN&WR awarded the school.
Figuring that a bad scan might have affected other schools' enrollment figures, too, I double-checked them all. I discovered that the University of Hawaii School of Law's enrollment figures had suffered a similar (but much bigger!) error, and so made a similar correction. You can revisit my prior post about the model's accuracy to see which schools I now peg as exhibiting the largest differences between their scores in the USN&WR rankings and my model thereof. Especially since I've suggested that hanky-panky might explain large differences between the scores USN&WR awards and those my model does, those revisions to the model's treatment of ASU and Hawaii bear notice.
One important lesson from the corrections that Doug's review inspired: As I said earlier, any time that my model generates results different from those published by USN&WR, we should in the first place suspect my model. Consider its source! I cannot offer perfection, alas; I can only try my best and make corrections as necessary.
Another, less profound lesson: Mere rounding effects may explain a fair amount of the divergence between a school's score in USN&WR's rankings and my model thereof. Consider ASU in particular. The model now gives it a score of 46.61—just enough to round up to 47. USN&WR gave ASU a score of 48. Suppose, though, that USN&WR generated that number by rounding up from 47.50. In that event, USN&WR would really have given ASU a score only 1.88% more than the model did—hardly enough to worry about.
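For concreteness, that rounding arithmetic can be checked in a few lines of Python. The 46.61 and 48 are the scores quoted above; the 47.50 is the hypothetical lowest unrounded score that would round up to 48:

```python
model_score = 46.61   # my model's score for ASU
published = 48        # USN&WR's published (integer) score for ASU
unrounded = 47.50     # hypothetical lowest score that rounds up to 48

# Divergence measured against the published score:
vs_published = (published - model_score) / published * 100    # roughly 2.9%

# Divergence measured against the hypothetical unrounded score:
vs_unrounded = (unrounded - model_score) / unrounded * 100    # just under 2%

print(f"{vs_published:.2f}% vs. published, {vs_unrounded:.2f}% vs. unrounded")
```

In other words, fully a third of the apparent gap between the model and USN&WR could vanish the moment USN&WR rounds its internal score to an integer.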
We could almost explain away a variation that small by assuming that the model sorely misestimated the cost-of-living indices that USN&WR applies to part of its measure of law schools' overhead expenditures/student. Recall, after all, that USN&WR does not disclose the cost-of-living indices it uses, and that my model relies on a home-cooked substitute. Even apart from the usual, all-too-human errors, then, rounding effects and missed guesses at the cost-of-living index will ensure that my model never quite duplicates USN&WR's results.
I can only keep trying to improve my model. I thus welcomed Paul Varriale's observation that my model appeared especially apt to give higher scores than USN&WR did to schools in high-cost urban areas. He suggested, quite plausibly, that my cost-of-living figures might need adjustment. I've since gone back into the model and slightly eased the cap I'd imposed on the cost-of-living indices it uses.
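To illustrate the sort of adjustment involved, here is a minimal sketch of a capped cost-of-living index. The cap values, the spending figure, and the sample index are purely illustrative assumptions; neither USN&WR's actual indices nor my model's actual cap appear in this post:

```python
def capped_col_index(raw_index, cap):
    """Clamp a cost-of-living index at the cap (illustrative values only)."""
    return min(raw_index, cap)

# Overhead expenditures/student get deflated by the index, so easing
# (raising) the cap deflates high-cost urban schools' spending more,
# pulling their modeled scores down toward USN&WR's.
spending = 60_000        # hypothetical overhead expenditures/student
urban_index = 1.45       # hypothetical index for a high-cost urban area

tight = spending / capped_col_index(urban_index, cap=1.30)
eased = spending / capped_col_index(urban_index, cap=1.40)
print(round(tight), round(eased))  # the eased cap yields the lower figure
```

A school in a low-cost area, whose raw index falls below either cap, would see no change at all from easing the cap, which is why the adjustment bears mainly on high-cost urban schools.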
The last edit worth mentioning: As documented earlier, I now have persuasive reasons to believe both that USN&WR generated its rankings using only the median LSAT and GPA of Baylor's fall 2005 full-time matriculants, and that USN&WR meant to use the median LSAT and GPA of all full-time students Baylor matriculated in 2005. Ditto for Florida. I think it thus behooves me to plug Baylor's and Florida's fall numbers into the model, run my curve-fitting algorithm to try to get the model's scores to match USN&WR's as closely as possible, and then plug in the numbers that USN&WR intended to assign those schools. That reflects only the facts, as I see them, about what the USN&WR rankings were and what USN&WR intended them to be. It says nothing about whether USN&WR ranks law schools fairly or how USN&WR ended up measuring Baylor's and Florida's median LSATs and GPAs the way that it did.
Earlier posts about the 2007 USN&WR law school rankings:
- Change to U.S. News Law School Rankings Methodology
- "Financial Aid" Revised in U.S. News Methodology
- How USN&WR Counts Faculty for Rankings
- Whence Come the LSATs and GPAs Used in the Rankings?
- Gains and Losses Due to USN&WR's Use of Reported Median LSATs and GPAs
- How to Model USN&WR's Law School Rankings
- Why to Model USN&WR's Law School Rankings
- The ABA and USN&WR's Law School Rankings
- Accuracy of the Model of USN&WR's Law School Rankings
- Z-Scores in Model of USN&WR's Law School Rankings
- Further Tinkering with Model of USN&WR Law School Rankings
- Baylor's Score in the USN&WR Law School Rankings
- What USN&WR Asks About Law Schools' LSATs and GPAs
- USN&WR and Baylor on that School's Data
- The University of Florida's Score in the USN&WR Rankings
- Baylor Explains the Data it Reported for the USN&WR Rankings