Friday, September 15, 2006

Validity of SSRN's "# of New Papers" Measure

I earlier related the kerfuffle that followed Chapman Law School's acknowledgement of its recent success on the SSRN's "# of New Papers" measure. A skeptic might conclude that Chapman drew criticism primarily because it "acted above its station," daring to trumpet that it had bested the champions of USN&WR's law school rankings. Perhaps so. I find more interesting, however, the claim that Chapman erred in describing the SSRN's "# of New Papers" measure as a "key scholarly output ranking." Outlandish puffery or accurate reporting?

Bernard Black and Paul Caron consider the utility of SSRN's measures in their paper, "Ranking Law Schools: Using SSRN to Measure Scholarly Performance," 81 Ind. L.J. 83 (2006). They argue that both the number of posted papers and the number of downloads "play a valuable role in the rankings tapestry," offering new and potentially more accurate alternatives to the USN&WR rankings.

Black and Caron observe, moreover, that the number of papers posted to SSRN arguably offers a more accurate measure of genuine scholarship than the number of papers downloaded from the network. Why so? Because download counts offer more opportunities for gaming the system; repeatedly downloading one's own papers, after all, takes far less effort than writing new ones. Though I again admit that I'm not purely disinterested in the matter, I thus conclude that the SSRN's "# of New Papers" measure does plausibly qualify as a key measure of scholarly output.

Granted, none of the SSRN's rankings work perfectly. In particular, as I observed in my earlier post, the "# of New Papers" measure could stand some improvement. Black and Caron put it this way: "The SSRN measures have important field and other biases. Still, they offer up-and-coming schools a way to 'show their stuff,' long before the US News rankings respond to the school's improvement. That alone is an important contribution."

[Crossposted to MoneyLaw.]
