Here are the rankings based solely on Peer Evaluation (25%), Judge/Lawyer Evaluation (25%), LSAT 25th-75th percentile midpoint (30%) and GPA 25th-75th percentile midpoint (20%). I used the LSAT score rather than the LSAT percentile, because converting to percentiles the way US News does makes the LSAT nearly meaningless for the high-ranked schools (the top scores all get compressed into the last few percentiles). I used the midpoint of the 25th and 75th rather than the median because it's easier to conceal something in a median (you could have a very weak bottom quartile and the median would never show it).
1. Yale 100
2. Harvard 98
3. Stanford 94
4. Columbia 92
5. NYU 89
5. Chicago 89
7. UVA 86
8. Michigan 85
8. Berkeley 85
10. Penn 83
10. Duke 83
12. Georgetown 82
13. Cornell 80
13. Northwestern 80
15. UT 75
16. Vanderbilt 74
16. UCLA 74
18. USC 68
The pegs I used were 1.5-4.8 (peer), 1.5-4.8 (judge/lawyer), 149-173.5 (LSAT) and 2.9-3.87 (GPA). The lower pegs for peer evaluation may not be exact, but they're close enough not to make a difference (and there's no reason any particular value would be more "correct" anyway).
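For anyone who wants to recompute or reweight these, here's a rough sketch of the math in Python. The weights and pegs are from above; the US News-style final step of rescaling so the top school lands at 100 is my assumption, and the school inputs shown are made up just to demonstrate the mechanics.

```python
# Minimal sketch of the scoring described above.
# Weights and pegs are from the post; the rescale-to-100 step and the
# sample inputs are assumptions for illustration only.

WEIGHTS = {"peer": 0.25, "judge": 0.25, "lsat": 0.30, "gpa": 0.20}
PEGS = {  # (low, high) pegs used to normalize each input onto a 0-1 scale
    "peer":  (1.5, 4.8),
    "judge": (1.5, 4.8),
    "lsat":  (149.0, 173.5),  # midpoint of the 25th/75th percentile LSAT scores
    "gpa":   (2.9, 3.87),     # midpoint of the 25th/75th percentile GPAs
}

def raw_score(inputs):
    """Weighted sum of peg-normalized inputs (0-1 scale)."""
    return sum(
        weight * (inputs[key] - PEGS[key][0]) / (PEGS[key][1] - PEGS[key][0])
        for key, weight in WEIGHTS.items()
    )

# Hypothetical inputs, not real school data:
schools = {
    "School A": {"peer": 4.8, "judge": 4.7, "lsat": 172.0, "gpa": 3.80},
    "School B": {"peer": 4.4, "judge": 4.5, "lsat": 170.5, "gpa": 3.72},
}

raw = {name: raw_score(vals) for name, vals in schools.items()}
top = max(raw.values())
for name in sorted(raw, key=raw.get, reverse=True):
    print(name, round(100 * raw[name] / top))  # rescale so the top school = 100
```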
Why the other categories are garbage:
Employment percentages don't say anything about the quality or type of job and, at least among the top schools, mostly reflect gaming and a few individual students' choices rather than the school's ability to get you a job (Berkeley does not place better than every other school, and George Mason doesn't place better than Duke).
Acceptance rate means absolutely nothing - yes, a lower acceptance rate may allow a stronger class, but that is already accounted for in the LSAT/GPA.
Bar passage ratio gives an unfair advantage to schools in states with low overall passage rates - if your state's overall rate is 60%, your 90% is a 1.5 ratio; a school in a state with an overall rate of 80% would need a 120% pass rate to match that, so it can't compete no matter how good it is.
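To make the ceiling concrete (all rates hypothetical): the ratio is just the school's pass rate over the statewide rate, so a school in a high-passage state simply cannot reach the ratios available in a low-passage state.

```python
# Bar passage ratio = school pass rate / statewide pass rate.
# Hypothetical rates, just to show the ceiling effect.
def passage_ratio(school_rate, state_rate):
    return school_rate / state_rate

print(passage_ratio(0.90, 0.60))  # 1.50 -- 90% in a 60% state
print(passage_ratio(1.00, 0.80))  # 1.25 -- even a perfect 100% in an 80% state
# Matching the 1.5 ratio in the 80% state would take a 120% pass rate,
# which is impossible.
```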
Library volumes aren't worth anything significant in the rankings anyway; sheer size doesn't account for quality, duplicates or age, and the internet era makes it even less important.
Student/faculty ratio is unfair to larger schools, since it doesn't account for economies of scale. Also, faculty shared with the LLM/JSD programs are counted, while LLM/JSD students are not. And since it doesn't account for hours spent teaching, it may not say anything about actual class size.
Expenditures per student is completely messed up. It doesn't account for economies of scale, so larger schools are at a disadvantage (every school needs the basic infrastructure, one copy of each important book, the subscriptions, etc., but being twice as large doesn't mean you need twice as much). It assumes constant marginal returns (twice as much spending is twice as good), which is essentially never true. Perhaps most importantly, it depends largely on accounting practices and gaming: you can classify the same spending under different categories with different effects on your ranking, and some schools include types of spending that others don't (e.g. their portion of the whole university's utilities). Just to illustrate how little it reflects actual quality: take two schools identical in every way, except one owns its building while the other rents. The renter would be ranked higher, because it is spending more per student, even though it provides exactly the same value. The measure also counts spending on all students but only JD students in the divisor, so the larger your LLM and JSD programs, the bigger your advantage, since all spending on them is complete gravy (see the toy example below).
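A toy example of the divisor problem (made-up numbers): two schools spend identically per enrolled student, but the one with a large LLM/JSD program looks far richer because only JD students appear in the denominator.

```python
# Toy numbers only -- not real data. All spending goes in the numerator,
# but only JD students go in the denominator.
def expenditures_per_jd(total_spending, jd_students):
    return total_spending / jd_students

PER_STUDENT = 50_000  # both schools spend $50k per enrolled student

no_llm  = expenditures_per_jd(600 * PER_STUDENT, jd_students=600)  # 600 JDs only
big_llm = expenditures_per_jd(800 * PER_STUDENT, jd_students=600)  # 600 JDs + 200 LLM/JSD

print(no_llm, big_llm)  # 50000.0 vs ~66667 -- same spending per head, different "quality"
```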
Given how much these rankings rely on the peer and lawyer/judge evaluation scores, it's worth noting that those scores are fairly questionable (although slightly less so when we're talking about only the top schools). The response rate is quite low, and not allowing fractional scores may create some undue variability. The small differences are likely not very reliable (NYU was better than Michigan last year, but suddenly it became worse?).
GPA/LSAT numbers don't reflect non-numerical variation in student quality. A school that values an MIT electrical engineering GPA more than a Florida State sociology GPA would do worse than one that didn't (maybe slightly mitigated in the long run through the lawyer/judge evaluation). Upward trends, work experience and extracurriculars are similarly invisible.
What could be added:
The final pared-down ranking doesn't directly include quality of teaching (peer evaluation may capture some of this) or placement (class strength and the lawyer/judge evaluation probably capture much of it). I decided to stick to US News data, though, because it's simple, uniform and available.
There are some decent measures of faculty quality out there, which could be added, but quality of scholarship may not reflect quality of teaching, and there may be some weird effects with size since these are per capita measures.
There isn't any really reliable data on placement available, imo, and that would be the single most valuable thing to have (but is also very hard to calculate, given self-selection).