Icechicken's Rankings: What if we only cared about inputs?
Posted: Mon Jan 22, 2018 12:36 pm
Rankings fascinate me. If we relied completely on GPA/LSAT inputs to evaluate the relative quality of law schools, what would happen?
I think this approach has a lot to commend it. GPA and LSAT percentiles are objective criteria that schools can't fake, and they appear to be the strongest indicators we have of law-school performance. This cuts out the noise of statistics that schools can manipulate, like employment outcomes, and of positive-reinforcement mechanisms like peer reputation. A rankings system focused only on these statistics rewards schools for being selective in admissions and for recruiting students with elite profiles. Since law schools already emphasize GPA/LSAT heavily in their admissions criteria, it's a safe bet that their ability to get elite numerical profiles to actually matriculate is a proxy for their attractiveness to applicants in general.
On the other hand, there are problems with this approach. Abandoning any emphasis on outcomes makes it hard to tell which schools are adding the most value. Focusing entirely on GPA and LSAT also abandons any semblance of holism and perversely rewards admissions practices that I think are bad (admitting a 4.0/180 who is unlikely to be admitted to the bar due to a criminal record, ignoring the importance of various kinds of diversity). But those concerns only really materialize if people (applicants, admissions officers, employers) start actually putting stock in input-oriented rankings. This is just a fun exercise.
Here are the results. I took the arithmetic mean of each school's 25th/50th/75th percentiles and divided it by the maximum possible value (180 for LSAT, 4.33 for GPA) to create an index value for LSAT and GPA respectively. Then I discounted each index to reflect its relative predictive power according to LSAC's studies: a .36 multiplier for LSAT and .27 for GPA. Reasonable people may disagree with weighting the LSAT more heavily this way, but I think it reflects the extent to which the LSAT is both a stronger indicator of performance and more sought-after by schools. Brian Leiter makes a compelling argument for also taking class size into account. I was too lazy, but I agree that it's much more impressive to fill a class of >500 students with talented applicants than a class of <200.
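To make the formula concrete, here's a minimal sketch of the computation in Python. The weights (.36/.27) and maximums (180/4.33) come straight from the description above; the percentile figures in the example are made-up placeholders, not any real school's numbers.

```python
# Minimal sketch of the input-only index described above.
# Percentile figures in the example are hypothetical placeholders.

LSAT_MAX = 180
GPA_MAX = 4.33
LSAT_WEIGHT = 0.36  # relative predictive power per LSAC's studies
GPA_WEIGHT = 0.27

def index(p25, p50, p75, maximum):
    """Mean of the three reported percentiles, scaled by the max possible value."""
    return (p25 + p50 + p75) / 3 / maximum

def score(lsat_percentiles, gpa_percentiles):
    """Weighted sum of the LSAT and GPA indices."""
    return (LSAT_WEIGHT * index(*lsat_percentiles, LSAT_MAX)
            + GPA_WEIGHT * index(*gpa_percentiles, GPA_MAX))

# Hypothetical school with 166/169/172 LSAT and 3.6/3.8/3.9 GPA percentiles:
print(score((166, 169, 172), (3.6, 3.8, 3.9)))  # ~0.573
```

Schools would then be ranked by this score in descending order; since both indices are scaled to [0, 1], the maximum attainable score is .63 (.36 + .27).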
Here's what the T20 (which really is a club of 19) looks like:
Berkeley does well on this measure. I think they are underrated in USNWR because they have strong 25th percentiles but weak medians compared to their peers. Similar story for USC. Texas suffers somewhat, probably because its cushy home-market position lets it punch above its weight on statistics I'm ignoring, like biglaw placement. No real surprises.
Elsewhere in the T50 (I didn't bother with schools outside of the US News T1), Brigham Young sees a massive jump to #20. W&M, Arizona, and SMU also perform much better than their USNWR rankings. Conversely, Iowa, W&L, and Wisconsin see a huge drop. I'm not sure what to make of this. I think my model is much more useful for the top 20 or so schools, which recruit nationally out of roughly the same pool of applicants. Wisconsin and BYU don't compete for many cross-admits and their graduates don't target the same job markets, so it means less that BYU's applicants look stronger on paper.
Also interesting: the biggest numerical gaps on the whole chart are the spaces between Yale/Harvard/Stanford/Chicago/NYU and between Cornell and UCLA. I was expecting a big drop at the end of the T13, but I was surprised to see huge differences among the top 4 followed by a smooth plateau from #5 to #13. Not sure what to make of it.