
TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 7:00 pm
by Stats
Intro:

In order to satisfy my own curiosity, I recently decided to code and analyze the data points posted in last year’s OCI/EIW results thread (http://www.top-law-schools.com/forums/v ... 3&t=130597). As the dataset is somewhat unique, and some of the results are potentially useful, I thought I would share a bit of what I found.

Methodology:

I chose three dependent variables to use in the analysis:
  • The absolute number of offers from law firms [note that this is mostly included as a point of comparison--it's not especially informative when viewed in isolation because it depends upon a number of important factors that are not controlled for (e.g., bidder strategy, confidence, luck, and the number of callback interviews that the respondent decided to attend)]
  • The percentage of OCI/EIW screening interviews that were converted into callback interview offers (“Screening Interview Conversion Rate”)
  • The percentage of callback interviews accepted that were converted into offers of employment for the following summer (“Callback Interview Conversion Rate”)
I used the following explanatory variables (a rough code sketch of the full regression setup appears below the list):
  • School rank, based on average 2010 USNWR ranking in each grouping of schools (e.g., CCN = 5)
  • Transfer differential – the difference in USNWR ranking between the school attended during 1L and the school attended during 2L (0 if the respondent did not transfer)
  • Approximate law school class rank as a percentile (e.g., top 10% = 90, not .10 or .90)
  • Legal markets in which the respondent pursued employment
  • Law review membership
  • Work experience (1 if any, 0 otherwise)
  • IP background
  • URM status
  • Self-assessed interview ability*
*This variable is slightly problematic: several posts suggested respondents were considering their results when assessing their interview ability, which creates what stats people call an endogeneity problem. Nevertheless, the variable is potentially interesting, so I’ve posted versions of each regression with and without this variable.
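
For the code-inclined, here is a minimal sketch of the kind of setup described above, in Python with pandas and statsmodels. This is a rough illustration rather than the actual code or software used for the study, and every column name (screen_conv, school_rank, etc.) is a hypothetical stand-in for the variables listed above.

[code]
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coded dataset; one row per respondent.
df = pd.read_csv("oci_2010_responses.csv")

# Screening-stage model: callbacks received / screening interviews attended.
# Dummies for the legal markets bid are omitted here for brevity.
screen_model = smf.ols(
    "screen_conv ~ school_rank + transfer_diff + class_pctile + law_review"
    " + work_exp + ip_background + urm + interview_ability",
    data=df,
).fit()
print(screen_model.summary())
[/code]

Swapping callback_conv or total_offers in as the dependent variable mirrors the other two regressions, and dropping interview_ability gives the versions without the self-assessed interview variable.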

Caveat: The sample is weighted in two ways that could potentially bias the results: (1) most respondents attended a T50 or better school; and (2) most respondents did quite well during 1L (the median reported class rank was better than top 25%). Without getting into too much detail here, I think both of these weightings are likely to bias the results in the direction of understating the importance of law school rank and class rank.

Results:

[Regression result tables for the three models (total offers, screening interview conversion rate, and callback interview conversion rate) were posted here as images.]

Analysis:

These results could be discussed at length in an academic format, but let me draw your attention to the aspects that I find most interesting:
  • School rank appears to play a role that goes beyond the mere contextualization of law school grades. Compare the magnitude and significance of law school rank at the screening and callback stages with those of class rank at the same stages: law school rank actually becomes more important at the callback stage, while class rank appears to play a much smaller (if any) role.
  • Transferring pays off! While transfer students appear to be treated as though they did not transfer at all during the screening stage, which is what we might expect ex ante, they get a significant boost during the callback stage (the importance of school rank goes up, yet the magnitude and significance of the transfer differential variable goes down).
  • Law review membership doesn’t appear to buy much during OCI. This result surprised me enough that I ran another half-dozen regressions (including regressions incorporating various interaction terms and proxies for firm selectivity/prestige; one such specification is sketched below), but I was unable to find a statistically significant effect in any regression.
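
For the curious, one such specification could look like the following. This is a hedged sketch only: firm_selectivity is a hypothetical respondent-level proxy (e.g., the average prestige of the firms where the respondent interviewed), not a field from the actual survey.

[code]
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("oci_2010_responses.csv")  # hypothetical coded dataset

# law_review * firm_selectivity expands to both main effects plus their
# interaction term.
interact_model = smf.ols(
    "callback_conv ~ school_rank + class_pctile + transfer_diff"
    " + law_review * firm_selectivity",
    data=df,
).fit()
print(interact_model.summary())

# A near-zero, statistically insignificant coefficient on
# law_review:firm_selectivity would match the null result described above.
[/code]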
Anyway, I’m happy to elaborate on any aspect of this if anyone is interested in discussing this admittedly nerdy endeavor.

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 7:20 pm
by PriyaRai
Wow

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 7:30 pm
by Kswizzie
Interesting stuff... no significance for Work Ex... just curious do you have a p-score for that?

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 7:41 pm
by Stats
Kswizzie wrote:Interesting stuff... no significance for Work Ex... just curious do you have a p-score for that?
For the total offers model, it's in the range of .2, and for the other models it's north of .5. I think the problem here is that my dummy variable groups people with 1-2 years of mediocre WE between UG and LS (a substantial number of respondents) with those who have substantial professional experience (e.g., 3-5 years as a CPA or whatnot). I wouldn't draw too many conclusions from this particular result.

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 8:25 pm
by lovelaw27
Stats, can you explain what the transfer differential means to someone who doesn't know anything about regression analysis?

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 9:07 pm
by Stats
lovelaw27 wrote:Stats, can you explain what the transfer differential means to someone who doesn't know anything about regression analysis?
Sure! Suppose a student gets top 5% grades at a school ranked 40 and then transfers to a school ranked 10. During OCI, how should the employers view the student? One would expect that they would not view his class rank as directly comparable to the class ranks of his new classmates--after all, the competition was likely stronger at the school ranked 10 than it was at the school ranked 40. To account for this, one would imagine that they would effectively adjust his rank downwards in accordance with the difference between the new school and the old school (for example, moving from GULC to Penn would likely result in a relatively small adjustment, whereas moving from Hofstra to Penn would likely result in a relatively large adjustment). "Transfer differential" is the variable I use to capture this adjustment.

If you look at the coefficients in the study, you'll see that during the screening process, any improvement in school rank achieved through transferring is eliminated entirely by this adjustment. In the example above, if you just look at the School Rank variable, a move from a school ranked 40 to one ranked 10 would seem to improve the screening conversion rate by about 15% (-.5%*-30 = 15%); but then you have to factor in the transfer differential, which decreases the screening conversion rate by slightly more than 15% (-.51%*30 = -15.3%). But during the callback stage, the transfer differential adjustment is much less (and possibly nil).

On net, then, the transfer from 40 to 10 will not improve this hypothetical student's screening conversion rate, but it may improve his callback conversion rate by as much as 23% (-.78%*-30 = 23.4%)! Given that the average callback conversion rate is around 50%, this could potentially be a huge benefit for the student.
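
If it helps to see that arithmetic in one place, here is a tiny script that reproduces the back-of-the-envelope numbers above. The coefficients are the rounded values quoted in this post, treated as approximate percentage-point effects per rank.

[code]
# Rounded coefficients from the regressions (percentage points per rank).
SCHOOL_RANK_SCREEN = -0.5     # school rank, screening stage
TRANSFER_DIFF_SCREEN = -0.51  # transfer differential, screening stage
SCHOOL_RANK_CALLBACK = -0.78  # school rank, callback stage

rank_change = 10 - 40    # moving from a school ranked 40 to one ranked 10
transfer_diff = 40 - 10  # 1L rank minus 2L rank

screening_effect = (SCHOOL_RANK_SCREEN * rank_change
                    + TRANSFER_DIFF_SCREEN * transfer_diff)
callback_effect = SCHOOL_RANK_CALLBACK * rank_change  # differential ~0 at callback

print(f"Screening conversion change: {screening_effect:+.1f} points")  # about -0.3
print(f"Callback conversion change:  {callback_effect:+.1f} points")   # about +23.4
[/code]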

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 9:40 pm
by dood
u on addy bro?

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 10:03 pm
by Stats
dood wrote:u on addy bro?
touche. probably went a bit overboard on this, but I felt compelled to try to find some answers to the questions that always seem to get asked around OCI each year. now there's some empirical evidence to verify or, to some extent, falsify the common wisdom that is so frequently recited.

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 10:05 pm
by rman1201
Good work, this is pretty awesome. Do you think the sample is large enough considering all of the interactions?

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 10:07 pm
by mbusch22
tag

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 10:23 pm
by Renzo
I approve of this endeavor. Seek funding for a more formal survey of law students; you could quantify the "value added" of grades & school prestige!

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 10:26 pm
by fatduck
seriously outstanding work. thank you.

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 10:34 pm
by Stats
rman1201 wrote:Good work, this is pretty awesome. Do you think the sample is large enough considering all of the interactions?
That's a fair question, and one I discussed a bit with a stats prof I know. I don't think it's the case that any single datapoint is substantially skewing any of the results--indeed, I used a few of the standard statistical tests to search for "outliers" and didn't find anything that fit the bill (other than a coding error, which I corrected). On the other hand, some of the subtler effects are probably being lost because of the sample size--for example, grades probably do have *some* effect during the callback stage--but this shouldn't jeopardize the broader findings, especially with respect to the relative importance of various factors.
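
For reference, a standard influence check of this sort (not necessarily the exact tests used here, and with the same hypothetical column names as the sketch in the OP) looks like this:

[code]
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("oci_2010_responses.csv")  # hypothetical coded dataset

model = smf.ols(
    "screen_conv ~ school_rank + transfer_diff + class_pctile + law_review",
    data=df,
).fit()

# Cook's distance flags observations with outsized influence on the fit;
# 4/n is a common rule-of-thumb cutoff.
cooks_d, _ = model.get_influence().cooks_distance
threshold = 4 / len(cooks_d)
flagged = [i for i, d in enumerate(cooks_d) if d > threshold]
print("High-influence observations:", flagged)
[/code]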

Re: TLS 2010 OCI Study

Posted: Wed Feb 16, 2011 11:57 pm
by Sup Kid
I'd imagine that a majority of the posters attend T14 schools. Does that affect any of these statistics, or change the influence of the sample size issue?

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 3:10 am
by Knock
fatduck wrote:seriously outstanding work. thank you.

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 3:13 am
by 20160810
This is the best thread I've seen in a long, long, long time.

The rest of you: Be more like this guy.

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 3:16 am
by Curry
Knock wrote:
fatduck wrote:seriously outstanding work. thank you.

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 3:30 am
by Renzo
SBL wrote:This is the best thread I've seen in a long, long, long time.

The rest of you: Be more like this guy.
Whatever, like you're doing anything around here, other than hanging around, acting all smug in your red username.

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 11:04 am
by Stats
Sup Kid wrote:I'd imagine that a majority of the posters attend T14 schools. Does that affect any of these statistics, or change the influence of the sample size issue?
Yes, I think a little more than half attended T14s, and almost everyone attended a T50 or better. The biggest issue with this is that it limits applicability to the top 50 schools or so. It's not clear that the factors identified here have the same effects at substantially lower-ranked schools (especially T3s and T4s).

As I noted in the caveat, it's also likely that this weighting in the data, in conjunction with the disproportionate academic success reported by respondents, biases the study a bit in the direction of understating the magnitude of these effects. For example, if firms largely use grades and school rank as threshold criteria, we may be missing out on some threshold effects if substantially all respondents are above the thresholds.

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 1:03 pm
by seespotrun
Stats, are you thesealocust's alt?

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 1:07 pm
by keg411
The transfer information is really helpful! Thanks :).

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 1:23 pm
by Stats
seespotrun wrote:Stats, are you thesealocust's alt?
Nope

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 1:33 pm
by 09042014
So Chicago + IP = win?

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 5:12 pm
by Stats
Desert Fox wrote:So Chicago + IP = win?
:D No, I wouldn't say that. As I mentioned in the OP, the first regression (total number of offers) is really just there to serve as a point of comparison. In isolation, it's of limited use because it doesn't control for, among other things, confidence, level of risk aversion, and indecisiveness. All of these things are likely to strongly affect the absolute number of offers, but not the screening and callback conversion rates. This makes the first regression much less informative than the second and third regressions.

With respect to legal markets, I think the numbers provide strong support for the notion that DC is the hardest legal market to crack. If Chicago were, likewise, the easiest market to crack, we'd see significantly higher screening and callback conversion rates (the opposite of DC), but that's not the case.

Re: TLS 2010 OCI Study

Posted: Thu Feb 17, 2011 5:15 pm
by 09042014
Stats wrote:
Desert Fox wrote:So Chicago + IP = win?
:D No, I wouldn't say that. As I mentioned in the OP, the first regression (total number of offers) is really just there to serve as a point of comparison. In isolation, it's of limited use because it doesn't control for, among other things, confidence, level of risk aversion, and indecisiveness. All of these things are likely to strongly affect the absolute number of offers, but not the screening and callback conversion rates. This makes the first regression much less informative than the second and third regressions.

With respect to legal markets, I think the numbers provide strong support for the notion that DC is the hardest legal market to crack. If Chicago were, likewise, the easiest market to crack, we'd see significantly higher screening and callback conversion rates (the opposite of DC), but that's not the case.
I was joking. There probably weren't enough samples of Chicago bids to make clear trends. Conventional wisdom is that NYC is better.