Desert Fox wrote:LOL at OP changing the weight of his index until he got the results he desired.

Desert Fox wrote:You don't get to call yourself an empiricist when you change your formula til you get results you want.

Haha. Yeah, that's pretty much what happened. But if you'll allow me the opportunity, I'd like to convince you that I wasn't being disingenuous.

Your charge is that I was being intellectually dishonest by toying with the index weights at each school in order to gin up the results I was after. I'll admit to a methodological mistake (I already have), but I devised my methodology earnestly. It was flawed, but I wasn't trying to obfuscate.

That I used the iteration routine in the first place is not prima facie evidence of my dishonesty, although I understand how it would make you suspicious. So why did I iterate, if not to be sneaky? Statistics! (Well, bad statistics.) Here was my thinking (more formalized than it was in my head), with methodological errors called out and footnoted:

I started with the null hypothesis that YP does not exist and the alternative hypothesis that it does. I then asked whether my null hypothesis explains the data, which meant devising a test that could reject it. So I asked what the data would look like if the alternative hypothesis were true. To me, YP means that a given school waitlists candidates for being over-qualified. If this were true, I surmised, then a given school would be more likely to accept certain "appropriately-qualified" candidates while waitlisting the most qualified ones (where "most qualified" means that they have strong numbers).

I reckoned a single index combining relative LSAT and GPA among candidates was a reasonable proxy for quality [1]. But introducing the index also introduced another degree of freedom: the weights of LSAT/GPA in that index. Not having priors to go on (an intuition that might come from months or years of participating on these fora), I made what I thought was a reasonable assumption of 50/50 weights. But when I compared American's admissions data to Columbia's, I realized that different schools must use different weights. So I needed a routine to estimate the weights. The iterative routine made sense to me in the context of my null hypothesis. In other words, I was trying to see if the admissions data could be explained without relying on YP.[2]
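For the curious, the kind of routine I had in mind can be sketched like this. To be clear, this is a toy reconstruction, not my actual script: the simulated applicant pool, the "true" 60/40 weights, and the fit-by-agreement criterion are all invented for the example.

```python
import random

random.seed(0)

# Hypothetical applicant pool: tuples of (lsat_pct, gpa_pct, admitted).
# The simulated "school" secretly weights LSAT 0.6 / GPA 0.4 and admits
# everyone whose index clears a cutoff -- all numbers here are made up.
def simulate_pool(n=500, true_w=0.6, cutoff=0.55):
    pool = []
    for _ in range(n):
        lsat, gpa = random.random(), random.random()
        admitted = true_w * lsat + (1 - true_w) * gpa > cutoff
        pool.append((lsat, gpa, admitted))
    return pool

def fit_accuracy(pool, w):
    # Best achievable agreement between the observed decisions and a rule
    # that admits everyone whose index (LSAT weight w) clears some cutoff.
    scored = sorted((w * l + (1 - w) * g, a) for l, g, a in pool)
    n = len(scored)
    admits_total = sum(a for _, a in scored)
    best = admits_total        # cutoff below everyone: predict all admits
    admits_below = 0
    for i, (_, a) in enumerate(scored):
        admits_below += a
        # Cutoff after rank i: predict waitlist for ranks <= i, admit above.
        correct = (i + 1 - admits_below) + (admits_total - admits_below)
        best = max(best, correct)
    return best / n

pool = simulate_pool()
weights = [i / 20 for i in range(21)]
best_w = max(weights, key=lambda w: fit_accuracy(pool, w))
print(f"estimated LSAT weight: {best_w:.2f}")
```

On real admissions data the fit is never perfect, which is exactly where my method went wrong: the chosen weights end up absorbing noise along with signal.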

[1] As user mst and others have pointed out, an index (no matter the weights) is not a good proxy. Some schools prefer LSAT scores to be over a certain threshold, but the returns on higher LSATs are diminishing (and, in the case of YP'ing schools, negative!). These schools treat splitters with a degree of caution. The same could be true of GPA. It boils down to the fact that the relationship between acceptance rate and each of LSAT and GPA is both nonlinear and multi-dimensional.
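To make that footnote concrete, here's a toy admit-probability curve (every number invented for illustration) with a soft LSAT floor and negative returns at the top end. No single linear index can reproduce a curve that rises, flattens, and then falls:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Purely illustrative model: a soft LSAT floor around 165 and a
# yield-protection penalty that kicks in near the top of the scale.
def admit_prob(lsat, gpa):
    floor = sigmoid(lsat - 165)          # diminishing returns past ~165
    overqualified = sigmoid(lsat - 176)  # YP penalty for the very top
    return (gpa / 4.0) * floor * (1 - 0.7 * overqualified)

# Probability rises through the floor, flattens, then drops for a 180.
for lsat in (160, 165, 170, 175, 180):
    print(lsat, round(admit_prob(lsat, 3.8), 3))
```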

[2] I see now that I presented my findings unfairly. My original UVA graph only showed that it's possible UVA does not YP; it did not show evidence that UVA does not YP. Absence of evidence is not evidence of absence; or, in statistical jargon, I mixed up my null and alternative hypotheses. An amateurish mistake--but hey, I'm an amateur!

Desert Fox wrote:Run the data with the published formulas that were posted a couple pages ago and repost.

I already posted the chart using 60/40 weights for UVA. I think this shows that you and others are correct about the weights, so no need to rub my nose in it. Further, I think the matrix I posted the other day is more informative. But I'm feeling generous, so here's the specific graph you're after: