Hey-O wrote:
acrossthelake wrote:
You might enjoy "The Black Swan" by Taleb. It is a very interesting look at the limits of statistical analysis and human predictive capabilities.
I had a seminar that discussed that book last semester. This is true, though statistical predictions often do better than flat-out human predictions.
Really? I would be interested in this. Where are you getting this from? Is this for general human predictions, or for targeted predictions? For instance, I would take the prediction of a particular student's teacher over a prediction of how that student would do based on test scores.
If you take a look at that book by Hastie and Dawes, they go over it.
Basically, (almost?) all research that pits a human against a statistical model (made by humans) on a particular prediction task finds that the model often wins, even over the best-performing human. Here, let me quote from it:
...Paul Meehl published a highly influential book in which he reviewed approximately 20 studies comparing the clinical judgment of people (expert psychologists and psychiatrists in his study) with the linear statistical model based only on relationships in the empirical data on the events of interest. In all studies evaluated, the statistical method provided more accurate predictions (or the two methods tied)...Sawyer reviewed 45 studies comparing clinical and statistical prediction. Again, there was not a single study in which clinical global judgment was superior to the statistical prediction...Sawyer...even included two studies in which the clinical judges had access to more information (an interview with each person being judged) but still did worse....
Goldberg asked experienced clinical diagnosticians to distinguish between neurosis and psychosis on the basis of personality test scores (a decision that has important implications for treatment and for insurance coverage in psychotherapeutic practice). He constructed a simple linear decision rule....Starting with a new sample of patient cases and using the patients' discharge diagnoses as the to-be-predicted criterion value, "Goldberg's rule" (the model) achieved an accuracy rate of approximately 70%. The human judges, in comparison, performed at rates from slightly above chance (50%) to 67% correct. Not even the best human judge was better than the mechanical adding-and-subtracting rule...
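In case it helps to see how dumb a rule like that really is, here's a quick sketch. The book just calls it a mechanical adding-and-subtracting rule over personality test scores; the specific MMPI scales and the cutoff of 45 below are the version of Goldberg's rule I've seen cited elsewhere, so treat those exact numbers as my assumption, not a quote from the book:

# A minimal sketch of a Goldberg-style "adding-and-subtracting" rule.
# The MMPI scales (L, Pa, Sc, Hy, Pt) and the cutoff of 45 are the
# commonly cited version of Goldberg's rule -- treat them as assumptions.

def goldberg_rule(t_scores, cutoff=45.0):
    """Return 'psychosis' or 'neurosis' from a dict of MMPI T-scores."""
    index = (t_scores["L"] + t_scores["Pa"] + t_scores["Sc"]
             - t_scores["Hy"] - t_scores["Pt"])
    return "psychosis" if index >= cutoff else "neurosis"

# Hypothetical patient profile, purely for illustration.
patient = {"L": 50, "Pa": 65, "Sc": 70, "Hy": 60, "Pt": 55}
print(goldberg_rule(patient))  # index = 70, so 'psychosis'

The point is that something this simple -- no fitted weights, no interview, just add and subtract -- still beat every human judge in the sample.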
Actually, I led a one-hour discussion in my seminar last semester about whether relying on numbers that prove to be more predictive than any other measure is a rational choice in law school admissions.
My professor (who specializes in rational decision making) said he was actually surprised to hear that law school admissions follows the only process that researchers in decision making would agree with. There's a lot in the book about how holistic measures, such as interviews, are often inconsistent and poorly predictive because of the natural fallibility of human judgment.
They also had this to say about holistic versus numbers-based grad school (like, for specific subjects) admissions:
Such results bring us to an unsettling conclusion: A lot of outcomes about which we care deeply are not very predictable. For example, it is not comforting to members of a graduate school admissions committee to know that only 23% of the variance in later faculty ratings of a student can be predicted by a unit weighting of the student's undergraduate GPA, his or her GRE score, and a measure of the student's undergraduate institution selectivity--but that is in comparison to 4% based on those committee members' global ratings of the applicant. We want to predict outcomes that are important to us. It is only rational to conclude that if one method does not predict well, something else may be better. What is not rational--in fact, it's irrational--is to conclude that this "something else" necessarily exists, and in the absence of positive supporting evidence, that it is intuitive global judgment.
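For anyone wondering what "unit weighting" means there: you standardize each predictor and add them up with weights of exactly +1, and the "variance predicted" is just the squared correlation between that composite and the outcome. A quick sketch of the mechanics -- the data and coefficients below are entirely synthetic, made up for illustration, not the numbers from any real study:

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic applicants: GPA, GRE, and undergrad selectivity (all made up).
gpa = rng.normal(3.3, 0.4, n)
gre = rng.normal(310, 10, n)
selectivity = rng.normal(0.0, 1.0, n)

# Hypothetical criterion: later faculty rating, weakly related to all three.
rating = 0.5 * gpa + 0.03 * gre + 0.3 * selectivity + rng.normal(0, 1.0, n)

def z(x):
    """Standardize to mean 0, sd 1 so the predictors are comparable."""
    return (x - x.mean()) / x.std()

# Unit weighting: every standardized predictor gets a weight of exactly +1.
composite = z(gpa) + z(gre) + z(selectivity)

r = np.corrcoef(composite, rating)[0, 1]
print(f"variance in ratings predicted by the composite: {r**2:.0%}")

Notice that no regression is ever fit -- the weights are fixed at +1 -- which is what makes the 23% vs. 4% comparison in the quote so damning for global judgment.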