Also, there are a lot of people who will get agitated at the notion that law grades are mostly random noise and typing speed. Bringing this up on here has created quite a few arguments: if you finished with good grades, you tend to think law exams are perfect measures of talent and preparation, whereas if you finished with mediocre grades, you tend to think they're BS.
Most of the objective evidence on the subject, however, points to the latter:
http://www.nesl.edu/userfiles/file/lawr ... /crane.PDF
In 1976, the Law School Admission Council published the results of a study by Stephen P. Klein and Frederick M. Hart supporting the idea that factors other than substantive knowledge affect essay grades. One factor that correlated highly with success on law school essay examinations was legible handwriting. Another leading indicator of higher grades was length. Longer answers were viewed by law professors as better.
Law schools have an obligation to use the most accurate and internally consistent (i.e., reliable) examination methods. The essay exam is inherently capricious not only because of the number of subjective factors used in scoring that influence the student's overall grade, but also because it compares law students based on too few samples of each student's knowledge of a given domain of material to be reliable or statistically valid.
The traditional law school essay exam is mathematically unsound and unable to consistently measure the law student's proficiencies within the law school's curriculum. This is due to an inability to either accurately sample the same amount of material or to render the same number of samples of a given domain of material as an objective exam can within a comparable time period. Therefore, single-shot essay exams used to measure numerous domains of information within each larger law school subject are notoriously subjective and unreliable. Accordingly, they are also invalid for their intended purpose. This is especially true given the enormous importance placed on the results of law school essay examinations and because those results are used to compare students' performances.
The essay exam format is inherently incapable of affording law students an adequate opportunity to demonstrate proficiency in an entire subject. It is infeasible for the professor to draft an essay exam capable of sampling a sufficient quantity of information from the various domains of a complex subject. If the professor were to successfully draft an essay examination lengthy enough to contain enough questions to be considered valid, it would be impossible for the student to actually complete the examination within normal time constraints, and various physical and psychological phenomena would hinder the student's ability to perform well over the course of such an arduous task. Critics of essay examinations doubt that their unreliability can be lowered to a level that makes them valid.
Like I've said before, since this study was done, SoftTest has eliminated the handwriting problem, but the fact that exams are now done on computers just means that typing speed has taken its place.
If you look at page 850 on this Google Book link, you'll see more details about what this study found:
http://books.google.com/books?id=XQgrjw ... dy&f=false
Law exam essay grades are best predicted by a combination of a) word count and b) a layman's impression of correctness (in this case, the "laymen" were two English majors who had never gone to law school and had no knowledge of the law). So basically, if you can write a lot of words and impress an English major, you're in good shape, whether you know the law or not. These two factors combined predicted about 50% of the grading variance (r = .70). That's a huge number. And, as the authors note, it's artificially low, because they only had two English majors give ratings, which depressed inter-rater reliability, which in turn artificially depressed that variable's validity measure.
Basically, if you had a bigger group of English majors give ratings, you could predict nearly 60% of a law exam's grade based only on those ratings alone and word count. And none of those factors have anything to do with knowing and applying the law, since the English majors never took a single law class in their life. And knowing/applying the law is what law exams are supposedly measuring.
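If the jump from "r = .70" to "about 50% of the variance" isn't obvious, it's just the standard rule that squaring a correlation coefficient gives the proportion of variance explained. Here's the arithmetic; the .70 figure comes from the study as described above, while the .77 value is my own hypothetical illustration of a reliability-corrected correlation that would land near the "nearly 60%" figure:

```python
# Variance explained is the square of the correlation coefficient (r^2).
r = 0.70  # reported multiple correlation: word count + layman ratings
print(f"r = {r} -> variance explained = {r ** 2:.0%}")

# A hypothetical correlation corrected for inter-rater reliability
# (illustrative only -- not a number taken from the study):
r_corrected = 0.77
print(f"r = {r_corrected} -> variance explained = {r_corrected ** 2:.0%}")
```

Running this prints 49% and 59%, matching the "about 50%" and "nearly 60%" figures in the discussion.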
The study does note that LSAT predicts something like 16 additional percentage points of variance, but at most schools, where the top 51% of the class is jammed into an LSAT range of about 2-3 points, this measure of intelligence probably loses a lot of its predictive power.
Overall, not much more than one third of your law exam grade reflects actually knowing and applying the law--at best. The rest is word count and writing ability. If you can crank out 7-8K words in three hours, and it's at least superficially good writing, you're probably in good shape.
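To put that 7-8K figure in perspective, here's the sustained typing speed it implies over a three-hour exam (the word counts are from the discussion above; the three-hour window is a typical exam length, not a number from the study):

```python
# Sustained words-per-minute needed to hit 7,000-8,000 words in three hours.
words_low, words_high = 7000, 8000
minutes = 180  # assuming a standard three-hour exam

print(f"{words_low / minutes:.0f}-{words_high / minutes:.0f} words per minute, sustained")
```

That works out to roughly 39-44 words per minute of nonstop composition--which is exactly why raw typing speed ends up mattering so much.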
Such are the risks of the law school gamble--and that's exactly why effort and intelligence, no matter how high, are no guarantees that you will succeed.