Unitas wrote: Westlaw does limit you like this. I can confirm that. I am not sure what OP thinks he is doing, though, with his "algorithm." There is no way to parse what is important in cases since they are too variable. Sometimes the holding means crap and some random saying is all-important. Then again, OP's major accomplishment is median with little work, which is probably an indication of his system not being too effective.

(emphasis added for hilarity)
Do you realize the extent to which you sound like an ignorant 0L right now? I bet you also get fucking irate when plaintiffs/defendants on Judge Judy "object" to whatever the other says. How dare they not use motions in limine for this!!! You must be the life of parties, what dreams are made of, and more.

Unitas wrote: Two points to be made. First, if everyone had the software then everyone would obviously not be made median. Second, as I pointed out, the idea is highly flawed and arguably impossible. Each case is different, each writer is different. You cannot make a script that will search for the "important" text without having universal criteria for what is important. Given this is impossible with a huge variety in authors, how they write, and what future courts find important about previous cases, no algorithm is going to find that. Even more so, the algorithm, if created, would be much better suited to searching individual textbooks for their highlights. Most textbooks cut down Supreme Court language by anywhere from 1/2 to 9/10 of the language in the actual opinion, making it far easier to parse the language for the "important" bits.
Furthermore, this algorithm would not only have to account for what is certainly not an objective test of what is important in each case, but would also have to account for a professor's preference in the cases and how they are presented.
And I'm also pretty sure this algorithm was a test question on one of the practice LSATs I took in the RC section.
At any rate, I did find your little legal analysis of my "argument" thoroughly hysterical, but for the sake of preserving TLS's pristine factual nature, I'll explain why you're not even on the same planet as me right now.
Unitas wrote: First, if everyone had the software then everyone would obviously not be made median.

I really hope that statement portends more hilarity in the future, but you're nevertheless correct: if everyone had the software, not everyone would be median. Pro-tip: the software has no effect on this... by definition, not everyone will be median. Some will be above, some below, and many around the median point, if we assume Gaussian-distributed grades.
Unitas wrote: Each case is different, each writer is different. You cannot make a script that will search for the "important" text without having universal criteria for what is important.

False. Such a program does not need universal criteria a priori; rather, it needs some method to separate the wheat from the chaff, as it were. To quote Tom Mitchell, a renowned computer scientist: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
So, my programs learn what is important semantically, using natural language processing techniques (google latent semantic indexing and non-negative matrix factorization). What's interesting about latent semantic indexing is that it uses singular value decomposition (as in principal component analysis) to relate the words in a document to the overarching "concepts" they represent. The premise is that documents containing largely the same words will cover the same concepts. By varying the rank at which you truncate the decomposition of the term-document matrix, you can control the breadth of topics covered.
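For the curious, here's a minimal sketch of that LSI step, assuming scikit-learn; the toy corpus and the component count are illustrative, not what my programs actually use:

```python
# Toy LSI sketch: project documents into a low-dimensional "concept"
# space via truncated SVD of a TF-IDF term-document matrix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "The court held the contract unenforceable for lack of consideration.",
    "Plaintiff alleges negligence after a slip and fall on defendant's land.",
    "The holding turns on whether consideration supported the modification.",
    "Defendant owed no duty of care to the trespassing plaintiff.",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                 # documents x terms (sparse)

# The rank of the truncated SVD is the number of latent "concepts";
# lower rank = broader topics, higher rank = finer-grained ones.
lsi = TruncatedSVD(n_components=2, random_state=0)
concepts = lsi.fit_transform(X)               # documents x concepts

# Documents built from largely the same words land near each other
# in concept space, even without exact phrase overlap.
print(concepts)
```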
That's literally all I did for the outlines. Using latent semantic indexing to break documents into a set of concepts, you'd be able to take an active learning approach: treating each concept as a dimension, one could train a statistical classifier (naive Bayes, neural networks, SVMs, decision trees, etc.) to learn which regions of that concept space correspond to documents that are "important."
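A sketch of that classification step, continuing from the snippet above; the labels here are made up (1 = "important" for your class, 0 = not) and would really come from the user:

```python
# Train a classifier on the concept coordinates produced by the LSI
# sketch above. Naive Bayes is just one of the options named; swap in
# an SVM or decision tree if you like.
from sklearn.naive_bayes import GaussianNB

labels = [1, 0, 1, 0]          # hypothetical importance labels

clf = GaussianNB()
clf.fit(concepts, labels)

# Active-learning flavor: score new documents, ask the user to label
# the ones the model is least certain about, then refit.
new_doc = tfidf.transform(["Was there consideration for the promise?"])
print(clf.predict_proba(lsi.transform(new_doc)))
```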
Unitas wrote: Furthermore, this algorithm would not only have to account for what is certainly not an objective test of what is important in each case, but would also have to account for a professor's preference in the cases and how they are presented.

Even though the previous paragraphs illustrate my point, you probably need someone to connect the dots for you. Whether a document is important is something that can be learned, based on the aggregate of variables that, together, define "importance" for the user. As such, the amount of class time spent covering the material relative to other class material, the way the professor presents it (i.e., does the prof focus more on policy arguments?), etc. all matter; see the sketch below.
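Purely illustrative, bolting onto the snippets above: the per-class signals here (lecture minutes, a policy-focus flag) are hypothetical features I'm inventing for the example, appended alongside the concept dimensions:

```python
# Append per-class signals to each document's concept coordinates so
# the classifier can also learn a professor's preferences.
import numpy as np

lecture_minutes = np.array([[45], [5], [30], [10]])  # hypothetical
policy_focus = np.array([[1], [0], [1], [0]])        # hypothetical

features = np.hstack([concepts, lecture_minutes, policy_focus])
clf.fit(features, labels)                            # refit on richer features
```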
Unitas wrote: Then again, OP's major accomplishment is median with little work, which is probably an indication of his system not being too effective.

In fairness, I said no work. All of my studying was from outlines the night before the exam... and I still beat half the class. What I work on certainly isn't perfect and will only get better, but the response from everyone on here suggests this is something I should quickly turn into a tool that can help law students.
So I ask: How much interest is there in a Chrome/Firefox extension that "drives" your browser around LexisNexis/Westlaw and automatically guides you to relevant content? Based on your syllabus and class notes, it'd construct complex Boolean searches that yield many results. You'd read a few and highlight the portions relevant to your class, and the program would then construct new searches.
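To give a flavor of the query-construction piece (a hypothetical sketch, not the extension itself): take the highest-weight TF-IDF terms from the passages you highlighted and join them with Westlaw-style terms-and-connectors:

```python
# Hypothetical sketch: build a Boolean query from user highlights by
# keeping the top TF-IDF terms and joining them with "&" (Westlaw AND).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def boolean_query(highlighted, top_k=5):
    vec = TfidfVectorizer(stop_words="english")
    weights = np.asarray(vec.fit_transform(highlighted).sum(axis=0)).ravel()
    vocab = np.array(vec.get_feature_names_out())
    top_terms = vocab[np.argsort(weights)[::-1][:top_k]]
    return " & ".join(f'"{t}"' for t in top_terms)

print(boolean_query(["promissory estoppel requires reasonable reliance",
                     "reliance must be foreseeable and detrimental"]))
```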