AreJay711 wrote:
introversional wrote:
Isn't the law, its language, and its constantly changing/developing interpretation far too nuanced a thing to rely on AI for, even in the limited capacity you're suggesting? (Even for doc review - at least beyond keyword/phrase searching, which already exists.)
Also, a set of programmed instructions can't be disbarred - thus, an attorney and firm will still be accountable. Will there be glitches? As a Microsoft Windows user, I suspect the probability of epic case fail is quite high due to "my fucking AI screwing up doc review again - lol." Can software understand the sensibilities of a judge, cultural shifts, and so on?
Anyway, as fallible as we humans are, I don't think we should outsource justice to a machine. Having said that, I do think there's a provocative idea here, but more so related to the development of more sophisticated legal phrase/keyword/precedent searching, which probably already exists. (But as a 0L I haven't yet played around with Lexis or Westlaw.)
Justice would probably be better served by a machine with algorithms to weigh the nuances in cases. It would take the arbitrariness out.
An algorithm would probably rely on some mathematical probability threshold being reached (based on its mechanized "interpretation" of nuanced data) before producing an answer, or result, one way or another. In that sense, the "arbitrariness" might be reduced, but the version of justice we'd have left would just be a system of probability-based bias. Justice, IMO, almost has to involve some level of arbitrariness.
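To make the worry concrete, here's a toy sketch of what that "probability threshold" decision might look like. Everything here is invented for illustration (the feature names, weights, and logistic score are assumptions, not any real legal-AI system): whatever data produced the weights is baked into every future verdict, which is exactly the "probability-based bias" point.

```python
import math

def decide(case_features, weights, threshold=0.5):
    """Toy verdict machine: linear score -> probability -> decision.

    `weights` stands in for whatever was learned from historical
    outcomes; the bias of that history is frozen into every ruling.
    """
    score = sum(w * x for w, x in zip(weights, case_features))
    probability = 1 / (1 + math.exp(-score))  # logistic squash to (0, 1)
    verdict = "liable" if probability >= threshold else "not liable"
    return verdict, probability

# Two hypothetical cases scored against the same fixed weights:
# identical inputs will always yield identical outputs - no discretion.
print(decide([1.0, 0.0], [2.0, -1.0]))  # strong feature 1 -> "liable"
print(decide([0.0, 1.0], [2.0, -1.0]))  # strong feature 2 -> "not liable"
```

The point of the sketch isn't the math; it's that "justice" collapses into wherever the weights and the threshold came from.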