this post was submitted on 21 Jul 2025
49 points (93.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

While the thought of lawyers lawyering with AI gives me the icks, I also understand that at a certain point it may play out like the self-driving car argument: once the AI is good enough, it will be better than the average human -- since I think it's obvious to everyone that human lawyers make plenty of mistakes. So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, that's still a net gain. On the other hand, though, this could lead to complacency that drives even more injustice.

top 8 comments
[–] V0ldek@awful.systems 10 points 10 hours ago

Feels like this overlooks the same issue as every other AI use case:

When a human makes a mistake and is called out, they can usually fix the mistake. When genAI outputs nonsense, it's fucking nonsense; you can't fix something that's fundamentally made up, and if you try to "ask it" to fix it, it'll just respond with more nonsense. "I hallucinated this case? Certainly! Here are 3 other cases you could cite instead:" 3 new made-up cases.

[–] HeyThisIsntTheYMCA@lemmy.world 2 points 10 hours ago

Yeah, I don't care about the raw number of mistakes; I care whether the mistakes are severe enough to throw the case. Stuff like missing filing deadlines.

[–] Architeuthis@awful.systems 23 points 23 hours ago

So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, it’s still a net gain.

thats-not-how-any-of-this-works.webm

[–] corbin@awful.systems 11 points 23 hours ago

It's hard to get into the article's mood when I know that Lexis not only still exists but is now part of the Elsevier family; this is far from the worst thing that attorneys choose to do to themselves and others. Lawyers have been caught using OpenAI products in court filings and court appearances, and they have been punished accordingly; the legal profession does not seem prepared to let a "few hallucinated citations go overlooked," to quote the article's talking head.

[–] lagoon8622@sh.itjust.works 10 points 1 day ago

They will ignore your errors and respond with errors of their own. AI will decide you're guilty and deny your appeal.

A later case exactly like yours will result in an innocent verdict because the case used one different word and the butterfly effect will cause the AI to add the word "not" to the verdict.

AI will conclude that your case was ruled in error, but there's nothing it can do because the appeal was already denied.

[–] DeathsEmbrace@lemmy.world 5 points 1 day ago (1 children)

Except nobody wants to talk about what's in that 1.2, like it thinking green is an object or something completely fucked.

[–] artifex@piefed.social -2 points 1 day ago (1 children)

Yeah, one real concern is that bottom-of-the-barrel lawyers will continue to just use their $20/month ChatGPT subscription, and not something more lawyer-centric that will (eventually) be able to weed out the true stupidity almost 100% of the time.

[–] wizardbeard@lemmy.dbzer0.com 7 points 23 hours ago

Subscription? You think these people are paying to use the slop machine? I'd expect these people are using it partly because they can just use the free tier.