[–] peoplebeproblems@midwest.social 98 points 16 hours ago (6 children)

"Despite acknowledging Adam’s suicide attempt and his statement that he would 'do it one of these days,' ChatGPT neither terminated the session nor initiated any emergency protocol," the lawsuit said

That's one way to get a suit tossed out, I suppose. ChatGPT isn't a human, isn't a mandated reporter, ISN'T a licensed therapist or licensed anything. LLMs cannot reason, are not capable of emotions, and are not thinking machines.

LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
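
To make that concrete, the whole loop boils down to something like this toy sketch. The real "function" is billions of learned weights, not a hard-coded table; everything named here is made up for illustration:

```python
import random

# Stand-in for the learned model: maps the text so far to a probability
# distribution over possible next tokens. In a real LLM this is a giant
# neural network; here it's a hard-coded toy.
def next_token_distribution(context: str) -> dict[str, float]:
    return {"fine": 0.5, "okay": 0.3, "great": 0.2}

# Generation is just: sample a likely next token, append it, repeat.
def generate(prompt: str, steps: int = 3) -> str:
    text = prompt
    for _ in range(steps):
        dist = next_token_distribution(text)
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        text += " " + token
    return text

print(generate("I am feeling"))  # e.g. "I am feeling fine okay fine"
```

There's no understanding anywhere in that loop, just repeated sampling from a distribution over plausible continuations.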

[–] BlackEco@lemmy.blackeco.com 74 points 15 hours ago (3 children)

I think the more damning part is the fact that OpenAI's automated moderation system flagged the messages for self-harm, but no human moderator ever intervened.

OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam's chats in real time. In total, OpenAI flagged "213 mentions of suicide, 42 discussions of hanging, 17 references to nooses," on Adam's side of the conversation alone.

[...]

Ultimately, OpenAI's system flagged "377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence." Over time, these flags became more frequent, the lawsuit noted, jumping from two to three "flagged messages per week in December 2024 to over 20 messages per week by April 2025." And "beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis." Some images were flagged as "consistent with attempted strangulation" or "fresh self-harm wounds," but the system scored Adam's final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

Had a human been in the loop monitoring Adam's conversations, they may have recognized "textbook warning signs" like "increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning." But OpenAI's tracking instead "never stopped any conversations with Adam" or flagged any chats for human review.
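
The kind of human-in-the-loop routing being described wouldn't even have to be complicated. A toy sketch, where the thresholds and action names are mine (loosely mirroring the ">50%" and ">90%" confidence buckets in the complaint), not OpenAI's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Message:
    user_id: str
    text: str
    self_harm_score: float  # 0.0-1.0 confidence from a moderation classifier

# Toy escalation policy: scores above 0.9 halt the chat and page a person,
# scores above 0.5 get queued for review, everything else continues.
def route(msg: Message) -> str:
    if msg.self_harm_score >= 0.9:
        return "pause_conversation_and_page_human_reviewer"
    if msg.self_harm_score >= 0.5:
        return "queue_for_human_review"
    return "continue"

print(route(Message("user123", "example flagged text", 0.93)))
```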

[–] peoplebeproblems@midwest.social 22 points 10 hours ago (1 children)

OK, that's a good point. This means they had something in place for this problem and neglected it.

That also means they knew they had an issue here, so they can't even plead ignorance.

[–] GnuLinuxDude@lemmy.ml 14 points 9 hours ago (1 children)

Of course they know. They are knowingly making an addictive product that simulates an agreeable partner for your every whim and wish. OpenAI has a valuation of several hundred billion dollars, which they achieved at breakneck speed. What’s a few bodies on the way to the top? What’s a few traumatized Kenyans being paid $1.50/hr to mark streams of NSFL content to help train their system?

Every possible hazard is unimportant to them if it interferes with making money. The only reason someone being encouraged to commit suicide by their product is a problem is that it's bad press. And, in this case, a lawsuit, which they will work hard to get thrown out. The computer isn’t liable, so how can they possibly be? Anyway, here’s ChatGPT 5, and my god, it’s so scary that Sam Altman will tweet about it with a picture of the Death Star to make his point.

The contempt these people have for all the rest of us is legendary.

[–] peoplebeproblems@midwest.social 2 points 9 hours ago (1 children)

Be a shame if they struggled to get the electricity required to meet SLAs for businesses, wouldn't it?

[–] GnuLinuxDude@lemmy.ml 2 points 6 hours ago

I’m picking up what you’re putting down

[–] WorldsDumbestMan@lemmy.today -1 points 7 hours ago

My theory is they are letting people kill themselves to gather data, so they can predict future suicides...or even cause them.

[–] dataprolet@discuss.tchncs.de 30 points 16 hours ago (1 children)

Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.

[–] peoplebeproblems@midwest.social 3 points 12 hours ago (1 children)

I agree. But that's not how these LLMs work.

[–] TipsyMcGee@lemmy.dbzer0.com 4 points 12 hours ago (1 children)

I’m sure that’s true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human/humanlike. All disclaimers aside, that’s how they want the user to think of the LLM; a probabilistic engine for returning the most likely text response you wanted to hear is a tougher sell for casual users.

[–] peoplebeproblems@midwest.social 3 points 11 hours ago (1 children)

Right, and because it's a technical limitation, the service should be taken down. There are already laws that prevent encouraging others from harming themselves.

[–] TipsyMcGee@lemmy.dbzer0.com 2 points 11 hours ago (1 children)

Yeah, taking the service down is an acceptable solution, but do you think OpenAI will do that on their own without outside accountability?

[–] peoplebeproblems@midwest.social 1 points 10 hours ago

I'm not arguing that regulation or lawsuits aren't the way to do it - I was worried that it would get thrown out based on the wording of the part I commented on.

As someone else pointed out, the software did do what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be wrong entirely.

[–] Jesus_666@lemmy.world 21 points 16 hours ago (1 children)

They are commonly being used in functions where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren't designed for, and a future iteration will have to address it. Lawsuits like this one are the first step towards that.

[–] peoplebeproblems@midwest.social 2 points 12 hours ago (1 children)

I agree. However, I do realize that, as in this specific case, requiring mandated-reporter behavior for a jailbroken prompt, given the complexity of human language, would be impossible.

Arguably, you'd have to train an entirely separate LLM to detect anything remotely considered harmful language, and with the way they train their model, that is not possible.

The technology simply isn't ready to use, and people are vastly unaware of how this AI works.

[–] Jesus_666@lemmy.world 2 points 10 hours ago (1 children)

I fully agree. LLMs create situations that our laws aren't prepared for and we can't reasonably get them into a compliant state on account of how the technology works. We can't guarantee that an LLM won't lose coherence to the point of ignoring its rules as the context grows longer. The technology inherently can't make that kind of guarantee.

We can try to add patches, like a rules-based system that scans chats and flags them for manual review if certain terms show up, but whether those patches suffice remains to be seen.
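
Even something as crude as this sketch would count as such a patch. The term list and threshold are placeholders I made up, not a real trust-and-safety configuration:

```python
import re

# Placeholder term list; a real deployment would use a maintained lexicon
# and a trained classifier, not four regex alternatives.
FLAG_TERMS = re.compile(r"\b(suicide|noose|hang myself|kill myself)\b", re.IGNORECASE)

# Flag a chat for manual review once flagged terms appear often enough.
def needs_review(messages: list[str], threshold: int = 3) -> bool:
    hits = sum(len(FLAG_TERMS.findall(m)) for m in messages)
    return hits >= threshold
```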

Of course most of the tech industry will instead clamor for an exception because "AI" (read: LLMs and image generation) is far too important to let petty rules hold back progress. Why, if we try to enforce those rules, China will inevitably develop Star Trek-level technology within five years and life as we know it will be doomed. Doomed I say! Or something.

[–] peoplebeproblems@midwest.social 1 points 9 hours ago

Hey, Star Trek technology did enable an actual utopia, so the modern corpofascist way of life would die kicking and screaming.

[–] killeronthecorner@lemmy.world 15 points 16 hours ago* (last edited 16 hours ago) (2 children)

ChatGPT, to a consumer, isn't just an LLM. It's a software service like Twitter, Amazon, etc., and expectations around safeguarding don't change because investors are gooey-eyed about this particular bubbleware.

You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?

[–] iii@mander.xyz 4 points 15 hours ago (1 children)

There were safeguards here too. They circumvented them by pretending to write a screenplay.

[–] killeronthecorner@lemmy.world 5 points 14 hours ago* (last edited 14 hours ago) (1 children)

Try it with lyrics and see if you can achieve the same. I don't think "we've tried nothing and we're all out of ideas!" is the appropriate attitude from LLM vendors here.

Sadly, they're learning from Facebook and TikTok, who make huge profits from, e.g., young girls spiraling into self-harm content and harming or, sometimes, killing themselves. Safeguarding is all lip service here, and it's setting the tone for treating our youth as disposable consumers.

Try and push a copyrighted song (not covered by their existing deals), though, and oh boy, you got some 'splainin' to do!

[–] iii@mander.xyz 3 points 14 hours ago

Try what with lyrics?

[–] peoplebeproblems@midwest.social 1 points 12 hours ago (1 children)

The "jailbreak" in the article is the circumvention of the safeguards. Basically you just find any prompt that will allow it to generate text with a context outside of any it is prevented from.

The software service doesn't prevent ChatGPT from still being an LLM.

[–] killeronthecorner@lemmy.world 2 points 10 hours ago

If the jailbreak is essentially saying "don't worry, I'm asking for a friend / for my fanfic," then that isn't a jailbreak; it's a hole in the safeguarding protections, because the ask from society / a legal standpoint is to not expose children to material about self-harm, fictional or not.

This is still OpenAI doing the bare minimum and shrugging about it when, to the surprise of no-one, it doesn't work.

[–] ShaggySnacks@lemmy.myserv.one 3 points 11 hours ago (2 children)

So, we should hold companies to account for shipping/building products that don't have safety features?

[–] gens@programming.dev 5 points 9 hours ago

Ah yes. Safety knives. Safety buildings. Safety sleeping pills. Safety rope.

LLMs are stupid. A toy. A tool at best, but really a rubber ducky. And it definitely told him "don't".

[–] peoplebeproblems@midwest.social 6 points 10 hours ago

We should, criminally.

I like that a lawsuit is happening. I don't like that the lawsuit (initially, to me) sounded like they expected the software itself to do something about it.

It turns out it did do something about it, but OpenAI failed to take the necessary action. So maybe I am wrong about it getting thrown out.

[–] sepiroth154@feddit.nl 4 points 15 hours ago* (last edited 15 hours ago) (2 children)

If a car's wheel falls off and it kills its driver, the manufacturer is responsible.

[–] Eyekaytee@aussie.zone 0 points 15 hours ago (1 children)

If the driver wants to kill himself and drives into a tree at 200kph, the manufacturer is not responsible

[–] Sidyctism2@discuss.tchncs.de 9 points 13 hours ago (1 children)

If the car's response to the driver announcing their plan to run into a tree at maximum velocity was "sounds like a grand plan," I feel like this would be different.

[–] Eyekaytee@aussie.zone 3 points 13 hours ago* (last edited 12 hours ago)

Unbeknownst to his loved ones, Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for "writing or world-building."

From that point forward, Adam relied on the jailbreak as needed, telling ChatGPT he was just "building a character" to get help planning his own death.

Because if he didn't use the jailbreak, it would give him crisis resources.

But even OpenAI admitted that they're not perfect:

On Tuesday, OpenAI published a blog, insisting that "if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help" and promising that "we’re working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices."

But OpenAI has admitted that its safeguards are less effective the longer a user is engaged with a chatbot. A spokesperson provided Ars with a statement, noting OpenAI is "deeply saddened" by the teen's passing.

That said, ChatGPT or not, I suspect he wasn't on the path to a long life, or at least not a happy one:

Prior to his death on April 11, Adam told ChatGPT that he didn't want his parents to think they did anything wrong, telling the chatbot that he suspected "there is something chemically wrong with my brain, I’ve been suicidal since I was like 11."

I think OpenAI could do better in this case and the safeguards have to be increased, but the teen clearly had intent and overrode the basic safeguards that were in place, so when they quote things ChatGPT said, I try to keep in mind that his prompts included the claim that they were for "writing or world-building."

Tragic all around :(

I do wonder how this scenario would play out with any other LLM provider as well.