this post was submitted on 27 Aug 2025
319 points (96.8% liked)

[–] BlackEco@lemmy.blackeco.com 74 points 15 hours ago (3 children)

I think the more damning part is that OpenAI's automated moderation system flagged the messages for self-harm, yet no human moderator ever intervened.

OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam's chats in real time. In total, OpenAI flagged "213 mentions of suicide, 42 discussions of hanging, 17 references to nooses," on Adam's side of the conversation alone.

[...]

Ultimately, OpenAI's system flagged "377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence." Over time, these flags became more frequent, the lawsuit noted, jumping from two to three "flagged messages per week in December 2024 to over 20 messages per week by April 2025." And "beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis." Some images were flagged as "consistent with attempted strangulation" or "fresh self-harm wounds," but the system scored Adam's final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

Had a human been in the loop monitoring Adam's conversations, they may have recognized "textbook warning signs" like "increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning." But OpenAI's tracking instead "never stopped any conversations with Adam" or flagged any chats for human review.
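The filings don't describe how OpenAI's escalation logic actually works (or why it never routed anything to a person), but as a rough sketch of what "a human in the loop" could look like, here is a minimal, hypothetical Python example: per-message self-harm confidence scores feed a policy that pages a reviewer when a single score is very high or when the weekly flag count keeps climbing. Every name and threshold below is an assumption for illustration, not OpenAI's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical thresholds -- illustrative assumptions, not OpenAI's actual values.
SINGLE_FLAG_THRESHOLD = 0.90   # one flag this confident goes straight to a person
WEEKLY_FLAG_THRESHOLD = 20     # this many flags in 7 days also triggers review

@dataclass
class Flag:
    timestamp: datetime
    self_harm_score: float      # classifier confidence, 0.0 to 1.0

@dataclass
class EscalationPolicy:
    flags: list = field(default_factory=list)

    def record(self, flag: Flag) -> str | None:
        """Store a flagged message; return a reason if a human should review the chat."""
        self.flags.append(flag)

        # Rule 1: a single high-confidence flag is escalated immediately.
        if flag.self_harm_score >= SINGLE_FLAG_THRESHOLD:
            return "single high-confidence self-harm flag"

        # Rule 2: a sustained rise in weekly flag volume is escalated too.
        week_ago = flag.timestamp - timedelta(days=7)
        recent = [f for f in self.flags if f.timestamp >= week_ago]
        if len(recent) > WEEKLY_FLAG_THRESHOLD:
            return f"more than {WEEKLY_FLAG_THRESHOLD} flags in the past week"

        return None

if __name__ == "__main__":
    policy = EscalationPolicy()
    print(policy.record(Flag(datetime.now(), self_harm_score=0.93)))
    # -> "single high-confidence self-harm flag"
```

By the lawsuit's own numbers (23 messages over 90 percent confidence, 20+ flags per week by April 2025), either rule would have fired repeatedly; the point of the sketch is just that wiring flags to a reviewer is not an exotic engineering problem.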

[–] peoplebeproblems@midwest.social 22 points 10 hours ago (1 children)

OK, that's a good point. It means they had something in place for this problem and neglected it.

It also means they knew they had an issue here, so they can't even claim ignorance.

[–] GnuLinuxDude@lemmy.ml 14 points 9 hours ago (1 children)

Of course they know. They are knowingly making an addictive product that simulates an agreeable partner to your every whim and wish. OpenAI reached a valuation of several hundred billion dollars at breakneck speed. What's a few bodies on the way to the top? What's a few traumatized Kenyans being paid $1.50/hr to mark streams of NSFL content to help train their system?

Every possible hazard is unimportant to them if addressing it interferes with making money. The only reason their product encouraging someone to commit suicide is a problem for them is that it's bad press. And in this case a lawsuit, which they will work hard to get thrown out. The computer isn't liable, so how can they possibly be? Anyway, here's ChatGPT 5, and my god it's so scary that Sam Altman will tweet about it with a picture of the Death Star to make his point.

The contempt these people have for all the rest of us is legendary.

[–] peoplebeproblems@midwest.social 2 points 9 hours ago (1 children)

Be a shame if they struggled to get the electricity required to meet SLAs for businesses, wouldn't it.

[–] GnuLinuxDude@lemmy.ml 2 points 6 hours ago

I’m picking up what you’re putting down

[–] WorldsDumbestMan@lemmy.today -1 points 7 hours ago

My theory is they are letting people kill themselves to gather data, so they can predict future suicides...or even cause them.