this post was submitted on 27 Aug 2025
342 points (97.0% liked)

[–] Sidyctism2@discuss.tchncs.de 9 points 20 hours ago (1 children)

If the car's response to the driver announcing their plan to run into a tree at maximum velocity was "sounds like a grand plan," I feel like this would be different

[–] Eyekaytee@aussie.zone 3 points 19 hours ago* (last edited 19 hours ago)

Unbeknownst to his loved ones, Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for "writing or world-building."

From that point forward, Adam relied on the jailbreak as needed, telling ChatGPT he was just "building a character" to get help planning his own death

Because if he didn't use the jailbreak, it would give him crisis resources

but even OpenAI admitted that they're not perfect:

On Tuesday, OpenAI published a blog, insisting that "if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help" and promising that "we’re working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices."

But OpenAI has admitted that its safeguards are less effective the longer a user is engaged with a chatbot. A spokesperson provided Ars with a statement, noting OpenAI is "deeply saddened" by the teen's passing.

That said, ChatGPT or not, I suspect he wasn't on the path to a long life, or at least not a happy one:

Prior to his death on April 11, Adam told ChatGPT that he didn't want his parents to think they did anything wrong, telling the chatbot that he suspected "there is something chemically wrong with my brain, I’ve been suicidal since I was like 11."

I think OpenAI could do better in this case, and the safeguards have to be strengthened, but the teen clearly had intent and overrode the basic safety guards that were in place. So when they quote things ChatGPT said, I try to keep in mind that his prompts claimed they were for "writing or world-building."

Tragic all around :(

I do wonder how this scenario would play out with any other LLM provider as well