I agree. But that's not how these LLMs work.
I'm sure that's true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human, or at least humanlike. All disclaimers aside, that's how they want the user to think of the LLM; "a probabilistic engine that returns the most likely text response you wanted to hear" is a tougher sell for casual users.
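For what it's worth, here's a toy sketch of what that "probabilistic engine" framing means in practice. The vocabulary and scores are completely made up for illustration, not from any real model; the point is just that the model assigns probabilities to candidate next tokens and the decoder picks from that distribution:

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might produce for the next token.
vocab = ["yes", "no", "maybe", "help"]
logits = [2.1, 0.3, 1.5, -0.8]

probs = softmax(logits)

# Greedy decoding: always return the single most likely token.
greedy = vocab[probs.index(max(probs))]

# Sampled decoding: draw from the distribution, so the answer
# can differ from run to run.
sampled = random.choices(vocab, weights=probs, k=1)[0]

print("probabilities:", {w: round(p, 3) for w, p in zip(vocab, probs)})
print("greedy pick:  ", greedy)
print("sampled pick: ", sampled)
```

Nothing in there "agrees" or "cares"; it just scores continuations. That's the part that gets lost when the product is dressed up as a conversation partner.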
Right, and because it's a technical limitation, the service should be taken down. There are already laws against encouraging others to harm themselves.
Yeah, taking the service down is an acceptable solution, but do you think OpenAI will do that on their own, without outside accountability?
I'm not arguing that regulation or lawsuits aren't the way to do it. I was worried the case would get thrown out based on the wording of the part I commented on.
As someone else pointed out, the software did do what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be wrong entirely.