[–] MagicShel@lemmy.zip 5 points 1 day ago (1 children)

I don't think a chatbot should be treated exactly like a human, but I do think there is an element of caveat emptor here. AI isn't 100% safe and can never be made completely safe, so either the product is restricted from the general public, making it the purview of governments, foreign powers, and academics, or we have to accept some personal responsibility to understand how to use it safely.

OAI likely should have a procedure for stepping in and shutting down accounts, though.

[–] nyan@lemmy.cafe 1 points 19 hours ago (1 children)

A chatbot is a tool, nothing more. Responsibility here falls on the people who deployed a tool that wasn't fit for purpose: the sympathetic human conversational partner the AI was supposed to mimic would have done anything but what it did. Even changing the subject or spouting total gibberish would have been better than encouraging this kid. So OpenAI is indeed responsible, and hopefully will end up with their pants sued off.

[–] MagicShel@lemmy.zip 1 points 17 hours ago

Yeah, that's the problem with how they're marketing it. It's a tool for experts, not laymen.

I don't think the problem is ChatGPT itself; it just does what it does and folks get what they get. The real problem is that people aren't being informed about what it can and can't do (see all the people asking it to count letters, or the ones who think they've hacked the system prompt because the AI said they did).

In this case, the user is asking ChatGPT to act as a friend and confidant, which is something it can't do and a use case that's impossible to detect. The user simply has to understand that it lacks every quality required for a relationship of any kind. Everything a user says is just input to a mathematical model that tries to complete it with something a human might say.

So it responds to a fictional scenario I might be writing for a book or game exactly the way it responds to a user looking for companionship. There is no way to tell the difference without genuine understanding, and token vector comparisons don't provide that.
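
To illustrate, here's a minimal sketch (assuming the `openai` Python package, an API key in the environment, and an arbitrary model name): both prompts below go through the exact same completion call, and the request carries nothing but the tokens of the message, so there's nowhere for the user's actual intent to live.

```python
# Minimal sketch: the same completion call handles a game-writing prompt
# and a companionship prompt. Nothing in the request distinguishes them.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompts = [
    "Write dialogue for a lonely NPC confiding in the player character.",  # fiction/game use
    "I feel like you're the only one I can really talk to anymore.",       # someone seeking a friend
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model behaves the same here
        messages=[{"role": "user", "content": prompt}],
    )
    # In both cases the model just produces a plausible, human-sounding continuation.
    print(response.choices[0].message.content)
```

Either way, the model's job is identical: produce the most plausible continuation of the tokens it was given.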

It's like fire. A user can buy and use a lighter, and fire can act like a friend when you're cold or hungry, but it'll burn you if you try hugging it.