this post was submitted on 19 May 2025
419 points (96.2% liked)
A Boring Dystopia
you are viewing a single comment's thread
How long will it take an 'AI' chatbot to spiral downward into bad advice, lies, insults, and/or the promotion of violence and self-harm?
We're already there. Though in that case the violence didn't stem from insults, but from a yes-bot affirming the ideas of a mentally ill teenager.
Wrong. Massively wrong. This mother is 180% at fault for the death. C.AI has been heavily censored, time and again. The reality is that this kid was already mentally ill and pushed the bot into behaving that way. I'm an ex-user, and the bots there just don't do those things unless YOU press them to. SMH.
Yeah, what an awful mom for not knowing enough about the brand-new technology her 14-year-old discovered. How dare busy parents not know everything about extremely recent technological developments! Every parent should not only 100% know everything their 14-year-old is doing online at all times, but they should also be at least as up to date on tech news as you are. Any less than that should be considered negligence!
Oh, but the company in control of the service doesn't need to provide any sort of safeguards to prevent this kind of tragedy. It's not like other mentally ill individuals (children and adults alike) will get hurt by AI chatbots affirming their delusions. And if they do, there's always someone besides the company who can be blamed!
Have you even fucking seen the platform??? Forming your opinion from the article alone, smooth one. They have constant reminders that it's not real and not to believe what the bots say. No, parents shouldn't be doing 1984-style 24/7 monitoring of their kids, but if you have no fucking idea what your kids are doing online, you are an idiot, regardless of how busy you are.