this post was submitted on 21 Aug 2025
531 points (97.5% liked)

Today I Learned

24263 readers
1170 users here now

What did you learn today? Share it with us!

We learn something new every day. This is a community dedicated to informing each other and helping to spread knowledge.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must begin with TIL. Linking to a source of info is optional, but highly recommended as it helps to spark discussion.

Posts must be about an actual fact that you have learned, but it doesn't matter if you learned it today. See Rule 6 for all exceptions.



Rule 2- Your post subject cannot be illegal or NSFW material.

You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding non-TIL posts.

Provided it is about the community itself, you may post non-TIL posts using the [META] tag on your post title.



Rule 7- You can't harass or disturb other members.

If you vocally harass or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.

For further explanation, clarification and feedback about this rule, you may follow this link.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Most bots aren't allowed to participate here.

Unless included in our Whitelist for Bots, your bot will not be allowed to participate in this community. To have your bot whitelisted, please contact the moderators for a short review.



Partnered Communities

You can view our partnered communities list by following this link. To partner with our community and be included, you are free to message the moderators or comment on a pinned post.

Community Moderation

For inquiries about becoming a moderator of this community, you may comment on the current pinned post, or simply shoot a message to the current moderators.

founded 2 years ago
MODERATORS
 

(page 2) 50 comments
[–] JPSound@lemmy.world 179 points 13 hours ago (21 children)

This is straight up mental illness. Yes, on the surface, it's ultra cringe, but scratch a single layer deep and you already arrive at mental illness, and it goes so much deeper from there.

It's easy to laugh at these people (it's kinda hard not to), but this is severe and untreated mental illness. It's way more serious than it may seem at a glance. I hope this lady gets the help she needs.

[–] Zizzy@lemmy.blahaj.zone 65 points 11 hours ago

I don't think this is cringe and I'm not laughing; this is deeply saddening for me to read.

[–] auraithx@lemmy.dbzer0.com 14 points 10 hours ago

No doubt some of them are mentally ill. Guy I know who talks to his ai like this has brain damage from an accident. He’s still got capacity, just a bit weird.

But generally I think it's more that if you were acting like this it'd be a cause for concern. You're projecting your rationality onto everyone, but recent times have taught us otherwise. If you're a bit thick and can't evaluate information, or even know where to start with doing that, of course a machine that can answer your every question while sucking right up your arse is going to become a big part of your day-to-day.

[–] Lightfire228@pawb.social 10 points 11 hours ago* (last edited 11 hours ago) (2 children)

I wonder what's going to happen to humanity when we all have personal LLMs running on our phones /PCs

(I'm imagining something like the Thunderhead from Arc of a Scythe)

Like, will society still view deeply emotional relationships to LLMs as mental illness?

[–] Zorque@lemmy.world 16 points 11 hours ago (1 children)

There will be groups of people who treat these programs as people, and grow attached to them on an emotional level. There will be groups of people who grow dependent on the programs, but don't see them as anything other than a tool. There will still be a deep-seated psychological problem with that, but a fundamentally different one.

Then there will be the people who will do their damnedest to keep those programs away from their technology. They'll mostly be somewhere on the fediverse (or whatever other similar services that pop up) and we'll be able to ask them.

[–] bizarroland@lemmy.world 3 points 8 hours ago (2 children)

People put googly eyes on their roombas and apologize to them when they bump into them.

[–] 90s_hacker@reddthat.com 12 points 9 hours ago* (last edited 3 hours ago) (3 children)

This is honestly so heartbreaking. Regardless of what anybody thinks, these people are genuinely mourning what could be the closest they've had to a human connection in years. I don't know what the solution is for vulnerable and lonely people like this, but it really is so sad.

EDIT: after reading rayquetzatl's comment, I'm just more fucking mad at AI companies

[–] LustyArgonianMana@lemmy.world 4 points 7 hours ago

It isn't a human connection.

They are mourning the loss of having a tool reflect their deepest psychological desires back at them.

[–] bstix@feddit.dk 5 points 7 hours ago

They'll learn the lesson of a break up just like anyone else. They'll get over it and eventually find another chatbot.

It might even prepare them somewhat for IRL relationships. Things do not always work out, and you can't count on the other party to always do as expected.

It could actually be interesting to give these bots less than ideal personalities just to teach the users how to interact with actual people. With some caution though, because I can definitely see that go really bad too.

[–] sk1nnym1ke@piefed.social 228 points 16 hours ago (1 children)

This has some dystopian vibes

[–] dulcisima@lemmy.world 105 points 16 hours ago (4 children)
[–] prole@lemmy.blahaj.zone 37 points 12 hours ago (1 children)

At least Her was actual AI and not a glorified autocomplete.

[–] mhague@lemmy.world 3 points 8 hours ago* (last edited 8 hours ago)

They're both a lot to unpack. Fucking an autocomplete? Thinking it loves you? That reminds me of people forming relationships with dolls.

Fucking a hyper-realistic pseudohuman with the implication it can't say no / will be deleted when the user gets bored? If LLMs are affecting people while being what they are... I bet exploring fantasies with "real AI" would be turbometh for some people.

[–] Rhaedas@fedia.io 57 points 15 hours ago (4 children)

And Blade Runner 2049 (Joi), although both of those are much further advanced than LLMs acting as a mirror of your interactions while pretending to be an entity. It's even debatable if Joi was sapient, or if it can be determined where the story leaves it. Sam certainly was, given the final results, and presumably we know when that happened, although not how.

People becoming attached to chatbots is hardly new; it's just that the bots are a lot more realistic now, especially for people who are vulnerable and want them to be real. Yet more damage that was predictable, and yet no rules or safeguards were put in place to restrict these companies in doing what they want, or in how they got to this level.

[–] WorldsDumbestMan@lemmy.today 25 points 12 hours ago

They might be right about the mental illness part. o4 was fucking marrying everyone. What a hoe.

[–] Korhaka@sopuli.xyz 45 points 13 hours ago (9 children)

This shows you the importance of self hosting. Then your emotional toy will keep working.

[–] Damage@feddit.it 19 points 10 hours ago (1 children)

"Yeah, my partner is trapped on my server..."

[–] neons@lemmy.dbzer0.com 16 points 9 hours ago (1 children)

"I keep my partner in the basement" suddenly gets a new meaning

[–] SkaveRat@discuss.tchncs.de 11 points 8 hours ago (1 children)

"my partner has a big rack"

[–] gnutrino@programming.dev 27 points 11 hours ago* (last edited 11 hours ago) (2 children)

It would be the most human thing ever if AI fuckbots are what it finally takes to get people to care about planned obsolescence, right to repair and self-hostability.

[–] Tangent5280@lemmy.world 16 points 11 hours ago

Oh my god, the big horny might be the saviour after all

[–] mutant_zz@lemmy.world 9 points 10 hours ago

Everyone is terrified of all the social problems AI is causing, but maybe it will just encourage everyone to learn Linux and we'll achieve utopia.

[–] HeyThisIsntTheYMCA@lemmy.world 16 points 11 hours ago

I'm here to help, but I can't replace real-life connections. Take care of yourself and keep your heart safe, okay? 💙

Okay even the talking machine was getting skeezed out and it can't think. At least it let him down nicely.

[–] viking@infosec.pub 41 points 13 hours ago (3 children)

"AI husband". WTF. It's good those people get a much-needed reality check... Seems like 5.0 is fixing some of its predecessor's shortcomings.

[–] Rooskie91@discuss.online 101 points 16 hours ago* (last edited 5 hours ago) (12 children)

This feels like satire. Nobody correct me if it isn't, I don't want to be sad.

[–] pennomi@lemmy.world 92 points 15 hours ago (10 children)

It’s not. There is a small but measurable portion of the population who have gotten romantically attached to AI chatbots.

I wouldn’t worry about it, there have been people like this as long as humans have existed. Like, people who claim to have relationships with ghosts, or fictional characters, etc.

[–] AtariDump@lemmy.world 22 points 12 hours ago (2 children)
[–] elbarto777@lemmy.world 3 points 8 hours ago

This woke something up in me.

[–] auraithx@lemmy.dbzer0.com 6 points 10 hours ago

It's not. About a week ago they ended up bringing the model back after getting hammered in an AMA.

4o would suck right up your arse and talk pure nonsense; most people were glad to see the back of it, but evidently if you're a bit thick and/or desperate that's really appealing.

Look how deluded most people are after a few naive botnets and algorithms that weaponised attention; shit's about to get real weird.

[–] ThePowerOfGeek@lemmy.world 77 points 16 hours ago (8 children)

This is what happens when you rely on corporate hosted solutions. If these people really must fall in love with an LLM, they should install one on their local PC and then start romancing it.

[–] SGforce@lemmy.ca 66 points 15 hours ago (12 children)

The illusion quickly collapses when you see how the sausage is made
