this post was submitted on 16 Oct 2025
69 points (97.3% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

all 29 comments
[–] MourningDove@lemmy.zip 1 points 9 minutes ago

There’s a thing called “AI Boyfriend”? That’s fucking embarrassing.

[–] corbin@awful.systems 12 points 1 day ago (2 children)

I tried to substantiate the claim that multiple users from that subreddit are self-hosting. Reading the top 120 submissions, I did find several folks moving to Grok (1, 2, 3) and Mistral's Le Chat (1, 2, 3). Of those, only the last two appear to actually have discussion about self-hosting; they are discussing Mistral's open models like Mistral-7B-Instruct, which indeed can be run locally. For comparison, I also checked the subreddit /r/LocalLLaMA, the biggest subreddit for self-hosting language models with tools like llama.cpp or Ollama; there are zero cross-posts from /r/MyBoyfriendIsAI, and no posts clearly about AI boyfriends, in its top 120 submissions. That is, I found no posts that combine tools like llama.cpp or Ollama and models like Mistral-7B-Instruct into a single build-your-own-AI-boyfriend guide. Amusingly, one post gives instructions for how to ask ChatGPT about how to set up Ollama.
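(For anyone wondering what "self-hosting" would even mean here: a minimal sketch, assuming a local Ollama daemon on its default port 11434 with Mistral-7B-Instruct pulled as `ollama pull mistral`. The model name and prompt are illustrative, not from any actual /r/MyBoyfriendIsAI post.)

```python
import json
from urllib import request

# Illustrative request to Ollama's local generate endpoint; no cloud,
# no Sam, just a JSON POST to your own machine.
payload = {
    "model": "mistral",  # Mistral-7B-Instruct's tag in Ollama's library
    "prompt": "Write me a good-morning text.",
    "stream": False,     # ask for a single JSON reply, not a token stream
}
req = request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With Ollama actually running, you'd uncomment:
# resp = json.load(request.urlopen(req))
# print(resp["response"])
```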

Also, I did find multiple gay and lesbian folks; this is not a sub solely for women or heterosexuals. Not that any of our regular commenters were being jerks about this, but it's worth noting.

What's more interesting to me are the emergent beliefs and descriptors in this community. They have a concept of "being rerouted"; they see prompted agents as a sort of nexus of interconnected components, and the "routing" between those components controls the bot's personality. Similarly, they see interactions with OpenAI's safety guardrails as interactions with a safety personality, and some users have come to prefer it over the personality generated by ChatGPT-4o or ChatGPT-5. Finally, I notice that many folks talk about bot personalities as portable between totally different models and chat products, which is not a real thing; it seems like users are overly focused on specific memorialized events which linger in the chat interface's history, and the presence of those events along with a "you are my perfect boyfriend" sort of prompt is enough to ~~trigger a delusional episode~~ summon the perfect boyfriend for a lovely evening.

(There's some remarkable bertology in there, too. One woman's got a girlfriend chatbot fairly deep into a degenerated distribution such that most of its emitted tokens are asterisks, but because of the Markdown rendering in the chatbot interface, the bot appears to shift between italic and bold text and most asterisks aren't rendered. It's a cool example of a productive low-energy distribution.)
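(To see why most of those asterisks vanish on screen: Markdown emphasis consumes paired asterisks during rendering. A toy sketch of just that subset of Markdown, nothing like a full parser:)

```python
import re

def render_emphasis(text: str) -> str:
    # Toy subset of Markdown emphasis: **x** becomes bold, *x* becomes
    # italic, and the asterisk delimiters themselves disappear.
    text = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", text)
    text = re.sub(r"\*(.+?)\*", r"<i>\1</i>", text)
    return text

# A degenerate, asterisk-heavy token stream like the one described:
raw = "*leans* **closer** *smiles*"
print(render_emphasis(raw))  # <i>leans</i> <b>closer</b> <i>smiles</i>
print(raw.count("*"), render_emphasis(raw).count("*"))  # 8 0
```

So a bot stuck emitting mostly asterisks looks, in the chat UI, like it's dramatically shifting between italics and bold.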

[–] dgerard@awful.systems 2 points 4 hours ago* (last edited 4 hours ago)

I did see at least a few, which is why I said that, funnily enough.

[–] mistermodal@lemmy.ml 3 points 7 hours ago (1 children)

I like how you are doing anti-disinformation-style subreddit analysis but it's solely to figure out how people are trying to fuck their computer.

[–] SoftestSapphic@lemmy.world 3 points 6 hours ago

The detail, the dedication

Imagine the problems we would solve if these types of people were given government grants 😭

[–] sleepundertheleaves@infosec.pub 18 points 1 day ago* (last edited 1 day ago)

God, this is starting to remind me of the opioid crisis. Big business gets users addicted to its product, gets too much bad press over it, cuts the addicts off, so the addicts turn to more dangerous sources to get their fix.

I suspect we're going to see not just more suicides but more "lone wolf" attacks as mentally unstable people self-radicalize with guardrail-free self-hosted AI.

And I hope AI psychosis does less damage to the country than opioid addiction has.

[–] xxce2AAb@feddit.dk 23 points 1 day ago (2 children)

This... is not going to end well. For anybody.

[–] gerikson@awful.systems 18 points 1 day ago (1 children)

Except the people selling expensive PCs.

[–] xxce2AAb@feddit.dk 9 points 1 day ago

Initially, yeah.

[–] Tollana1234567@lemmy.today 4 points 21 hours ago (1 children)

It's the Futurama episode where the guy falls in love with a robot and stays with it for the rest of his life.

[–] AntiBullyRanger@ani.social 5 points 22 hours ago

“Supertoys Last All Summer Long” was an installation. Brian Aldiss wasn't prophesying, he was witnessing the preparation.

[–] TropicalDingdong@lemmy.world 13 points 1 day ago* (last edited 1 day ago)

Just like...

This feels like one of those runaway feedback loops, where like, if you start down the slippery slope of non-stop positive reinforcement and validation of every behavior from a chatbot... you are going to hit hard maladaptive behavior fast.

[–] BlueMonday1984@awful.systems 8 points 1 day ago

I tried to come up with some kind of fucked-up joke to take the edge off, but I can't think up anything good. What the actual fuck.

[–] bdonvr@thelemmy.club 3 points 22 hours ago

Great example of "better doesn't mean good"

[–] fullsquare@awful.systems 7 points 1 day ago

MyBoyfriendIsAI are non-techies who are learning about computers from scratch, just so Sam can’t rug-pull them.

Buterin jumpscare