this post was submitted on 01 Sep 2025
182 points (97.9% liked)

Fuck AI

4015 readers
272 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
MODERATORS
 

I’ll highlight this:

At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to “Soelberg’s mother, his ex-girlfriend, intelligence agencies and an ancient demonic sigil,” according to the Journal.

Soelberg worked in marketing at tech companies like Netscape, Yahoo, and EarthLink, but had been out of work since 2021, according to the newspaper. He divorced in 2018 and moved in with his mother that year. Soelberg reportedly became more unstable in recent years, attempting suicide in 2019, and getting picked up by police for public intoxication and DUI. After a recent DUI in February, Soelberg told the chatbot that the town was out to get him, and ChatGPT allegedly affirmed his delusions, telling him, “This smells like a rigged setup.”

you are viewing a single comment's thread
view the rest of the comments
[–] Stovetop@lemmy.world 11 points 1 week ago* (last edited 1 week ago) (1 children)

Not to mention that just about every model is designed to basically validate everything you say and make up whatever facts it wants to support it. If you tell an LLM that your neighbors are lizard people who want to steal all your copper, it'll agree easily and suggest ways to take matters into your own hands with minimal prodding.

[–] partial_accumen@lemmy.world 0 points 1 week ago (1 children)

Not to mention that just about every model is designed to basically validate everything you say

Except they're not. LLMs are not that smart. They frequently end up doing that but they aren't designed to do it. They only guess the next word in a sentence, then guess the word after that, etc. So if its been fed conspiracy garbage as training data, some of the most probable words or terms in the next sentence will be similar conspiracy garbage words and phrases.

So they aren't designed to do conspiracy stuff, they're just given training data that contains that (along with lots of other unrelated subjects and sources).

and make up whatever facts it wants to support it.

That's a big part of the "generative" of "generative AI". Generative AI is LLMs and AI image generation models. They are made to create something that didn't exist before.

[–] Stovetop@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

What I mean is that the popular LLMs can be fed however much training data is possible to cram in there, but a model like ChatGPT will typically defer to you if you tell it that it's wrong or that you're right. If you present yourself as a meteorological expert and then tell it that they sky is red, for example, it'll agree without much protest.

These models are all built to act like assistants, so the ultimate goal is to make sure the user feels validated and satisfied with the results they provide. It's not that they're designed to do conspiracy stuff, but they will gladly reinforce any paranoias/disinformation when challenged, or simply if they are pushed to do so.