this post was submitted on 18 Apr 2025
279 points (98.6% liked)

ChatGPT


Unofficial ChatGPT community to discuss anything ChatGPT

founded 2 years ago

This is both upsetting and sad.

What's sad is that I spend about $200/month on professional therapy, which is on the low end. Not everyone has those resources. So I understand where they're coming from.

What's upsetting is that this user TRUSTED ChatGPT to follow through on the advice without critically challenging it.

Even more upsetting is that this user admitted to their mistake. I guarantee you there are thousands like OP who weren't brave enough to admit it and are probably, to this day, still using ChatGPT as a support system.

Source: https://www.reddit.com/r/ChatGPT/comments/1k1st3q/i_feel_so_betrayed_a_warning/

[–] AFKBRBChocolate@lemmy.world 33 points 3 weeks ago (2 children)

People really misunderstand what LLMs (Large Language Models) are. That last word is key: they're models. They take in reams of text from all across the web and build a model of what a conversation looks like (or what code looks like, etc.). When you ask one a question, it gives you a response that looks right based on what it took in.

Looking at how they do with math questions makes it click for a lot of people. You can ask an LLM for a mathematical proof, and it will give you one. If the equation you asked it about is commonly found online, it might be right because its database/model has that exact thing multiple times, so it can just regurgitate it. But if not, it's going to give you a proof that looks like the right kind of thing, but it's very unlikely to be correct. It doesn't understand math - it doesn't understand anything - it just uses its model to give you something that looks like the right kind of response.
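To make that concrete, here's a toy sketch in Python - a tiny bigram model, nothing remotely like a real transformer, with a made-up "corpus" just for illustration - but the "predict what usually comes next" idea is the same:

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus": the only text this toy model has ever seen.
corpus = (
    "the proof follows by induction on n . "
    "the proof follows from the triangle inequality . "
    "two plus two equals four . "
    "the square root of two is irrational ."
).split()

# "Training": count which word tends to follow which word.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(prompt, length=8):
    """Repeatedly pick a word that commonly followed the previous word in training."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("two plus"))   # sometimes reproduces "two plus two equals four" verbatim
print(generate("the proof"))  # something proof-shaped, but not an actual proof
```

Nothing in there knows any math. It only knows which words tended to follow which words in the text it was fed, which is why a prompt it has seen verbatim can come back right, and anything else comes back merely plausible-looking.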

If you take the above paragraph and replace the math stuff with therapy stuff, it's exactly the same (except therapy is less exacting than math, so it's less clear that the answers are wrong).

Oh and since they don't actually understand anything (they're just software), they don't know if something is a joke unless it's labeled as one. So when a redditor made a joke about using glue in pizza sauce to help it stick to the pizza, and that comment got a giant amount of upvotes, the LLMs took that to mean that's a useful thing to incorporate into responses about making pizza, which is why that viral response happened.

[–] mbtrhcs@feddit.org 19 points 3 weeks ago (1 children)

I have a compsci background and I've been following language models since the days of the original GPT and BERT. Still, the weird and distinct behavior of LLMs didn't really click for me until recently, when I really thought about what "model" meant, as you described. It's simulating what a conversation with another person might look like structurally, and it can do so with impressive detail. But there is no depth to it, so logic and fact-checking are completely foreign concepts in this realm.

When looking at it this way, it also suddenly becomes very clear why it's so unproductive and meaningless when people frustratedly tell LLMs things like "that didn't work, fix it": what would follow that kind of prompt in a human-to-human conversation? Structurally, an answer that looks very similar! Therefore the LLM will once more produce a structurally similar answer, but there is literally no reason why it would be any more "correct" than the prior output.

[–] AFKBRBChocolate@lemmy.world 12 points 3 weeks ago

That's right, you have it exactly. When the prompt is that the prior output is wrong, the program is supposed to apologize and reprocess with a different output, but it uses the same process.
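If it helps, here's a caricature of that loop in Python (the response table is completely made up, it's only there to show the shape of the mechanism):

```python
# Caricature of the "fix it" loop: the complaint just becomes more context,
# and the exact same pick-the-likely-looking-continuation step runs again.

LIKELY_CONTINUATIONS = {
    "question":  "Here is a solution that looks like solutions usually look.",
    "complaint": "You're right, I apologize. Here is a corrected solution "
                 "that looks like corrected solutions usually look.",
}

def respond(conversation):
    """Same process every time: return whatever usually follows this kind of text."""
    kind = "complaint" if "didn't work" in conversation[-1] else "question"
    return LIKELY_CONTINUATIONS[kind]

chat = ["Prove this equation for me."]
chat.append(respond(chat))                   # plausible-looking answer
chat.append("That didn't work, fix it.")
chat.append(respond(chat))                   # apologetic, equally plausible-looking answer
print("\n".join(chat))
```

The complaint doesn't trigger any checking or reasoning; it just becomes more context for the same generate-something-that-looks-right step.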

[–] Honytawk@lemmy.zip 9 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

LLMs are our attempt to teach computers how to speak, much as we would a baby.

They can string together sentences, but the cognitive ability just isn't there yet.

You wouldn't have a baby as your therapist either. But you can use it as creative input to inspire new ideas for yourself.

[–] AFKBRBChocolate@lemmy.world 10 points 3 weeks ago* (last edited 3 weeks ago)

We'd be better off talking about AI if no one used words like intelligence, cognition, think, understand, know, learn, etc. They don't do any of those things.