[–] Laser@feddit.org 2 points 1 week ago

It's a weird case. As the paper says, this behavior is inherent to LLMs. They have no concept of true and false; they produce probabilistic word streams. So is producing an untrue statement an error? Not really. Given these inputs (training data, model parameters, and query), the output is correct. But it's also definitely not a "hallucination"; that's a disingenuous, bogus term.
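For anyone who wants the "probabilistic word stream" point made concrete, here's a minimal toy sketch in Python. The probability table is hand-made and hypothetical, nothing like a real model's weights; the point is only that sampling from a conditional distribution has no truth predicate anywhere in the loop:

```python
import random

# Toy stand-in for a language model: a map from context (last two tokens)
# to a probability distribution over next tokens. Hypothetical values,
# not taken from any real model.
NEXT_TOKEN_PROBS = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"france": 0.6, "atlantis": 0.4},  # a falsehood is just another token with probability mass
    ("of", "france"): {"is": 1.0},
    ("of", "atlantis"): {"is": 1.0},
    ("france", "is"): {"paris": 0.95, "lyon": 0.05},
    ("atlantis", "is"): {"poseidonia": 1.0},
}

def sample_stream(context, steps=4):
    """Each step draws a token from the conditional distribution given
    the last two tokens. No step ever checks whether the sentence is true."""
    tokens = list(context)
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(sample_stream(["the", "capital"]))
# e.g. "the capital of atlantis is poseidonia": fluent, on-distribution,
# and false, without any error occurring in the sampler.
```

When the sampler emits the false sentence, it has done its job perfectly; "correct" here means faithful to the distribution, not faithful to the world.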

The problem, however, is that we pretend these probabilistic language approaches are somehow a general fit for the problems they're put in place to solve.

[–] aesthelete@lemmy.world 3 points 1 week ago

If the system (regardless of the underlying architecture and technical components) is intended to produce a correct result, and instead produces something that is absurdly incorrect, that is an error.

Our knowledge of how the system works, or of its inherent design flaws, does nothing to alter that basic definition, in my opinion.