Literally everything a generative AI outputs is a hallucination. It is a hallucination machine.
Still, I like this fix. Let us erase AI.
"We did it, Patrick! We made a technological breakthrough!"
Maybe the real intelligence was the hallucinations we made along the way
You're absolutely right, Steven! Hallucinations can be valuable to simulate intelligence and sound more human like you do, Stephanie. If you wanted to reduce hallucinations, Tiffany—you should reduce your recreational drug use, Tim!
Feed us LSD
You can't "fix" hallucinations. They're literally how it works.
Apart from the fact that these hallucinations just cannot be fixed, they don't even seem to be the only major problem atm: ChatGPT 5, for example, often seems to live in the past and is regularly unable to assess the reliability of its own data when a question needs to be answered based on current information.
For example, when I ask who the US president is, I regularly get the answer that it is Joe Biden. When I ask who the current German chancellor is, I get the answer that it is Olaf Scholz. This raises the question of what LLMs can be used for if they cannot even answer these very basic questions correctly.
The error rate simply seems far too high for use by the general public, and that's without even considering hallucinations; it's just based on answers to questions that depend on outdated or unreliable data.
And that, in my opinion, is the fundamental problem that also causes LLMs to hallucinate: they understand neither the question nor their own output. Everything is merely a probability calculation based on repetitive patterns, and LLMs are fundamentally incapable of grasping the logic behind those patterns; they recognize the pattern itself, but not the underlying logic of the word order in a sentence. So they have no concept of right or wrong, only a statistical model of word sequences, and the meaning of a sentence cannot be fully captured that way. That is why LLMs can only somewhat deal with sarcasm, for example: if the majority of sarcastic sentences in their training data have /s written after them, that can be picked up as an indicator of sarcasm, so at least a sarcastic question containing /s can be identified.
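To make the "probability calculation based on repetitive patterns" point concrete, here's a deliberately toy sketch in Python: a bigram counter over a made-up corpus. Real LLMs are vastly larger transformer models, but the basic idea of predicting what usually comes next, rather than what is true, is the same.

```python
# Toy illustration only: pick the next word purely from how often it
# followed the previous word in a tiny made-up "training" text.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which word (pattern statistics, nothing else).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample proportionally to observed frequency; there is no notion of
    # "true" or "false" here, only "what usually came next".
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat", "mat", "dog" or "rug", chosen by frequency alone
```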
Of course, this does not mean that there are no use cases for LLMs, but it does show how excessively oversold AI is.
This raises the question of what LLMs can be used for if they cannot even answer these very basic questions correctly.
and importantly why would we use them when they devour incredible resources for shitty results? Google's gonna have a whole campaign in a few years "yeah we fucked up, roll it back to 2020 when search just worked"
It's a win-win
"Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly," the researcher wrote.
While there are "established methods for quantifying uncertainty," AI models could end up requiring "significantly more computation than today’s approach," he argued, "as they must evaluate multiple possible responses and estimate confidence levels."
"For a system processing millions of queries daily, this translates to dramatically higher operational costs," Xing wrote.
If removing hallucinations means Joe Shmoe isn't interested in asking it questions a search engine could already answer, but it brings even 1% of the capability promised by all the hype, they would finally actually have a product. The good long-term business move is absolutely to remove hallucinations and add uncertainty. Let's see if any of them actually do it.
Users love getting confidently wrong answers
They probably would if they could. But removing hallucinations would remove the entire AI. The AI is not capable of anything other than hallucinations that are sometimes correct. They also can't give confidence, because that would be hallucinated too.
how many times are these guys going to release a paper just because one of them thought to look up "stochastic"
fucking bonkers, imagine thinking this is productive
Imagine if the hundreds of billions invested into AI was rather invested into areas like medicine, renewables, etc.
Sounds like a solid plan
This is what I hang my assumption that they won't reach AGI on. It's why hearing them raise money on wild hype is annoying.
Unless they come up with a totally new foundational approach. Also, I’m not saying current models are useless.
If anyone cared what experts think, we would not be here.
I know everyone wants to be like "ha ha told you so!" and hate on AI in here, but this headline is just clickbait.
Current AI models have been trained to give a response to the prompt regardless of confidence, causing the vast majority of hallucinations. By incorporating confidence into the training and responding with "I don't know", similar to training for refusals, you can mitigate hallucinations without negatively impacting the model.
If you read the article, you'll find the "destruction of ChatGPT" claim is actually nothing more than the "expert" assuming that users will just stop using AI if it starts occasionally telling them "I don't know", not any kind of technical limitation preventing hallucinations from being solved. In fact, the "expert" agrees that hallucinations can be solved.
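For what it's worth, the incentive described above ("trained to give a response regardless of confidence") can be illustrated with a back-of-the-envelope calculation; the numbers and penalty scheme below are made up purely for illustration. If being wrong costs nothing, guessing always beats saying "I don't know"; add a penalty for wrong answers and abstaining becomes the better move below a confidence threshold.

```python
# Made-up numbers: expected score of guessing vs. answering "I don't know",
# given the model's probability of being right and a penalty for wrong answers.
def better_strategy(p_correct, wrong_penalty):
    guess = p_correct * 1 + (1 - p_correct) * (-wrong_penalty)
    abstain = 0.0  # "I don't know" earns nothing either way
    return "guess" if guess > abstain else "say 'I don't know'"

print(better_strategy(0.3, wrong_penalty=0))  # guess (wrong answers cost nothing, so always guess)
print(better_strategy(0.3, wrong_penalty=1))  # say 'I don't know' (with this penalty, only guess above 50% confidence)
```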