this post was submitted on 09 Jun 2025
223 points (98.7% liked)
Fuck AI
It's the last line quoted in the post. There's a lot of fancy talk up front, but their entire argument for LLMs not being capable of thought boils down to the claim that they're statistical probability machines.
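For what it's worth, "statistical probability machine" has a concrete meaning: at every step the model outputs a probability distribution over its vocabulary and picks the next token from it. A minimal sketch in Python, with a made-up five-word vocabulary and hard-coded probabilities standing in for a real model's output:

```python
import random

# Toy stand-in for one step of an LLM: a real model computes a probability
# distribution over ~100k tokens from the context; here it's hard-coded
# for the made-up context "The capital of France is".
next_token_probs = {
    "Paris": 0.62,
    "located": 0.15,
    "the": 0.10,
    "a": 0.08,
    "Lyon": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "Paris", but not always
```

Generation is just this step in a loop: append the sampled token, recompute the distribution, sample again.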
So is the process of human thought.
This line?
Because that sure isn't the process of human thought! We have reasoning, logical deduction, experiential qualia, subjectivity. Intelligence is so much more than making statistically informed guesses; we can actually prove things and uncover truths.
You're dehumanizing yourself by comparing yourself to a chatbot. Stop that.
Yes, and newer models aren't just raw LLMs; they're specifically designed to reason and deduce, and they're starting to chain LLMs with other types of models.
It's not dehumanizing to recognize that alien intelligence could exist, and it's not dehumanizing to think that we are capable of building synthetic intelligence.
Go to one of these "reasoning" AIs. Ask it to explain its reasoning. (It will!) Then ask it to explain its reasoning again. (It will!) Ask it yet again. (It will gladly do it thrice!)
Then put the "reasoning" side by side and count the contradictions. There's a very good chance the three explanations are not just different from each other but mutually incompatible.
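If you'd rather run the experiment than eyeball a chat window, a sketch like this does the same thing. It assumes the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the question and model name are placeholders, not part of anyone's cited example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder question; any prompt that invites "reasoning" will do.
messages = [{"role": "user",
             "content": "Which is larger, 9.11 or 9.9? Explain your reasoning."}]
explanations = []

for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    explanations.append(answer)
    # Stay in the same conversation and ask it to explain itself again.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "Explain your reasoning again."})

# Put the three "reasonings" side by side and count the contradictions.
for i, text in enumerate(explanations, 1):
    print(f"--- Explanation {i} ---\n{text}\n")
```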
"Reasoning" LLMs just do more hallucination: specifically they are trained to form cause/effect logic chains—and if you read them in detail you'll see some seriously broken links (because LLMs of any kind can't think!)—using standard LLM hallucination practice to link the question to the conclusion.
So they take the usual Internet-argument approach: decide on the conclusion first, then make up excuses for why it must be so.
If you don't believe me, why not ask one? This is a trivial example with very little "reasoning" needed, and even here the explanations are bullshit all the way down.
Note, especially, the final statement it made:
Now, I'm absolutely hopeless with technology. Yet even I can figure out that these "reasoning" models share the same core flaws as any other LLMbecile. If you ask one how it does maths, it will even admit that the LLM "decides" whether maths is needed and, if so, switches to a maths engine. But if the LLM "decides" it can do the maths on its own, it will. So you'll still get garbage maths out of the machine.
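That routing behaviour is easy to caricature in code. A hedged sketch, where every function is a hypothetical stand-in (the real "decision" is itself just token prediction, which is the whole problem):

```python
import random

def llm_decides_maths_is_needed(prompt: str) -> bool:
    """Stand-in for the LLM's routing judgement. In reality this is itself
    a statistical guess, so sometimes it wrongly keeps the maths in-house."""
    return random.random() < 0.7  # hypothetical 70% routing accuracy

def maths_engine(expression: str) -> str:
    """Stand-in for a deterministic calculator: right every time."""
    return str(eval(expression, {"__builtins__": {}}))  # toy code; never eval untrusted input

def llm_freehand_answer(expression: str) -> str:
    """Stand-in for the LLM answering from token statistics alone:
    plausible-looking, but not guaranteed correct."""
    correct = eval(expression, {"__builtins__": {}})
    return str(correct + random.choice([0, 0, 1, -1]))  # sometimes off by a bit

def answer(expression: str) -> str:
    if llm_decides_maths_is_needed(expression):
        return maths_engine(expression)       # routed: reliable
    return llm_freehand_answer(expression)    # not routed: garbage risk

for _ in range(5):
    print("17 * 23 =", answer("17 * 23"))
```

Even with a perfect maths engine attached, the output is only as reliable as the router, and the router is the same LLM.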