this post was submitted on 09 Jun 2025
140 points (97.9% liked)
Fuck AI
Humans clearly aren't just probability engines.
Saying 'clearly' in this context is a thought-terminating expression, not reasoning.
Okay, but LLMs don't have thoughts that can be terminated, so that's just another way they aren't intelligent. Saying "clearly" for them would just be a way to continue the pattern; they wouldn't use it the way I did, to express how self-evident and insultingly obvious it is.
AI isn't impossible, but LLMs are not intelligent and you need to stop dehumanizing yourself to argue for their intelligence.
So? As you said, nothing says they couldn't eventually be part of an intelligence, but the reasoning presented in the article is basically just 'they're made of math, so they could never be intelligent'.
You need to stop limiting yourself to thinking that all intelligence worthy of consideration has to be exactly like human intelligence. That's literally one of the core lessons of Star Trek and basically every single BBC documentary. Are LLMs intelligent? No. Could we make synthetic intelligence worthy of consideration? All evidence points to eventually yes.
The article is about LLMs specifically? And it's arguing that intelligence can't exist without subjectivity, the qualia of experiential data. These LLM text generators are being assigned intelligence they do not have because we have a tendency to assume there is a mind behind the text.
This is not about AI being conceptually impossible because it's "made of math". I'm not even sure where you got that. Where did that quote come from? It's not in the link, or in the Atlantic article.
It's the last line quoted in the post. They talk a lot of fancy talk up front, but their entire reasoning for LLMs not being capable of thought boils down to the claim that they're statistical probability machines.
So is the process of human thought.
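To make concrete what "probability machine" means here: the output step of an LLM is literally just sampling the next token from a probability distribution. A toy sketch in Python (the vocabulary and logits are made up for illustration; a real model computes them from billions of parameters):

```python
import math
import random

# Hypothetical vocabulary and raw scores, purely for illustration.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.1, 0.3, 1.4, -0.5]  # a real model derives these from context

def softmax(xs, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```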
This line?
Because that sure isn't the process of human thought! We have reasoning, logical deduction, experiential qualia, subjectivity. Intelligence is so much more than just making statistically informed guesses; we can actually prove things and uncover truths.
You're dehumanizing yourself by comparing yourself to a chatbot. Stop that.
Are you sure you're not talking to a chatbot?
Yes, and newer models aren't just raw LLMs; they're specifically designed to reason and deduce, and they chain LLMs together with other types of models.
It's not dehumanizing to recognize that alien intelligence could exist, and it's not dehumanizing to think that we are capable of building synthetic intelligence.
I feel you're wasting your time here. Some people seem to be under the impression that it's the year 1990 or 1950 and we're talking about Markov-chain chatbots. The stochastic parrot argument would certainly apply there. But we're talking about something else here.
And it's also a fairly common misconception that AI somehow has to be intelligent in the same way a human is, using the same methods. But it really doesn't work that way. That's why we put the word "Artificial" in front of "Intelligence".
But this take gets repeated over and over again, and I don't really know why we need to argue about how maths and statistics are part of our world, how language and perception work, and who is dehumanizing themselves. The scientific approach is to define intelligence, come up with some means of measuring it, and then measure it. And that's what we've done. We can set aside the perception part of language. We can measure how well "intelligent" entities memorize and recall facts, combine them, and transfer and apply knowledge. That's not really a secret. Obviously it gets misunderstood or hyped by lots of people, but we also (in theory) know some facts about AI, what it can and cannot do, and how that relates to the vague concept of intelligence.
Given the inherently simplistic nature of a community called 'fuck ai', I assume what I'm saying will be unpopular, but there's always some people genuinely open to reason and rational discussion.
Likewise, reducing humanity to "probability gadgets" is dehumanizing.
So what do you think we run on? Magic and souls?
It's called understanding science and biology. When you drill down, there's nothing down there that isn't physical.
If that's the case, there's no reason it couldn't theoretically be modelled and simulated.
This would be like having all the technical workings of nuclear bombs published and, rather than focusing on their resulting harms and misuses, sticking your head in the sand and saying 'nuh uh, no way an atom can make a big explosion, don't you know how small atoms are?'
I think that if the human mind were a simple "probability gadget", then we'd have discovered the algorithm of consciousness and implemented it in human-level AI 30 years ago.
And you're basing that on all the LLMs that existed 30 years ago?
I'm basing that on the amount of compute power available then.
The article posits that LLMs are just fancy probability machines, which is what I was responding to. I'm positing that human intelligence, while more advanced than current LLMs, is still just a probability machine, just a more advanced one.
So why would you think that a human-level intelligence would have existed 30 years ago if LLMs couldn't?
The problem with your line of reasoning is that "probability machines" are Turing-complete, and could therefore be used to emulate any computable process. The statement is literally equivalent to "the mind is a computer", which is itself a thought-terminating cliché that ignores the actual complexities involved.
Nobody's arguing that simulated or emulated consciousness isn't possible, just that if it were as simple as you're making it out to be, then we'd have figured it out decades ago.
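To make the Turing-completeness point concrete: a "probability machine" whose distributions are all 0-or-1 just is an ordinary deterministic program, so describing something as a probability machine rules nothing out computationally. A toy sketch in Python (hypothetical, purely for illustration):

```python
import random

# A "probability machine" with one-hot weights degenerates into an ordinary
# deterministic automaton. This one computes the parity of a bit string,
# i.e., a plain computation expressed as repeated sampling.
transitions = {
    ("even", "0"): {"even": 1.0, "odd": 0.0},
    ("even", "1"): {"even": 0.0, "odd": 1.0},
    ("odd", "0"):  {"even": 0.0, "odd": 1.0},
    ("odd", "1"):  {"even": 1.0, "odd": 0.0},
}

def run(bits: str) -> str:
    state = "even"
    for bit in bits:
        dist = transitions[(state, bit)]
        # Sampling from a one-hot distribution is deterministic.
        state = random.choices(list(dist), weights=list(dist.values()), k=1)[0]
    return state

print(run("1101"))  # -> "odd" (three 1s)
```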
But I'm not. I have literally stated in every comment that human intelligence is more advanced than LLMs, but that both are just statistical machines.
There's literally no reason to think that would have been possible decades ago based on this line of reasoning.
Again, literally all machines can be expressed in the form of statistics.
You might as well just be saying that both LLMs and human intelligence exist, because that's all that can be concluded from the equivalence you're trying to draw.
You should read up on modern philosophy. P-zombies and stuff like that. Very interesting.