this post was submitted on 10 Jun 2025
74 points (94.0% liked)
Programming
you are viewing a single comment's thread
The burden of proof is on those who say that LLMs do think.
I asked for your definition; I cannot prove something if we do not agree on a definition first.
You also misread what I said: I did not say AI was thinking.
The burden of proof is on the one who makes a claim.
I'm not the one making a claim that even field experts can't answer.
But depending on your definition of thinking, some of these questions can be answered.
I don't think y'all are disagreeing, but maybe this sentence is somewhat confusing:
Maybe the "doesnt" shouldn't be there.
No, it is there because that's what they claim.
Nobody yet knows how it works; we don't know how LLMs process information.
Anyone who claims it really thinks, or that it isn't thinking, is stating a belief; this is not something the current ML field knows.
Well, the neural network is given a prefix (a series of tokens) and a token, and it spits out how likely it is that the token follows the prefix. Text is generated by calculating this probability for all known tokens, then picking one at random, weighted by the calculated probabilities.
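Roughly, as a Python sketch (`model_probs` just stands in for the trained network here, it's not any real library's API):

```python
import random

# Toy sketch of the generation loop described above; `model_probs`
# stands in for the trained network, which scores how likely a given
# token is to follow the current prefix.
def generate(model_probs, prefix, vocab, n_new_tokens):
    tokens = list(prefix)
    for _ in range(n_new_tokens):
        # Score every known token against the current prefix.
        weights = [model_probs(tokens, tok) for tok in vocab]
        # Pick one at random, weighted by those scores.
        tokens.append(random.choices(vocab, weights=weights, k=1)[0])
    return tokens
```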
And the brain is made out of neurons that send electrical signals to each other and operate muscles.
That doesn't explain how the brain thinks.
It allows us to conclude that an LLM doesn't "think" about what it is saying. Based on the mechanics, the LLM doesn't even know it's a participant in the conversation.
By that logic we also conclude that the human brain doesn't "think" about what it is saying.
That does not follow. I can't speak for you, but I can tell if I'm involved in a conversation or not.
Consciousness may be an illusion born from the capacity for self-reflection.
Also, as I showed before, you may act before consciously making the decision to do so.
https://en.m.wikipedia.org/wiki/Neuroscience_of_free_will
These studies, along with the one presented by CGP Grey, indicate that maybe we do things first and then come up with a reasonable explanation afterwards.
And how do you know LLMs can't tell that they are involved in a conversation?
Unless you think there is something non-computational in the human brain, then you must accept that computers are - in theory - capable of thinking. With the right software and sufficiently powerful hardware.
Given that truth (which I think you can only avoid through religion or quantum quackery), you can't just say "it's only maths; it can't be thinking" because we know that maths can think.
Do LLMs "think"? The definition of "think" is wooly enough and we understand them little enough that it's quite an assertion to say that they definitely don't.
It has no memory, for one. What makes you think that it knows it's in a conversation?
It has very short-term memory in the form of its token context, especially with something like Meta's Coconut.
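Conceptually it's just a rolling window, something like this (the window size here is made up; real models hold thousands of tokens):

```python
MAX_CONTEXT = 8  # made-up window size; real models hold thousands of tokens

def visible_context(conversation_tokens):
    # The model only ever "remembers" what still fits in the window;
    # anything older has simply fallen out of view.
    return conversation_tokens[-MAX_CONTEXT:]
```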
I don't really. Yet. But I also don't think that it is fundamentally impossible for LLMs to think, like you seem to. I also don't think the definition of the word "think" is so narrow that it requires that level of self-awareness. Do you think a mouse is really aware it is a mouse? What about a spider?
How did you conclude that from these two messages?