[–] Carrot@lemmy.today 2 points 1 month ago (1 children)

I've gone down the recursive-definitions rabbit hole, and while it's way too much to chart out here, the word that terms like "intelligence" and "thought" all eventually point to is "sentience." And while the definition of sentience ends up being largely circular as well, we've at least reached a term that has been used to write modern legislation, is grounded in actual scientific study, and is widely accepted as provable, a bar that LLMs don't meet. One of the most common tools for determining sentience is distinguishing reactionary actions from complex ones.

I disagree that an if/else statement is the most basic element of intelligence; I was just playing into your hypothetical. An if/else is purely reactionary, which doesn't actually give any sign of intelligence at all. A dead, decapitated snake head will still bite something that enters its mouth, but there's no intelligent choice behind it; it's also purely reactionary.
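To make that concrete, here's a minimal sketch (the names are just illustrative, a hypothetical stand-in for the snake's reflex) of what a purely reactionary if/else looks like:

```python
# A purely reactionary "reflex": maps a stimulus straight to a response,
# with no state, memory, or choice involved anywhere.
def reflex(stimulus_in_mouth: bool) -> str:
    if stimulus_in_mouth:
        return "bite"
    else:
        return "do nothing"

print(reflex(True))  # "bite", every time, regardless of anything else
```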

I also think that a bit is information in the same way that Shakespeare's complete works are information, just at a much smaller scale. A bit on its own means nothing, but give it context, say, "a 1 means yes and a 0 means no to the question 'are you a republican?'", and it actually contains a fair amount of information. Same with Shakespeare. Without the context of culture, emotion, and a language to describe those things, Shakespeare's works are just as useless as a bit without context.
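As a toy illustration of that point (using the question from my example), the same bit carries nothing until the context arrives alongside it:

```python
bit = 1  # on its own, just a number with no meaning

# The context is what turns the bit into information.
context = {"question": "Are you a republican?", 0: "no", 1: "yes"}
print(f"{context['question']} -> {context[bit]}")  # Are you a republican? -> yes
```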

I read the study you listed, and I disagree both that "having a world model" is a good definition of awareness/consciousness and that the paper proves LLMs have a world model at all. To be clear, I have taken multiple university classes on ANNs (LLMs weren't a thing when I was in university) and multiple classes (put on by my employer) on LLMs, so I'm pretty familiar with how they work under the hood.

Whether they are trained to win or trained on what a valid move looks like, the data they use to store that training looks the same: some number of chained, weighted connections that represent what the next best token(s) (or in OthelloGPT's case, the next valid move) might look like. This is not intelligence; this is, again, reactionary. OthelloGPT was trained on 20,000,000+ games of Othello, all of which contained only valid moves. Of course that means OthelloGPT will have weighted connections that more often lead to valid moves. Yes, it will hallucinate when given a scenario it hasn't seen before, and that hallucination will almost always look like a valid move, because that's all it has been trained on. (I say "almost" because it still made invalid moves, although rarely.)

I don't even think this proves it has a world model; it just knows which weights lead to a valid move in Othello, given a prior list of moves. The probes they use are simply modifying the character sequence that OthelloGPT is responding to, or the weights used to determine which token comes next. There is no concept of "board state" being remembered; as with all LLMs, it simply returns the most likely next token sequence to follow the previously given token sequence, which in this case is the list of previous moves.
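For what it's worth, here's a deliberately crude sketch of what I mean by "returns the most likely next token given prior moves." It is not OthelloGPT's actual code (a transformer generalizes through learned weights rather than exact prefix lookups, and the games here are made up), but the inference step has the same shape:

```python
from collections import Counter, defaultdict

# Toy stand-in for what "training on valid games" leaves behind: frequency
# counts of which move followed each prefix. A real LLM stores this as
# learned weights instead of counts, but inference is the same idea:
# score the candidate next tokens and return the top one.
training_games = [["d3", "c5", "d6"], ["d3", "c3", "c4"], ["d3", "c5", "b4"]]

followers = defaultdict(Counter)
for game in training_games:
    for i in range(len(game) - 1):
        followers[tuple(game[:i + 1])][game[i + 1]] += 1

def next_move(moves_so_far):
    """Return the highest-scoring continuation of the move sequence so far.
    No board state is built or consulted anywhere."""
    candidates = followers[tuple(moves_so_far)]
    return candidates.most_common(1)[0][0] if candidates else None

print(next_move(["d3"]))        # "c5", the most common continuation seen
print(next_move(["f5", "e6"]))  # None: unseen prefix, nothing to react to
```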

Reactionary actions aren't enough to tell whether something is capable of thought. As with my snake example above, there are many things, living and not, that act without thought, just simple reactions to outside stimulus. LLMs are in this category, as they are incapable of intelligence. They can only regurgitate the response most likely to follow the question, nothing more. There is no path for modern LLMs to become intelligent, because the way they are trained fundamentally doesn't lead to intelligence.

[–] m_f@discuss.online 1 points 1 month ago (1 children)

Do you have links about using sentience and/or reactionary behavior as the base definition? I can't really find a definition of sentience that doesn't end up circular again, but I'd be interested in reading more. Whether something is reactionary or not seems orthogonal to intelligence to me. A black box that does nothing until prompted with something like "Solve the Collatz conjecture, please" seems plenty intelligent.

[–] Carrot@lemmy.today 2 points 1 month ago

I picked sentience as the culmination of the definitions of intelligence, awareness, etc., since those definitions all end up circling back to it, and it has a concrete definition that society and science widely accept as provable.

I would argue otherwise. A black box for which I have coded an algorithm to prove the Collatz conjecture actually has no intelligence whatsoever: it doesn't do anything intelligent, it just runs through a set of steps, completely without awareness that it is doing anything at all. It may seem intelligent to you because you don't understand what it does, but at the end of the day it just follows instructions.
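As a sketch of the kind of box I mean (this one only checks numbers one at a time rather than proving anything, but the mechanical character is the point):

```python
def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """Mechanically apply the Collatz rule until hitting 1 (or giving up).
    The program has no idea what a conjecture is; it just follows the steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# "Verifies" the conjecture for a range of numbers without understanding anything.
print(all(reaches_one(n) for n in range(1, 100_000)))  # True
```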

I wouldn't call the snake head responding to stimulus intelligent, as it isn't using any form of thought to react; it's purely mechanical. In the same way, a program written to solve a problem is mechanical: the program itself isn't really solving anything, it simply runs through, or reacts to, a set of given instructions.