this post was submitted on 28 Aug 2025
489 points (99.8% liked)

Technology

[–] Perspectivist@feddit.uk 4 points 8 hours ago (1 children)

You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.

But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
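To make "simulating it statistically" concrete, here is a toy sketch (nothing like a real LLM, just an illustrative bigram model): each next word is drawn purely from frequencies observed in a tiny corpus, with no representation of meaning at all. The corpus and all names here are made up for illustration.

```python
import random
from collections import defaultdict

# Toy illustration, NOT an LLM: a bigram model that picks each next
# word purely from observed co-occurrence counts, with no notion of
# meaning, concepts, or reference to the world.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)  # duplicates encode frequency

def generate(start, length, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in counts:
            break  # dead end: word never seen with a successor
        word = random.choice(counts[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

The output is locally fluent (every pair of adjacent words occurred in the corpus) yet tied to nothing in reality — the same failure mode, at a vastly smaller scale, as grammatical-but-nonsense GPT output.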

So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.

[–] iglou@programming.dev -1 points 6 hours ago

Of course the "understanding" of an LLM is limited. The technology is new, and it's nowhere near being able to understand at the level of a human.

But I disagree with your understanding of how an LLM works. At its lowest level, it's a bunch of connected artificial neurons, not that different from a human brain. Now please don't read this as me saying it's as good as a human brain. It's definitely not, but its inner workings are not so far off. As a matter of fact, there is active effort to make artificial neurons behave as closely as possible to human neurons.
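For reference, a single artificial neuron of the kind being described is just a weighted sum plus a nonlinearity. A minimal sketch (weights here are arbitrary illustrative values, not trained ones):

```python
import math

# Minimal artificial neuron: weighted sum of inputs plus a bias,
# passed through a sigmoid activation. Weights/bias are illustrative
# placeholder values, not learned parameters.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)

out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(out)  # a value strictly between 0 and 1
```

An LLM stacks billions of these, with the weights set by training rather than by hand — which is exactly why inspecting the trained weights tells you so little about what any individual one "does".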

If it were just statistics, it wouldn't be so difficult to look at a trained model and identify what does what. But just like with the human brain, understanding that is incredibly difficult. We only have a general idea.

So it does understand, to a limited extent. Just like a human, it won't understand what it hasn't been exposed to. And unlike a human, it is exposed to a very limited set of data.

You're locating the difference between a human's "understanding" and an LLM's "understanding" in the meaning of the word itself, which is just a shortcut for saying they can't be compared. The actual difference is in the scope of understanding.

A lot of effort in the AI field gravitates toward imitating the human brain. That makes sense, as it's the only thing we know of that can do what we want an AI to do. LLMs are no different, but their scope is limited.