Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

fidodo@lemm.ee 6 points 1 year ago

I think about it by building up to the technology.

Is a book sentient? It can provide recorded knowledge, as a sequence of symbols, on a specific subject at a level of proficiency far above the reader's. But no, it's static information that originated from a human.

Is a library sentient? It allows for systematic retrieval of knowledge on a vast range of subjects, far beyond what any human is capable of knowing. But no, it's just a static categorization of documents curated by a human.

Is a search engine sentient? It allows for automatic retrieval of highly relevant knowledge based on a query from a human. But no, it's just token-based pattern matching to find similar documents.
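
To make "token-based pattern matching" concrete, here's a minimal sketch in Python (my own toy illustration, not how any real engine is implemented; production search uses inverted indexes and ranking functions like BM25, but the core idea is still overlap between query tokens and document tokens):

```python
def overlap_score(query: str, document: str) -> int:
    """Token-based matching: count the tokens the query and document share."""
    return len(set(query.lower().split()) & set(document.lower().split()))

documents = [
    "how to train a dog",
    "history of the printing press",
    "dog training tips and tricks",
]

query = "train my dog"
# Retrieve the document sharing the most tokens with the query.
print(max(documents, key=lambda doc: overlap_score(query, doc)))
# -> "how to train a dog"
```

There's no understanding of dogs or training anywhere in there, just symbol overlap.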

So why would an LLM suddenly be sentient? It's able to produce highly relevant sequences of words from recorded knowledge, specifically tailored to the words around them. But no, it's just a probability engine that finds the token sequences most likely to fit the surrounding context.

The underlying mechanism simply has no world view, no mental model of the world around it. It's basically a magic book that lets you retrieve information from any document ever written, tailored to a document you wrote.
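
To make the "probability engine" point concrete, here's a minimal sketch (a toy bigram model in Python, my own illustration; a real LLM is a neural network conditioning on a much longer context, but the interface is the same: given the preceding tokens, produce a probability distribution over the next token and sample from it):

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "every document ever written".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Bigram statistics: how often each token follows each token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Sample a next token in proportion to how often it followed `prev`."""
    counts = following[prev]
    if not counts:
        return None  # no recorded continuation for this token
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate a plausible-looking token sequence from a one-word context.
token, output = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))
```

Everything the model "knows" lives in those counts. Scale the counts up into a trained neural network and you get fluent output, but still no world model behind it.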

uroybd@lemmy.world 5 points 1 year ago

Yes. LLMs generate text; they don't use language. Using a language requires an understanding of the subject one is trying to express. LLMs don't understand.

Spzi@lemm.ee 2 points 1 year ago

I guess you're right, but I find this a very interesting point nevertheless.

How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?

For the sake of the comparison, we should talk about the presumed intelligence of other people, not our ("my") own.

uroybd@lemmy.world 0 points 1 year ago

In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions because those decisions are a hodgepodge of statistics and randomness, not because they are very intricate thoughts.

We probably can't compare the outputs, but we can compare the learning. Imagine a human who had consumed all the literature, ethics, history, and every other kind of text these LLMs have: no amount of trick questions would trick them into endorsing racial cleansing or any other such disconcerting idea. LLMs read so much, and learned so little.
