Language Is a Poor Heuristic for Intelligence
(karawynn.substack.com)
I look at the question by building up through the technology.
Is a book sentient? It is capable of providing recorded knowledge, in the form of a sequence of symbols on a specific subject, at a level of proficiency far above the reader's. But no, it's static information that originated from a human.
Is a library sentient? It allows for systematic retrieval of knowledge on a vast range of subjects, far beyond what any human is capable of knowing. But no, it's just a static categorization of documents curated by a human.
Is a search engine sentient? It allows for automatic retrieval of highly relevant knowledge based on a query from a human. But no, it's just token-based pattern matching to find similar documents.
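The "token-based pattern matching" can be caricatured in a few lines: score each document by how many query tokens it shares, and return the best match. This is a deliberately minimal sketch with made-up documents; real engines add inverted indexes, weighting schemes like TF-IDF or BM25, and much more.

```python
# Toy corpus: document name -> text. All contents are invented for illustration.
docs = {
    "turing": "alan turing proposed the imitation game as a test of machine intelligence",
    "library": "a library organizes documents so knowledge can be retrieved systematically",
    "llm": "a language model assigns probabilities to sequences of tokens",
}

def search(query):
    """Return the document sharing the most tokens with the query."""
    q = set(query.lower().split())
    # Rank documents by the size of their token overlap with the query.
    return max(docs, key=lambda name: len(q & set(docs[name].split())))

print(search("test of machine intelligence"))  # -> "turing"
```

No understanding of the query is involved anywhere; the ranking is pure set intersection over tokens.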
So why would an LLM suddenly be sentient? It's able to produce highly relevant sequences of words from recorded knowledge, specifically tailored to the sequences of words around them. But no, it's just a probability engine that finds highly relevant token sequences matching the surrounding context.
The underlying mechanism simply has no concept of a world view or a mental model of the world around it. It's basically a magic book that lets you retrieve information from any document ever written, in a way tailored to a document you wrote.
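The "probability engine" view above can be sketched with the simplest possible instance: a bigram model that, given the current token, samples the next one purely from counted frequencies in a tiny training text. This is an assumption-laden caricature (real LLMs use deep networks over long contexts), but the core loop, predict-next-token-from-statistics, is the same shape.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "training corpus".
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each token, which tokens follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed, max_len=6):
    """Extend `seed` by repeatedly sampling the next token in
    proportion to how often it followed the current one."""
    out = [seed]
    while len(out) < max_len:
        counts = follows[out[-1]]
        if not counts:  # dead end: this token was never seen mid-corpus
            break
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Every continuation it produces is locally plausible, because each adjacent pair occurred in the training text, yet nothing in the program models what a cat or a mat is.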
Yes. LLMs generate text. They don't use language. Using a language requires an understanding of the subject one is going to express. LLMs don't understand.
I guess you're right, but I find this a very interesting point nevertheless.
How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?
For the sake of the comparison, we should talk about the presumed intelligence of other people, not our own ("my" own).
In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions because those decisions are a hodgepodge of statistics and randomness, not because they are very intricate thoughts.
We probably can't compare the outputs, but we can compare the learning. Imagine a human who had consumed all the literature, ethics, history, and every other kind of text the way these LLMs have; no amount of trick questions would lead them to believe in racial cleansing or any such disconcerting ideas. LLMs read so much, and learned so little.