221
Reasoning failures highlighted by Apple research on LLMs
(appleinsider.com)
Do we know how human brains reason? Not really... Do we have an abundance of long chains of reasoning we can use as training data?
...no.
So we don't have the training data, then, to get language models to talk through their reasoning, especially not in novel or personable ways.
But also - even if we did, that wouldn't produce 'thought' any more than a book about thought can produce thought.
Thinking is relational. It requires an internal self-awareness. We can't discuss that in text to the point that a book suddenly becomes conscious.
This is the idea that "sentience can't come from semantics"... More is needed than that.
I like your comment here, just one reflection:
I think it's like the chicken and the egg: they both come together... One could try to argue that self-awareness comes from thinking, in the fashion of "I think, therefore I am."