Absolutely
To some degree this is how humans go about creating abstractions. Intelligence isn't 1:1 with language, but language is part of the puzzle: communicating your mathematical concepts and abstractions in a way that can be replicated and confirmed through rigorous proof and the scientific method requires language.
Speech and writing are touch at a distance. Speech moves the air until it eventually touches nerve endings in the ear and brain. Similarly, yet very differently, writing stores ideas (symbols, emotions, images, words, etc.) as an abstraction on or in some kind of storage medium (ink on paper, stone etched into stone, words laser-cut into metal, a stick dragged through mud...) that reflects just the right wavelengths of light, focused by your lenses onto the sensors in your retina, "touching" you from a distance as well.
Having two or more "language" models capable of using an abstraction to work through mathematical ideas is absolutely the big deal.
Don't take this badly, but you're both overcomplicating (by totally unnecessarily "decorating" your post with wholly irrelevant details about the transmission and reception of specific forms of human communication) and oversimplifying (by focusing on some pretty irrelevant details and getting some of them wrong).
Also, there's just one language model. The means by which the language was transmitted and turned into data (sound, images, direct ASCII data, whatever) are entirely outside the scope of the language model.
You have a really, really confused idea of how all of this works, and not just the computing stuff.
Worse, even putting aside all of that "wtf" stuff about language transmission in your post, getting an LLM to do maths from language might not be a genuine breakthrough. They might've delivered this "maths support" by cheating: for example, having the neural network recognise maths-related language, transform those language tokens into standard maths tokens, feed them to a perfectly normal algorithmic engine (i.e. hand-coded by humans) that does the calculation, and then translate the results back into human language tokens. In that setup the "AI" part isn't doing or understanding the concept of Maths in any way whatsoever; it's just translating tokens between formats, while a piece of software designed by a person does the actual maths using hardcoded algorithms. Somebody integrating a maths-calculating program into an LLM isn't AI, it's just normal coding.
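To make that concrete, here's a rough sketch of what such "cheating" could look like. The names and routing logic are entirely my own invention, not how any real system does it; the point is just that the "model" only has to spot that a prompt looks like arithmetic and hand it off to ordinary hand-written code:

```python
import ast
import operator
import re

# Hand-coded arithmetic evaluator: no "AI" involved, just an ordinary
# recursive walk over a parsed expression tree.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expression: str) -> float:
    """Safely evaluate a plain arithmetic expression like '12 * (3 + 4)'."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

def answer(prompt: str, llm=None) -> str:
    """Route anything that looks like arithmetic to the hand-coded engine;
    everything else would (hypothetically) go to the language model."""
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)]+", prompt.strip()):
        return str(evaluate(prompt))
    return llm(prompt)  # placeholder standing in for the actual model call

print(answer("12 * (3 + 4)"))  # 84 -- computed by ordinary code, not a neural net
```

The LLM's only job in that arrangement is translation and routing; the maths happens in a deterministic program a human wrote.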
Also, the actual implementation of an LLM is built on basic maths, and it's stupidly simple to get, for example, a neuron in a neural network to add two numbers.
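For what it's worth, that last point is literally true for a single linear neuron: set both weights to 1 and the bias to 0, and its output is just the sum of its inputs. No training needed; a minimal sketch:

```python
import numpy as np

# A single linear "neuron": output = w . x + b.
# With weights [1, 1] and bias 0 it does nothing more than add its two inputs.
weights = np.array([1.0, 1.0])
bias = 0.0

def neuron(x1: float, x2: float) -> float:
    return float(np.dot(weights, np.array([x1, x2])) + bias)

print(neuron(2.0, 3.0))  # 5.0
```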