The general way it is trained is known; the specifics and techniques are not. But the public does know how one of the flagship models was trained: the training process of DeepSeek R1 is documented in their research paper: https://arxiv.org/pdf/2501.12948
I did read a good chunk of it when it was released.
LLMs have multiple ways to do addition; I'll showcase two as an example. I asked ChatGPT 4.1 to solve a big addition. Here is its output:
You can notice that the whole reasoning is correct, but it wrote the wrong response. I can expand more on this if you want (I do some research on it in my free time).
This way of decomposing the addition in the reasoning was of course learned from the training data.
Now, the trigonometry used to calculate additions that I talked about earlier is not used for writing the "reasoning", but when the model tries to write the correct response. It was created by backpropagation finding a local minimum that can solve additions, in order to more accurately predict the next token.
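Just to give an intuition of what "doing addition with trigonometry" can look like, here is a toy sketch (my own illustration, not the actual circuit inside the model): put the numbers on a circle, and adding them becomes rotating by their angles.

```python
import math

def add_on_circle(a, b, period=10):
    # place each number on a unit circle: its angle encodes the value mod `period`
    angle_a = 2 * math.pi * (a % period) / period
    angle_b = 2 * math.pi * (b % period) / period
    # adding the numbers becomes adding (rotating by) their angles
    total_angle = angle_a + angle_b
    # read the resulting angle back off the circle to get the answer mod `period`
    return round(total_angle / (2 * math.pi) * period) % period

print(add_on_circle(7, 8))    # 5, i.e. (7 + 8) mod 10
print(add_on_circle(36, 59))  # 5, i.e. (36 + 59) mod 10
```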
Artificial neurons were made to behave like neurons: https://en.wikipedia.org/wiki/Artificial_neuron
And the terminology used is neurons; cf. the paper I sent earlier about how they do additions: https://arxiv.org/pdf/2502.00873
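For a concrete picture of what a single artificial neuron computes (a weighted sum of its inputs followed by a nonlinearity), here is a minimal sketch; the numbers are made up for illustration:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # weighted sum of the inputs, plus a bias, passed through a nonlinearity (sigmoid here)
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# made-up inputs and weights, just to show the shape of the computation
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))
```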
I don't doubt that it can perform addition in multiple ways. I would go as far as saying it can probably attempt to perform addition in more ways than the average person, as it has probably been trained on a bunch of math. Can it perform it correctly? Sometimes. That's ok, people make mistakes all the time too. I don't take away from LLMs just because they make mistakes.

The ability to do math in multiple ways is not evidence of thinking though. That is evidence that it's been trained on at least a fair bit of math. I would say if you train it on a lot of math, it will attempt to do a lot of math. That's not thinking, that's just increasing the weighting on tokens related to math.

If you were to train an LLM on nothing but math and texts about math, then asked it an art question, it would respond somewhat nonsensically with math. That's not thinking, that's just choosing the statistically most likely next token.
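To make "choosing the statistically most likely next token" concrete, here is a toy sketch of that selection step (the vocabulary and scores are made up, and real models pick from tens of thousands of tokens):

```python
import math

# made-up vocabulary and raw scores (logits) a model might assign for the next token
vocab = ["7", "8", "9", "cat"]
logits = [2.1, 3.4, 0.5, -1.0]

# softmax turns the raw scores into a probability distribution
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# greedy decoding: pick the single most likely next token
best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best], round(probs[best], 3))
```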
I had no idea about artificial neurons, TIL. I suppose that makes "neural networks" make more sense. In my readings on ML they always seemed to go straight to the tensor and overlook the neuron. They would go over the functions to help populate the weights but never used that term. Now I know.
I've been re-reading my response and, my bad, I meant "artificial neurons were inspired by neurons", not made to behave like them; they actually have little in common.
If you asked a human who speaks German and nothing else a question in English, they would also respond in German (saying that they can't understand you).
LLMs sometimes (not often enough) do respond that they don't know.