So it does the math in its head and gives the correct answer, then copies the answer sheet from the teacher's book into the "show your work" section. Pretty much what I would have done as a kid if I could have; instead I had to fight them and take a hit to my score for not showing my work.
Thanks for copy-pasting. It should be criminal to share a clickbait, non-descriptive headline without at least copying a couple of paragraphs for context.
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results it of course makes sense to internally think ahead, come up with the full sentence you're going to say, and then just output the next token needed to continue that sentence. The model redoes that process for every single token, which wastes a lot of energy, but for the quality of the results it's the best approach you can take, and it always felt kind of obvious that these models must be doing something like this on one level or another.
I'd be interested to see whether there are massive potential efficiency improvements from making the model able to access and reuse the "thinking" it has already done for previous tokens.
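For what it's worth, mainstream inference engines already do a version of this reuse through the key/value (KV) cache: each decoding step processes only the newest token and attends to cached state from all earlier ones, rather than recomputing the whole prefix. Here's a toy Python sketch of the shape of that loop; `model_step` is a made-up stand-in, not any real library's API:

```python
# Toy sketch of autoregressive decoding with reused per-token state.
# In a real transformer the reused "thinking" is the key/value (KV) cache;
# here a plain list stands in for it.

def model_step(token: int, cache: list[int]) -> tuple[int, list[int]]:
    """Stand-in for one forward pass: consume one new token, reuse the cache.

    A real model would attend over the cached keys/values of all earlier
    tokens instead of recomputing them from scratch every step.
    """
    cache = cache + [token]                          # append-only reuse
    next_token = (token * 31 + len(cache)) % 1000    # placeholder "prediction"
    return next_token, cache

def generate(prompt: list[int], max_new: int) -> list[int]:
    cache: list[int] = []
    pred = 0
    for tok in prompt:                # prefill: run the prompt through once
        pred, cache = model_step(tok, cache)
    out = []
    for _ in range(max_new):          # decode: one token per step, but the
        out.append(pred)              # per-token cost stays flat because the
        pred, cache = model_step(pred, cache)  # cache carries earlier work
    return out

print(generate([1, 2, 3], max_new=5))
```

The planning-ahead the paper describes, like setting up a rhyme several tokens early, is a separate learned behaviour, but the "reuse earlier computation" half of the wish is exactly what the cache already buys.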
I wanted to say exactly this. If you’ve ever written rap/freestyled then this is how it’s generally done.
You write a line to start with
“I’m an AI and I think differentially”
Then you choose a few words that fit the first line as best you can (here the last word was "differentially").
Then you try them out and see what clever shit you could come up with:
Then you sort them in a way that makes sense and come up with wordplay/schemes to embed between them, breaking up the rhyme scheme if you want (AABB, ABAB, AABA, etc.)
You get the idea.
My favourite part of the day: commenting LLMentalist under AI articles.
Is that a weird method of doing math?
I mean, if you give me something borderline nontrivial like, say, 72 times 13, I will definitely do some similar stuff. "Well, it's more than 700 for sure, but it looks like less than a thousand. Three times seven is 21, so that's two hundred and ten on top of the 700, so it's probably in the 900s. Two times 13 is 26, so if you add that to the 910 it's probably 936, but I should check that in a calculator."
Do you guys not do that? Is that a me thing?
But you wouldn't multiply, say, 74*14 to get the answer.
No, but I'd do 75*10 + 75*4, then subtract the extra.
The LLM's method of doing it with multiple numbers, without proper interpolation, is what makes it extra weird, though.
I might. Then I can subtract 74 to get 74*13, and subtract 26 to get 72*13.
I don't generally do that with "weird" numbers; I usually move closer to multiples of 5, 9, 10, or 11.
But a computer stores information differently. Perhaps it moves closer to numbers with simpler binary addresses.
How I'd do it is basically
72 * (10+3)
(72 * 10) + (72 * 3)
(720) + (3*(70+2))
(720) + (210+6)
(720) + (216)
936
Basically I break the numbers apart into easier chunks and then add them together.
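That chunking is just the distributive law. A throwaway Python version of the same steps (the function name is mine, purely illustrative):

```python
def multiply_by_chunks(a: int, b: int) -> int:
    """Split b into tens and ones, multiply each chunk, then add the parts."""
    tens, ones = divmod(b, 10)
    return a * tens * 10 + a * ones   # e.g. 72*13 -> 720 + 216

assert multiply_by_chunks(72, 13) == 936
```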
This is what I do, except I would add 700 and 236 at the end.
Well, except I would probably add 700 and 116 or something, because my working memory fucking sucks and my brain drops digits very easily when there's more than one.
I think what's wild about it is that it really is surprisingly similar to how we actually think. It's very different from how a computer (calculator) would calculate it.
So it's not a strange method for humans but that's what makes it so fascinating, no?
I mean neural networks are modeled after biological neurons/brains after all. Kind of makes sense...
Yes, agreed. And calculators are essentially tabulators, and operate almost just like a skilled person using an abacus.
We shouldn't really be surprised because we designed these machines and programs based on our own human experiences and prior solutions to problems. It's still neat though.
That's what's fascinating about how it does language in general.
The article is interesting in both the ways in which things are similar and the ways they're different. The rough approximation thing isn't that weird, but obviously any human would have self-awareness of how they did it and not accidentally lie about the method, especially when both methods yield the same result. It's a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.
And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.
I wouldn't even attempt that in my head.
I can't keep track of things and then recall them later for the final result.
Pen-and-paper maths I'm pretty decent at, but ask me to calculate anything in my head and it's anyone's guess whether I remembered to carry the 1 or not. Ever since learning about aphantasia, I've been wondering if the inability to visually store values has something to do with it.
I can visually store values and I still struggle. :(
Here's some anecdotal evidence. Until I was 12 or 13, I could do absurdly complex arithmetical calculations in my head. My memory of it was of visualizing intermediate calculations as if they were on a screen in my head. I'd close my eyes to minimize distracting external stimuli. I'd get pocket money because my dad would get his friends to bet on whether I could correctly multiply two 7-digit phone numbers, and when I won, which I always did, he'd give the money to me. He had an old-school electromechanical calculator he'd use to check the results.
Neither of my parents and none of my many siblings had this ability.
I was able to use a similar visualization technique to memorize long passages of music and text. That stayed with me post-puberty, though again to a lesser extent. I've also been able to learn languages more quickly than most.
Once puberty kicked in, my ability to visualize declined significantly, though to compensate I learned some mental arithmetic tricks that I still use now. I was able to get an MS in mathematics without much effort, since that relied on higher-level reasoning and not all that much on powerful memory or visualization. I didn't pursue a Ph.D. due to lack of money, but I think I could have gotten one (though I despise academic politics).
So I think your comment about aphantasia is at least directionally correct, at least as applied to people. But there's little reason to assume LLMs would do things the same way a human mind does, though both might operate under some similar information-theoretic constraints that would cause convergent evolution.
This is pretty normal, in my opinion. Every time people complain about common core arithmetic there are dozens of us who come out of the woodwork to argue that the concepts being taught are important for deeper understanding of math, beyond just rote memorization of pencil and paper algorithms.
The problem with common core math isn't that rounding is inherently bad; it's that you don't start with that as a framework.
Rote memorization should be minimized in school curricula.
Memory can improve with training, and it's useful in a large number of contexts. My major beef with rote memorization in schools is that it's usually made to be excruciatingly boring. I'd say that's the bigger problem.
Nah I do similar stuff. I think very few people actually trace their own lines of thought, so they probably don’t realize this is how it often works.
Huh. I visualize a whiteboard in my head. Then I...do the math.
I'm also fairly certain I'm autistic, so... ¯\_(ツ)_/¯
I do much the same in my head.
Know what's crazy? We sling bags of mulch, dirt and rocks onto customer vehicles every day. No one, neither coworkers nor customers, will do simple multiplication. Only the most advanced workers do it. No lie.
Customer wants 30 bags of mulch. I look at the given space:
"Let's do 6 stacks of 5."
Everyone proceeds to sling shit around in random piles and count as we go. And then someone loses track and has to shift shit around to check the count.
Yeah, one of my family members is a bricklayer and he can work out a bill of materials in his head based on the dimensions in an architectural plan: given these dimensions and this thickness of mortar joint, I'll need this many bricks, this many bags of mortar, this many bags of sand, this many hours of labor, etc. It's just addition and multiplication, but his colleagues regard him as a freak. And when he first started doing it, if you'd ask him to break down his reasoning, he'd find that difficult.
Well, I guess I do a bit of the same:) I do (70+2)(10+3) -> 700+210+20+6
72 * 10 + 70 * 3 + 2 * 3
That's what I do in my head if I need an exact result. If I'm approximating, I'll probably just do something like 70 * 15, which is much easier to compute (70 * 10 + 70 * 5 = 700 + 350 = 1050).
(72 * 10) + (70 * 3) + (2 * 3) = x
There, fixed, because otherwise the order of operations gets fucky.
No it doesn't: multiplication and division always take precedence over addition and subtraction. You'd only need parentheses to clarify what's in a divisor, since that can be ambiguous in line notation.
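A quick check in Python, where the standard precedence applies, shows the parentheses change nothing:

```python
assert 72 * 10 + 70 * 3 + 2 * 3 == (72 * 10) + (70 * 3) + (2 * 3) == 936
```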
OK, I've been willing to just let the examples roll, even though most people are describing how they'd do the exact calculation, not a process of gradual approximation, which was supposed to be the point of the way the LLM does it...
...but this one got me.
Seriously, you think 70x5 is easier to compute than 70x3? Not only is that a harder one for me to get to in the notoriously unfriendly 7 times table, but it's also further from the correct answer and past the intuitive upper limit of 1000.
Times 5 and times 10 tables are really easy for me. So yeah, in my mind it's an easier computation.
That being said, having a result of a little over 1000 gives me an estimate of the number's magnitude: it's around a thousand. It might be more or less, but it's not far from there.
See, for me, it’s not that 7*5 is easier to compute than 7*3, it’s that 5*7 is easier to compute than 7*3.
I saw your other comment about 8's, too, and I've always found those to be a pain, so I reverse them, if not outright convert them into arithmetic problems. 8x4 is some unknown value, but X*8 is always X*10-2X, although I do have most of the multiplication tables memorized for lower values.
8*7 is an unknown number that only the wisest sages can compute, however.
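That X*8 trick, as a throwaway Python one-liner (naming mine):

```python
def times8(x: int) -> int:
    return x * 10 - 2 * x   # x*8 == x*10 - 2x

assert times8(7) == 56      # so much for the wisest sages
```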
For me personally, anything times 5 can be reached by halving the number, then multiplying that number by 10.
Example: 66 x 5 = Y
(66/2) x (5x2) = Y
cancel out the division by scaling the other factor up by the same amount
66/2 = 33
5x2 = 10
33 x 10 = Y
33 x 10 = 330
Y = 330
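The same trick generalizes to odd numbers with one extra step; a quick Python sketch (naming mine, purely illustrative):

```python
def times5(n: int) -> int:
    """n*5 == (n//2)*10, plus a trailing 5 when n is odd."""
    return (n // 2) * 10 + (5 if n % 2 else 0)

assert times5(66) == 330   # the example above
assert times5(7) == 35     # odd case: 3*10 + 5
```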
The 7 times table is unfriendly?
I love 7 timeses. If numbers were sentient, I think I could be friends with 7.
I've always hated it and eight. I can only remember the ones that are familiar at a glance from the reverse table and to this day I sometimes just sum up and down from those "anchor" references. They're so weird and slippery.
Huh.
Going back to the "being friends" thing, I think you and I could be friends due to applying qualities to numbers; but I think it might be challenging because I find 7 and 8 to be two of the best. They're quirky, but interesting.
Thank you for the insight.
I would do 720 + 3 * 70 + 3 * 2
Thanks
🙏
Thanks for copy-pasting here. I wonder if the "prediction" deviates from expectations only in that one case, when making rhymes. I also notice that its way of counting feels interestingly close to how I count when I need to come up with an approximate sum quickly.
Isn't that the "new math" everyone was talking about?
This reminds me of learning a shortcut in math class while knowing the lesson didn't cover that particular method. So I'd use the shortcut to get the answer on a multiple-choice question, but use the method from the lesson when asked to show my work (e.g., Pascal's Triangle vs. binomial expansion).
It might not seem like a shortcut to us, but something about this LLM's training makes it easier for it to use heuristics. It's actually a pretty big deal for a machine to choose fuzzy logic over an algorithm when it knows the teacher wants it to use the algorithm.
You're anthropomorphising quite a bit there. It is not trying to be deceptive; it's building two mostly unrelated pieces of text, deciding in one case that the fuzzy logic yields the most likely valid response, and in the other that the description of the algorithm is the most likely response. As far as I can tell, there's neither a reward for lying about the process nor any awareness of what the process was anywhere in this.
Still interesting (but unsurprising) that it's not getting there by doing actual maths, though.
Maybe you're right. Maybe it's Markov chains all the way down.
The only way I can think of to test this would be to "poison" the training data with faulty arithmetic, to see whether it's just recalling precedent or actually implementing an algorithm.
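A hypothetical sketch of that probe in Python (everything here, names and numbers included, is illustrative, not from the article): generate training pairs whose sums are systematically off, then check whether a model fine-tuned on them reproduces the faulty rule on operands it never saw.

```python
import random

def poisoned_addition(n: int, offset: int = 1, seed: int = 0) -> list[tuple[str, str]]:
    """Build (prompt, answer) pairs where every sum is wrong by `offset`.

    If a model fine-tuned on these answers *unseen* problems with the same
    systematic error, it has internalized a (faulty) algorithm; if it only
    reproduces the exact training pairs, it is recalling precedent.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        pairs.append((f"{a} + {b} =", str(a + b + offset)))
    return pairs

train = poisoned_addition(10_000, seed=0)  # fine-tune on these...
probe = poisoned_addition(100, seed=1)     # ...probe with different operands
```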
Well, that's supposed to be the point of the paper in the first place. They seem to be tracing paths through the neural net and seeing what lights up when they do things step by step. Someone posted a link to the source article somewhere in this thread.
Best they can tell, per the article, the math answer and the explanation of how it got that answer are being generated independently.