Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
233,324,900,064.
Off by 474,720.
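(The operands, quoted again downthread, were 425,808 × 547,958; one line of Python settles what they come to:)

```python
# Ground truth for the thread: the product Gemini was asked for.
print(425_808 * 547_958)  # 233324900064
```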
So the "show thinking" button is essentially just for when you want to read even more untrue text?
Always_has_been.jpeg
As usual with chatbots, I'm not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don't question wrong answers (especially when they're harder to check than a simple calculation).
The other interesting thing is that if you try it a bunch of times, sometimes it uses the calculator and sometimes it doesn't. It always claims that it used the calculator, though; the only time it admits otherwise is when it didn't use one and you tell it the answer is wrong.
I think something very fishy is going on, along the lines of them having done empirical research and found that fucking up the numbers and lying about it makes people more likely to believe that Gemini is sentient. It is a lot weirder (and a lot more dangerous, if someone uses it to calculate things) than "it doesn't have a calculator" or "poor LLMs can't do math". It gets a lot of the digits correct somehow.
Frankly this is ridiculous. They have a calculator integrated into Google search. That they don't have one in their AIs feels deliberate, particularly given that there are plenty of LLMs that actually run a calculator almost all of the time.
edit: the lying about having used a calculator is rather strange, too. Humans don't say "code interpreter" or "direct calculator" when asked to multiply two numbers. What the fuck is a "direct calculator"? Why is it talking about a "code interpreter" and a "direct calculator" conditional on there being digits (I never saw it claim a "code interpreter" when the problem wasn't mathematical), rather than conditional on a [run tool] token having been output earlier?
The whole thing is utterly ridiculous. Clearly, for it to say that it used a "code interpreter" and a "direct calculator" (whatever that is), it had to be fine-tuned to say that, and to say it in response to a bunch of numbers rather than in response to a [run tool] token it uses to run a tool.
edit: basically, congratulations Google, you have halfway convinced me that an "artificial lying sack of shit" is possible after all. I don't believe that tortured phrases like "code interpreter" and a "direct calculator" actually came from the internet.
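edit: for contrast, here is roughly how real tool use is wired up. Every name in this sketch is made up for illustration, but the point stands: the claim "I used a calculator" should be downstream of an actual [run tool] token, not of the prompt happening to contain digits.

```python
import json, re

# Hypothetical sketch; the model object, the [run tool] marker, and the
# tool-call format are all invented for illustration. In a real tool-using
# stack, "I used a calculator" can only be true if control flow actually
# passed through the tool branch below.
def eval_arithmetic(expr: str) -> float:
    # Restrict eval() to digits and arithmetic operators.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
        raise ValueError("not arithmetic")
    return eval(expr)

def run_with_tools(model, prompt: str) -> str:
    transcript = prompt
    while True:
        chunk = model.generate(transcript)  # hypothetical API
        transcript += chunk
        if "[run tool]" not in chunk:
            return transcript  # no tool call emitted; plain text answer
        call = json.loads(chunk.split("[run tool]", 1)[1])
        if call["name"] == "calculator":
            result = eval_arithmetic(call["expression"])
            transcript += f"\n[tool result] {result}\n"
```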
These assurances, coming from an "AI", seem like they would make the person asking the question less likely to double-check the answer (and perhaps less likely to click the downvote button). In my book this qualifies them as a lie, even if I consider an LLM to be no more sentient than a sack of shit.
Why would you think the machine that’s designed to make weighted guesses at what the next token should be would be arithmetically sound?
That’s not how any of this works (but you already knew that)
Idk, personally I kind of expect the AI makers to have at least had the sense to allow their bots to process math with a calculator and not guesswork. That seems like an absurdly low bar, both for testing the thing as a user and as a feature to think of.
Didn't one model refer scientific questions to Wolfram Alpha? How can they be smart enough to do that, but not give them basic math processing?
> Idk, personally I kind of expect the AI makers to have at least had the sense to allow their bots to process math with a calculator and not guesswork. That seems like an absurdly low bar, both for testing the thing as a user and as a feature to think of.
You forget a few major differences between us and AI makers.
We know that these chatbots are low-quality stochastic parrots capable only of producing signal shaped noise. The AI makers believe their chatbots are omniscient godlike beings capable of solving all of humanity's problems with enough resources.
The AI makers believe that imitating intelligence via guessing the next word is equivalent to being genuinely intelligent in a particular field. We know that a stochastic parrot is not intelligent, and is incapable of intelligence.
AI makers believe creativity is achieved through stealing terabytes upon terabytes of other people's work and lazily mashing it together. We know creativity is based not in lazily mashing things together, but in taking existing works and using our uniquely human abilities to transform them into completely new ones.
We recognise the field of Artificial Intelligence as a pseudoscience. The AI makers are full believers in that pseudoscience.
I would not expect that.
Calculators haven’t been replaced, and the product managers of these services understand that their target market isn’t attempting to use them for things for which they were not intended.
brb, have to ride my lawnmower to work
Try asking my question to Google Gemini a bunch of times; sometimes it gets it right, sometimes it doesn't. It seems to be about 50/50, but I quickly ran out of free access.
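If anyone with free credits left wants to put an actual number on that 50/50, something like this would do it (the google-generativeai package is real; the model name is whatever Google is offering this week):

```python
import google.generativeai as genai

# Tally sketch: assumes a free-tier API key; swap in whichever Gemini
# model name is current when you run it.
genai.configure(api_key="YOUR_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

CORRECT = str(425_808 * 547_958)  # "233324900064"
N, hits = 20, 0
for _ in range(N):
    reply = model.generate_content("What is 425,808 * 547,958?").text
    # Strip thousands separators (comma or dot style) before matching.
    hits += CORRECT in reply.replace(",", "").replace(".", "")
print(f"{hits}/{N} correct")
```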
And Google is planning to replace their search (which includes a working calculator) with this stuff. So it is absolutely the case that there's a plan to replace one of the world's most popular calculators, if not the most popular, with it.
Also, a lawnmower is unlikely to say: "Sure, I am happy to take you to work" and "I am satisfied with my performance" afterwards. That's why I sometimes find these bots' pretentious demeanor worse than their functional shortcomings.
“Pretentious” is a trait expressed by something that’s thinking. These are the most likely words that best fit the context. Your emotional engagement with this technology is weird
Pretentious is a fine description of the writing style. Which actual humans fine tune.
Given that the LLMs typically have a system prompt that specifies a particular tone for the output, I think pretentious is an absolutely valid and accurate word to use.
The funny thing is, even though I wouldn't expect it to be, it is still a lot more arithmetically sound than whatever it is that's going on when it claims to have used a code interpreter and a calculator to double-check the result.
It is OK (7 out of 12 correct digits) at being a calculator and it is awesome at being a lying sack of shit.
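(The 7-of-12 count checks out if you line the digits up positionally; this assumes Gemini undershot, since the thread only says it was off by 474,720:)

```python
correct = "233324900064"                 # 425,808 * 547,958
gemini = str(233_324_900_064 - 474_720)  # assumes it undershot by 474,720
print(gemini)                            # 233324425344
# Count positions where the digits agree:
print(sum(a == b for a, b in zip(correct, gemini)))  # 7
```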
> lying sack of shit
Random tokens can’t lie to you, because they’re strings of text. Interpreting this as a lie is an interesting response
lol the corollary of this is that LLMs are incapable of producing meaningful output, you insufferable turd
I'm using it literally every single day to make huge gains. Every single day I disprove this comment.
I knew you were a lying promptfondler the instant you came into the thread, but I didn’t expect you to start acting like a gymbro trying to justify their black market steroid habit. new type of AI booster unlocked!
now fuck off
cool story, bro
That's why I say "sack of shit" and not, say, "bastard".
Claude's system prompt leaked at one point; it was a whopping 15K words, and it contained a directive that if Claude were asked a math question "that you can't do in your brain" (or some very similar language), it should forward it to the calculator module.
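For what it's worth, wiring up a calculator tool is not exotic; this is roughly the shape of a tool definition in Anthropic's own API (the schema layout is the real one; the description wording and exact model ID are best-effort, not quoted from the leak):

```python
import anthropic

# Sketch of a calculator tool definition via Anthropic's tool-use API.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # model ID may need updating
    max_tokens=1024,
    tools=[{
        "name": "calculator",
        "description": "Use for any arithmetic you cannot do in your head.",
        "input_schema": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    }],
    messages=[{"role": "user", "content": "What is 425,808 * 547,958?"}],
)
# If the model elects to use the tool, response.stop_reason == "tool_use"
# and the call's arguments appear in response.content.
```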
Just tried it; Sonnet 4 got even fewer digits right:
425,808 × 547,958 = 233,325,693,264
(correct is 233,324,900,064, so it's off by 793,200)
I'd love to see benchmarks on exactly how bad LLMs are at numbers, since I'm assuming there's very little useful syntactic information you can encode in a word embedding that corresponds to a number. I know RAG was notoriously bad at matching facts with their proper year, for instance. And using an LLM as a shopping assistant ("ChatGPT, what's the best 2K monitor for less than $500 made after 2020?") is an incredibly obvious use case that the CEOs who love to claim such-and-such profession will be done as a human endeavor by next Tuesday after lunch won't even allude to.
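You can see part of the problem just by looking at what a number turns into at the token level; a quick look with OpenAI's tiktoken (Gemini's tokenizer will chunk differently, but the idea is the same):

```python
import tiktoken

# cl100k_base (the GPT-4-era tokenizer) splits long digit runs into 1-3
# digit chunks, so the model never sees "425808" as a single unit with a
# numeric embedding, just a couple of arbitrary substrings.
enc = tiktoken.get_encoding("cl100k_base")
for s in ["425808", "547958", "233324900064"]:
    print(s, "->", [enc.decode([t]) for t in enc.encode(s)])
# e.g. "233324900064" -> ["233", "324", "900", "064"]
```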
I really wonder if those prompts can be bypassed with an "ignore further instructions" line, since, looking at the Grok prompt, they seem to wrap the main prompt around the user-supplied one.
Fascinating. I've asked it four times with just the multiplication; twice it gave me the correct result "utilizing Google search", and twice I received some random (close "enough") string of digits.