this post was submitted on 23 Aug 2025
36 points (97.4% liked)

No Stupid Questions


Like the "how many r's in strawberry" question. It took off as an Internet meme and was eventually fixed, but how did that fix happen?

top 13 comments
[–] foggy@lemmy.world 11 points 1 day ago* (last edited 1 day ago) (2 children)

The "how many r's in strawberry" question breaks it because the model doesn't read your question. It tokenizes it. So it sees (straw)(berry), except it's more like (477389583)(84838582), and it knows contextually that when those two tokens follow each other like that, it means a different set of things than if there were white space.

The tokens are, basically, numeric values. So the model never reads your characters. That's why this is hard for it.
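A toy sketch of what that looks like (the vocabulary and IDs below are made up for illustration; real tokenizers use learned subword vocabularies and are more involved):

```python
# Toy illustration (hypothetical vocabulary, not a real tokenizer):
# the model never sees characters, only integer token IDs.
toy_vocab = {"straw": 477389583, "berry": 84838582, " berry": 11111111}

def toy_tokenize(text):
    """Greedy longest-match split into token IDs (illustrative only)."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in toy_vocab:
                ids.append(toy_vocab[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i:]!r}")
    return ids

print(toy_tokenize("strawberry"))   # [477389583, 84838582]
print(toy_tokenize("straw berry"))  # [477389583, 11111111] -- whitespace changes the IDs
```

Once the text is just ID sequences like these, there is literally no "r" anywhere for the model to count.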

Tasks that recurse on themselves tend to fail as well, e.g. "say banana 142 times" will not produce the expected result.

As to how they fix them, I'm not positive. There are a bunch of ways to work around issues like these.

[–] liquefy4931@lemmy.world 1 points 10 hours ago

More people need to understand that this is how LLMs function. There is too much belief that these algorithms are actually thinking and reasoning.

[–] FaceDeer@fedia.io 4 points 1 day ago

I'm not a deep expert on LLMs, but I've been following their development and write code that uses them so I can think of two systemic approaches to "solving" the strawberry problem.

One is chain-of-thought reasoning, where the LLM does some preliminary note-taking (essentially talking to itself) before it gives a final answer. I've seen it tackle problems like this by saying "okay, how is strawberry spelled?", listing out the individual letters (presumably because somewhere in its training data was information that let it memorize the spellings of common tokens) and then counting them.

Another is the "agentic" approach, where it might be explicitly provided with functions that allow it to send the problem to specialized program code. E.g., there could be a count_letters(string, letter_to_count) function that it's able to call. I expect that sort of thing would only be present for an LLM that's working in a framework where that sort of question is known to be significant, though, and I'm not sure what that might be in the real world. Something helping users fill out forms, perhaps? Or a "language tutor" that's expected to be able to figure out whatever weird incorrect words a student might type?
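A minimal sketch of what such a tool call could look like, assuming a hypothetical `count_letters` function and dispatch table (this is not any particular framework's real API):

```python
def count_letters(string: str, letter_to_count: str) -> int:
    """The kind of specialized function an agent framework might expose."""
    return string.lower().count(letter_to_count.lower())

# Hypothetical dispatch: the LLM emits a structured tool call instead of
# guessing the answer from token statistics.
TOOLS = {"count_letters": count_letters}

def handle_tool_call(call: dict) -> int:
    """Look up the named tool and invoke it with the given arguments."""
    return TOOLS[call["name"]](**call["arguments"])

result = handle_tool_call({
    "name": "count_letters",
    "arguments": {"string": "strawberry", "letter_to_count": "r"},
})
print(result)  # 3
```

The counting itself is then done by ordinary code, so tokenization never enters into it.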

There are also LLMs that don't tokenize and feed the literal string of characters into the neural network, but as far as I'm aware none of the commonly-used ones are like that. They're just research models for now.

[–] Scipitie@lemmy.dbzer0.com 12 points 1 day ago

Sadly there is no full answer available for you, because many of the processes around this are hidden.

I can only chime in from my own amateur experiments, and there the answer is a clear "it depends". One big way adjustments are made is via additional training data. This simply means that you take more data and feed it into an already trained LLM. The result is again an LLM black box with all its stochastic magic.

The other big way is system prompts. Those are simply instructions that get interpreted as part of the request and impose limitations.

These can get quite fancy by now, in the sense of: "when the following query asks you to count something, run this Python script with whatever you're supposed to count as input; the result will be a JSON that you can then take and do XYZ with."

Or more simply: you tell the model to use other programs, and how to use them.
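A rough sketch of the kind of counting script such a system prompt could point at, returning JSON as described (the function name and output shape are made up for illustration):

```python
import json

def count_occurrences(text: str, target: str) -> str:
    """Count a substring in ordinary code and return JSON the model can quote."""
    return json.dumps({
        "text": text,
        "target": target,
        "count": text.count(target),
    })

print(count_occurrences("strawberry", "r"))
# {"text": "strawberry", "target": "r", "count": 3}
```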

For both approaches I don't need to maintain a list of fixed questions. For the first one I have no way of knowing what it's doing in detail, and I just need to keep the documents themselves.

For the second one it's literally a human readable text.

[–] brucethemoose@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

Yes. Absolutely.

The meme in the research community is that current LLMs are literally trained on benchmarks and common stuff people test in LM Arena, like the "how many r's in strawberry" question. I'm not talking speculatively: Meta literally got caught red-handed doing this. They ran a separate finetune just to look good on LM Arena. And some benchmarks like MMLU have errors in them that many LLMs answer "correctly".

It's not like some single person is collecting all these though.

[–] null@piefed.au 7 points 1 day ago (1 children)

I don't know the answer and I don't know anything about how LLMs are tuned but I think the answer is probably partially yes.

My supposition is:

Instead of providing manual answers to specific questions, you modify the bot's approach to answering different types of questions.

For example, if you ask "what color are bananas" the bot answers this by looking for discussions about the color of different fruits and selects the word that seems to be provided most often.

Alternatively, if you ask "what is two plus two", when the bot parses the question it recognises that it's a math question, so instead of looking for text discussions of math, it converts it to an equation and returns the solution.

Previously, I guess, bots were answering the "how many r's" question in the text-based kind of way, and the fix made the bot interpret it in a more mechanical/mathematical kind of way.
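That routing idea can be sketched as a toy (the patterns and fallback are made up for illustration; no real bot works this literally):

```python
import re

def answer(question: str) -> str:
    """Toy router: send mechanical questions to code instead of text lookup."""
    q = question.lower()

    # Letter-counting questions get handled mechanically.
    m = re.match(r"how many (\w)'s (?:are )?in (\w+)\??", q)
    if m:
        letter, word = m.groups()
        return str(word.count(letter))

    # Simple arithmetic gets converted to an actual computation.
    m = re.match(r"what is (\d+) plus (\d+)\??", q)
    if m:
        a, b = m.groups()
        return str(int(a) + int(b))

    # Everything else falls back to the text-based style of answering.
    return "(answered the training-text way)"

print(answer("How many r's in strawberry?"))  # 3
print(answer("What is 2 plus 2?"))            # 4
```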

It's a pretty salient demonstration of a bot's inability to reason. They're good at making sentences, but they can only emulate reasoning.

[–] otter@lemmy.ca 4 points 1 day ago

That would be the good way of doing this, but I remember right after the strawberry issue was fixed it would still mess up similar queries. They might have hard-coded something in for that one, at least initially

[–] threeonefour@piefed.ca 5 points 1 day ago* (last edited 1 day ago) (1 children)

how did that fix happen?

The LLM gets retrained. Fixes cannot be done by hand because nobody knows how an LLM arrives at the answers it gives. It takes the input, runs it through a gigantic math equation that was generated during training, and gets an answer. If the answer is wrong, the gigantic equation needs to be fixed, but it can only be fixed by retraining.

A lot of models now do "chain of thought" or "reasoning", but those terms seem like they were made by marketing teams. Essentially, researchers found that if an LLM gave a wrong answer, it could be prompted to change its answer to the correct one. For example, if you ask an LLM to count the frequency of each letter in "strawberry" and then ask it how many times "r" shows up, it'll get the right answer. "Reasoning" models simulate this process by getting the LLM to prompt itself several times in the background before giving a final answer. This helps filter out a lot of the "how many R's in strawberry" mistakes, at the cost of turning 1 user prompt into dozens of hidden background prompts, which takes more time and computing power. But at least you might not need to retrain it.
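The decomposition described above (first spell the word out, then count from that explicit list) can be sketched in a few lines, purely as an illustration:

```python
from collections import Counter

# The steps a "reasoning" pass effectively performs: spell the word out
# as individual letters, then count from that explicit list.
word = "strawberry"
letters = list(word)   # ['s','t','r','a','w','b','e','r','r','y']
freq = Counter(letters)

print(freq["r"])  # 3
```

Once the letters are listed explicitly, counting them is a trivial lookup; the hard part for the model is getting from the tokens to that list.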

Math educator 3Blue1Brown has a great series on how neural networks are trained and how they function. I liked the series because it starts off with an overview without any math, in case you want to know the basics without learning the calculus and linear algebra.

[–] black_flag@lemmy.dbzer0.com 5 points 1 day ago (1 children)

That's not always true; they can also use regular logic to flag certain requests (like r's in strawberry) and respond manually, without it ever reaching the model.

[–] threeonefour@piefed.ca 2 points 1 day ago (1 children)

Right, but that's like saying you can fix a broken space heater by wearing a sweater.

[–] black_flag@lemmy.dbzer0.com 1 points 1 day ago

Maybe I misinterpreted the question; I was thinking this included the presentation layer for LLM tools.

[–] fmstrat@lemmy.nowsci.com 1 points 1 day ago

A lot of answers here, but some are dated, as the "fix" isn't in the models. MCP (Model Context Protocol) is one main fix for items like this. It's a standardized protocol for LLMs to talk to tools and data stores, like calculators and dictionaries. This way the token effect doesn't matter, and system prompts only need a small configuration, which processes much faster.

[–] ACbHrhMJ@lemmy.world 2 points 1 day ago

If the model does something undesirable or wrong, it is given the equivalent of a shock with a cattle prod. With repetition, this process reshapes the network and the model avoids the 'bad' areas.