Programming isn't about syntax or language.
LLMs can't do problem solving.
Once a problem has been solved, the syntax and language are easy.
But reasoning about the problem is the hard part.
Take the classic case of "how many 'r's in 'strawberry'", where LLMs would state 2 occurrences.
Just check Google's AI Mode.
The strawberry problem was found and reported on, and has been specifically solved.
Prompted: "how many 'r's in the word 'strawberry'"
Prompted: "how many 'c's in the word 'occurrence'"
So, the specific case has been solved. But not the problem.
In fact, I could slightly alter my prompt and get either 2 or 3 as the answer.
None of this contradicts anything in my post.
Edit - I will add that the AI agent is written to manage the limitations of the LLM: to do, in a very loose sense, the kind of 'thinking' (they don't really think) that the LLM can't do on its own (to briefly address the point in your post).