I'm not against AI use in software development... But you need to understand what the tools you use actually do.
An LLM is not a dev. It doesn't have the capability to think on a problem and come up with a solution. If you use an LLM as a dev, you are an idiot pressing buttons on a black box you understand nothing about.
An LLM is a predictive tool. So use it as a predictive tool.
The one use of AI, at the moment, that I actually like and that actually improves my workflow is JetBrains' full-line completion AI. It very often accurately predicts what I want to write when it's boilerplate-ish, and shuts up when I write something original.
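For anyone who wants to see what "predictive tool" means at the mechanical level, here is a minimal sketch of greedy next-token prediction. It uses the Hugging Face transformers library with GPT-2 as an arbitrary stand-in model (an assumption purely for illustration; it is not the model behind JetBrains' completion or any product mentioned here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Boilerplate-ish prompt: the continuation is highly predictable.
prompt = "def add(a, b):\n    return"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(8):
        logits = model(input_ids).logits   # one score per vocabulary token, per position
        next_id = logits[0, -1].argmax()   # take the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything the model does, chat included, is a loop like this: score every possible next token, pick one, append it, repeat. That's the sense in which it's a predictive tool rather than a dev.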
Yes, they do have the ability to think and reason just like you (generally much faster and slightly better).
https://medium.com/@leucopsis/how-gpt-5-compares-to-claude-opus-4-1-fd10af78ef90
96% on the AIME with zero tools, just reading the question and reasoning through to the answer.
https://www.datacamp.com/blog/gpt-5
Absolutely not. This comment shows you have absolutely zero idea how an LLM works.
This is not true. They do not think or reason. They produce output that appears to be reasoning, but it definitely is not.
Once one gets off track, it doesn't recognize that it is obviously wrong. It can fail a simple math problem in a way that is immediately obvious to a human, for example.
No, they can't think and reason. However, they can replicate and integrate the thinking and reasoning of many people who have written about similar problems. And yes, they can do it much faster than we could read a hundred search result pages. And yes, their output often looks slightly better than what many of us would write, because they are often dispensing best practices by duplicating the writings of experts. (In the best cases, that is.)
https://arxiv.org/pdf/2508.01191
https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/