this post was submitted on 10 Jun 2025
74 points (94.0% liked)

OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and, to some degree, a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can't think; they only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of the fact that LLMs can't think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.

[–] HaraldvonBlauzahn@feddit.org 6 points 2 days ago* (last edited 2 days ago)

Responding to another comment in opensource@lemmy.ml:

Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.

What you are confusing here is doing something that can benefit from logical thinking with doing science. For example, arithmetic is part of mathematics, and mathematics is a science - but summing numbers is not necessarily doing science. And if you roll, say, octal dice to see whether the result happens to match an addition task, that is certainly not doing science; the dice still can't think logically and certainly aren't doing math, even if the result sometimes happens to be correct.
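
To make the dice analogy concrete, here is a small Python sketch (my own illustration, not from the article or the comment I'm replying to): a process that ignores its inputs and emits random sums will still land on the right answer some of the time, without performing any arithmetic at all.

```python
import random

def add(a: int, b: int) -> int:
    """Actually computes the sum - a deterministic, logical procedure."""
    return a + b

def dice_guess(a: int, b: int, sides: int = 8) -> int:
    """'Answers' the addition task by rolling two eight-sided dice and
    summing the faces, ignoring the operands a and b entirely."""
    return random.randint(1, sides) + random.randint(1, sides)

trials = 10_000
hits = 0
for _ in range(trials):
    a, b = random.randint(1, 8), random.randint(1, 8)
    if dice_guess(a, b) == add(a, b):
        hits += 1

# The dice are "right" occasionally, yet they perform no addition at all;
# being sometimes correct is not evidence of reasoning or arithmetic.
print(f"coincidental matches: {hits}/{trials} ({hits / trials:.1%})")
```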

For the dynamic vs static typing debate, see the article by Dan Luu:

https://danluu.com/empirical-pl/

But this is not the central point of the blog post above. Its central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them exposes one to very strong psychological biases via the Barnum effect. Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, such self-experimentation is actively harmful, because these effects lead to self-reinforcing and harmful beliefs.

And the quibbling about what "thinking" means just shows that the pro-AI arguments have degraded into a debate about belief - the argument has become "but it seems to be thinking to me", even though it is neither technically possible nor observed in practice that LLMs apply logical rules, derive logical facts, explain their output by reasoning, are aware of what they 'know' and don't 'know', or optimize decisions for multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).

What would be needed here are objective, controlled experiments testing whether developers equipped with LLMs can produce working, maintainable code any faster than developers without them.

And the very likely result is that the code they produce using LLMs is never better than the code they write themselves.
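
For what it's worth, the statistics for such an experiment are not the hard part - the experimental design is (random assignment, realistic tasks, blinded assessment of correctness and maintainability). Below is a minimal sketch, using entirely hypothetical completion times made up for illustration, of how the timing comparison between an LLM group and a control group could be analyzed with a simple permutation test.

```python
import random
import statistics

# Hypothetical task-completion times in hours - illustrative numbers only,
# not data from any real study.
with_llm    = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.0]
without_llm = [4.0, 5.3, 4.1, 5.8, 5.2, 4.7, 4.6, 5.1]

observed_diff = statistics.mean(with_llm) - statistics.mean(without_llm)

# Permutation test: how often does a random relabelling of the pooled
# measurements produce a difference at least as large as the observed one?
pooled = with_llm + without_llm
n = len(with_llm)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

print(f"observed difference: {observed_diff:+.2f} h, p ≈ {extreme / trials:.3f}")
```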