Posted to Programming@programming.dev on 10 Jun 2025
OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can't think - they can only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software faster. Given the multi-billion dollar investments, and that there has been more than enough time to run controlled experiments, this should set off loud alarm bells.
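For what it's worth, the analysis side of such a controlled experiment would not be hard. Below is a minimal sketch, assuming a randomized trial that measures task completion times for two groups of developers - the numbers and the exact setup are entirely hypothetical:

```python
from statistics import mean
from scipy import stats

# Hypothetical completion times (hours) for the same task set,
# one group working with an AI assistant, one without.
with_ai = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8]
without_ai = [5.0, 4.7, 5.9, 4.3, 5.2, 6.1, 4.8, 5.4]

# Welch's t-test: is the difference in means larger than noise?
t_stat, p_value = stats.ttest_ind(with_ai, without_ai, equal_var=False)
print(f"mean with AI:    {mean(with_ai):.2f} h")
print(f"mean without AI: {mean(without_ai):.2f} h")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value would indicate a real speed difference;
# these made-up numbers show no such effect.
```

The point is that this is undergraduate-level methodology, so its absence is a choice, not a technical obstacle.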

HaraldvonBlauzahn@feddit.org:

> Ah, still rolling out the old "stochastic parrot" nonsense, I see.

It is a bunch of stochastic parrots. It just happens that, much of the time, the words they are parroting were originally written by intelligent people who were knowledgeable in their fields.

Note this doesn't make the parrots intelligent - in the same way that a book written by Einstein to explain special relativity doesn't have any intelligence of its own. Einstein was intelligent, and his words convey his intelligent ideas, but the book carrying them to other people (that is, the printed pages with a cardboard cover) is as dumb as a stone. You wouldn't ask a piece of cardboard to solve a math problem, would you?
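To make the "statistically plausible patterns" point concrete, here is a minimal sketch of that argument taken literally: a toy bigram model that can only re-emit word transitions it has already seen. The corpus and code are purely illustrative, not a claim about how real LLMs are built:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learns which word follows which,
# then samples statistically plausible continuations.
corpus = (
    "energy equals mass times the speed of light squared . "
    "mass curves spacetime and spacetime tells mass how to move ."
).split()

# Count bigram transitions: word -> list of observed next words.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def parrot(start, length=12):
    """Generate text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(parrot("mass"))
# e.g. "mass times the speed of light squared ." - sounds like physics,
# but the model has no idea what mass or light are.
```

The output reads sensibly only because the source text was sensible; the sampler itself understands nothing.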

Your comment doesn't account for the fact that LLMs can generalise. Often not very well, but they can produce outputs for inputs not seen in their training sets. Otherwise, what would be the point?

> You wouldn't ask a piece of cardboard to solve a math problem, would you?

Uhhh, you know LLMs can solve quite complex maths problems? Including novel ones.