“LLMs can’t think - only generate statistically plausible patterns”
Ah, still rolling out the old "stochastic parrot" nonsense, I see.
Anyway, on to the actual article... I was hoping it wouldn't make basic mistakes like this:
“[TypeScript] looks more like an “enterprise” programming language for large institutions, but we honestly don’t have any evidence that it’s genuinely more suitable for those circumstances than the regular JavaScript.”
Yes, we do. Frankly, if you've used it, it's so obviously better than regular JavaScript that you probably don't need more evidence (it's like looking for "evidence" that film stars are more attractive than average people). But in any case, we do have great papers like this one.
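To put the "obviously better" claim in concrete terms, here's a trivial, made-up illustration (the orderTotalCents function is mine, not anything from the article) of the kind of bug the TypeScript compiler rejects outright, while plain JavaScript runs it and silently hands you garbage:

```typescript
// Hypothetical example: summing order line items, in cents.
function orderTotalCents(items: number[]): number {
  return items.reduce((sum, item) => sum + item, 0);
}

console.log(orderTotalCents([1999, 450, 99])); // OK: prints 2548

// Plain JavaScript accepts the call below and quietly returns the string
// "199945099" via string concatenation; the TypeScript compiler refuses to
// build it: Type 'string' is not assignable to type 'number'.
// console.log(orderTotalCents([1999, "450", 99]));
```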
Anyway, that's slightly beside the point. I think the article is right that smart people are not immune to manipulation, or to falling for "obviously" stupid ideas. I know plenty of very smart religious people, for example.
However, I think using this to dismiss LLMs is dumb, in the same way that his dismissal of TypeScript is. LLMs aren't homeopathy or religion.
I have used LLMs to get some work done and... guess what, they did the work! Do I trust them to do everything? Obviously not. But sometimes I don't need perfect code. For example, I recently asked one to create an example SystemVerilog file for me utilising as many syntax features as possible (I was testing an auto-formatter). It did a pretty good job and saved me some time. What psychological hazard have I fallen for, exactly?
Overall, B-. Interesting ideas but flawed logic.