this post was submitted on 08 Jun 2025
280 points (93.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] BlameThePeacock@lemmy.ca 5 points 2 days ago (5 children)
[–] halcyoncmdr@lemmy.world 32 points 2 days ago (1 children)

[Citation needed]

If anything, the LLMs have gotten less useful and have started hallucinating even more obviously now.

[–] NostraDavid@programming.dev 8 points 2 days ago

7 months ago: https://web.archive.org/web/20241210232635/https://openlm.ai/chatbot-arena/

Now: https://web.archive.org/web/20250602092229/https://openlm.ai/chatbot-arena/

You can see that o1-mini, a silver (almost gold) model, is now a middle-of-the-road copper model.

Note that Chatbot Arena calculates its scores relatively: it shows two outputs side by side (without the model names), people select the output they prefer, and those pairwise preferences are aggregated into a relative ranking. Not sure what exactly accounts for the gold/silver/copper tiers.
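
For the curious, here's a minimal sketch (in Python) of the kind of Elo-style update you can build from pairwise votes like that. The actual leaderboard fits a Bradley-Terry model over all the votes at once, so treat this as an illustration rather than their real scoring code:

```python
# Toy Elo update from one pairwise preference vote. Chatbot Arena actually
# fits a Bradley-Terry model over all votes; this online variant just
# illustrates how relative scoring works.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A is preferred over model B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return both models' updated ratings after one human vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# A lower-rated model beating a higher-rated one gains ~20 points:
print(elo_update(1200.0, 1300.0, a_won=True))  # -> (~1220.48, ~1279.52)
```

That relativity is why a model's tier can slide over time even if the model itself never changes: newer, stronger opponents drag its rating down.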

[–] PixelatedSaturn@lemmy.world -5 points 2 days ago (2 children)

Yes. 7 months ago there weren't any reasoning models. The video models were far worse. Coding was nothing compared to the capabilities these models have now.

AI has come far, fast, since this article was written.

[–] MrSmith@lemmy.world 1 points 15 hours ago (1 children)

There aren't any reasoning models now. LLMs cannot reason (and the whole "reasoning" BS has just been busted by Apple), just like they can't orgasm, no matter what daddy Sam tells you.

[–] PixelatedSaturn@lemmy.world 0 points 14 hours ago (1 children)

I think you should try to be even more profane; good rhetorical strategy, well done.

[–] MrSmith@lemmy.world 1 points 14 hours ago (1 children)

Remember kids: when your brain fails to construct an argument - just tone-police!

[–] PixelatedSaturn@lemmy.world 1 points 14 hours ago (1 children)

Says the guy whose argument is an insult. 🤌

[–] MrSmith@lemmy.world 1 points 13 hours ago

Which part was an insult? Don't use an LLM to answer this one, it's not working out well for your reading comprehension.

[–] Voroxpete@sh.itjust.works 20 points 2 days ago (2 children)

Testing shows that current models hallucinate more than previous ones. OpenAI rebadged ChatGPT 5 to 4.5 because the gains were so meagre that they couldn't get away with pretending it was a serious leap forward. "Reasoning" sucks: the model just leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion; in many cases the steps and the conclusion don't match, and because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every "hope for the future" has fizzled utterly.

Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you're getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a "hyperscaling" technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.

The current state of AI is not cost effective. Microsoft (just to pick on one example) is making somewhere in the region of a few tens of millions a year off of Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way for that to work is not only for the rate of progress to be accelerating, but for the rate of acceleration to be accelerating. We're nowhere near that.

The crash is coming, not because LLMs cannot ever be improved, but because it's becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.

[–] queermunist@lemmy.ml 7 points 2 days ago* (last edited 2 days ago) (2 children)

DeepSeek showed there is potential in abandoning the AGI pathway (which is impossible with LLMs) and instead training lots and lots of different specialized models that can be switched between for different tasks (at least, that's how I understand it).
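
If it helps, the "lots of specialized models you switch between" idea looks something like this in toy form: a router in front of several experts. Real mixture-of-experts models learn a gating network that routes per token inside one big model; every name and rule here is made up:

```python
# Toy version of "many specialized models with a router in front". Real
# mixture-of-experts models learn the routing inside the network; this
# crude keyword router is purely illustrative.

from typing import Callable, Dict

EXPERTS: Dict[str, Callable[[str], str]] = {
    "code": lambda p: f"[code expert handles: {p}]",
    "math": lambda p: f"[math expert handles: {p}]",
    "chat": lambda p: f"[general expert handles: {p}]",
}

def route(prompt: str) -> str:
    """Toy router: pick an expert by crude keyword matching."""
    if "def " in prompt or "error" in prompt:
        return "code"
    if any(ch in prompt for ch in "+-*/="):
        return "math"
    return "chat"

def answer(prompt: str) -> str:
    return EXPERTS[route(prompt)](prompt)

print(answer("what is 2 + 2"))    # -> routed to the math expert
print(answer("tell me a story"))  # -> routed to the general expert
```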

So I'm not going to assume LLMs will hit a wall, but further progress is going to require something else paradigm-shifting that we just aren't seeing out of the current crop of developers.

[–] Voroxpete@sh.itjust.works 6 points 2 days ago (1 children)

Yes, but the basic problem doesn't change: you're spending billions to make millions. And DeepSeek's approach only works because they're able to essentially distill the output of less efficient models like Llama and GPT. So they haven't actually solved the underlying technical issues; they've just found a way to break into the industry as a smaller player.
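
For illustration, that kind of distillation boils down to training a small "student" to match a big "teacher" model's output distribution. This is the generic temperature-scaled KL recipe (à la Hinton et al.), not DeepSeek's actual training setup; the tiny models and shapes are made up:

```python
# Toy sketch of knowledge distillation: push a small "student" model's
# token distribution toward a big "teacher" model's softened distribution.

import torch
import torch.nn.functional as F

def distill_step(student, teacher, tokens, optimizer, T: float = 2.0) -> float:
    """One training step of temperature-scaled KL distillation."""
    with torch.no_grad():
        teacher_logits = teacher(tokens)        # (batch, seq, vocab)
    student_logits = student(tokens)

    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature-squared scaling

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Fake "teacher" (bigger) and "student" (smaller) language models:
vocab, dim = 100, 32
teacher = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))
student = torch.nn.Sequential(torch.nn.Embedding(vocab, dim // 4), torch.nn.Linear(dim // 4, vocab))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab, (4, 8))        # a fake batch of token ids
print(distill_step(student, teacher, tokens, opt))
```

The catch, as above: the student's quality is capped by the teacher's, so someone still has to pay for the expensive model upstream.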

At the end of the day, the problem is not that you can't ever make something useful with transformer models; it's that you cannot make that useful thing in a way that is cost effective. That's especially a problem if you expect big companies like Microsoft or OpenAI to continue to offer these services at an affordable price. Yes, Copilot can help you code, but that's worth Jack shit if the only way for Microsoft to recoup their investment is by charging $200 a month for it.

[–] jumping_redditor@sh.itjust.works 1 points 2 days ago (1 children)

AI has a large initial cost, but older models will continue to exist, and open-source models will continue to take potential profit from the corps.

[–] Voroxpete@sh.itjust.works 1 points 1 day ago* (last edited 1 day ago)

It does have a large initial cost. It also has a large ongoing cost. GPU time is really, really pricey.

Even putting aside training and infrastructure, OpenAI still loses money on even their most expensive paid subscribers. While outfits like DeepSeek have shown ways of reducing those costs, the reductions still aren't enough to make these models profitable to run at the kind of workloads they're intended to handle, and attempts to reduce their fallibility make them even more expensive, because they basically just involve running the model multiple times over.
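
To make the "running it multiple times" point concrete, here's the self-consistency trick in toy form: sample several answers and take the majority vote. Reliability goes up a bit, but the inference bill scales linearly with the sample count. `ask_model` is a made-up stand-in for a real (paid, per-token) API call:

```python
# Toy version of the "run it multiple times" reliability fix
# (self-consistency / majority voting).

import random
from collections import Counter

def ask_model(prompt: str) -> str:
    # Pretend model: usually right, occasionally wrong. A real call
    # here costs money on every single invocation.
    return random.choice(["42", "42", "42", "41"])

def self_consistent_answer(prompt: str, n: int = 10) -> str:
    """Sample n answers and return the most common one. Reliability goes
    up, but so does the bill: n samples ~= n times the inference cost."""
    votes = Counter(ask_model(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("what is 6 * 7?"))
```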

[–] skulblaka@sh.itjust.works 7 points 2 days ago

That was pretty much always the only viable path forward for LLM-type AIs. It's an extension of the same machine learning technology we've been building up since the '50s.

Everyone trying to approximate an AGI with it has been wasting their time and money.