this post was submitted on 02 Sep 2025
62 points (93.1% liked)

Fuck AI

4024 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
all 17 comments
[–] ALoafOfBread@lemmy.ml 56 points 1 week ago* (last edited 1 week ago) (3 children)

The article is talking about GPT-5 supposedly being able to write in a literary style, but actually generating nonsense: "GPT-5 has been optimized to produce text that other LLMs will evaluate highly, not text that humans would find coherent."

Looks like it was trained to write prose that other LLMs find acceptable, not prose that humans would judge to be good.

[–] floquant@lemmy.dbzer0.com 27 points 1 week ago

Dead everything theory

[–] unexposedhazard@discuss.tchncs.de 20 points 1 week ago (1 children)

AI companies love validating their tools with AI so this is no surprise. Everything is a loop with these people. A poop loop.

[–] CitizenKong@lemmy.world 6 points 1 week ago

A dopey poop loop.

[–] Eric_Pollock@lemmy.dbzer0.com 10 points 1 week ago

Thanks for the summary; the clickbait headline made me not even want to click.

[–] ZILtoid1991@lemmy.world 25 points 1 week ago (1 children)

Due to the nature of the algorithm, LLMs love to jam adjectives in front of as many nouns as possible, and somehow it has become even more prominent. Since there's a good chance AI is being trained on AI-generated text, I think it's the result of a feedback loop. You could call it the sepia filter of text generators; let's hope it leads to model collapse.

[–] kahdbrixk@feddit.org 7 points 1 week ago

Training LLMs with LLMs. What could ever go wrong? Vibe coding the vibe code generator. All for the sake of being the best and the fastest. Skynet, here we come. But the chaotic, degenerate version that has no reason for killing everything.

[–] Treczoks@lemmy.world 20 points 1 week ago (1 children)

The reason Claude rates ChatGPT slop as "literature" is that Claude is also an AI, with AI issues.

[–] hendrik@palaver.p3x.de 17 points 1 week ago* (last edited 1 week ago)

That's not bizarre at all. It's a direct effect of these models being optimized by having another AI judge the output and then tuning them until they score well.
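
Roughly, the loop looks something like this toy sketch (everything here is a made-up stand-in, not real training code; real pipelines use an actual LLM as judge and gradient updates rather than keeping the best sample):

```python
import random

# Toy sketch of "optimize against an AI judge" (RLAIF-style).
# All functions and data are stand-ins for illustration only.

CANDIDATE_STYLES = [
    "plain, readable sentence",
    "sentence with a few adjectives",
    "shimmering, luminous, ineffable cascade of adjectives",
]

def generate(prompt: str) -> str:
    """Stand-in for sampling a completion from the model being tuned."""
    return random.choice(CANDIDATE_STYLES)

def judge_score(completion: str) -> int:
    """Stand-in for a second LLM rating the text.
    Crude proxy: more commas ~ more stacked adjectives."""
    return completion.count(",")

def tune(prompt: str, steps: int = 100) -> str:
    """Keep whatever the judge scores highest.
    Human coherence never enters the loop anywhere."""
    best, best_score = "", -1
    for _ in range(steps):
        completion = generate(prompt)
        score = judge_score(completion)
        if score > best_score:
            best, best_score = completion, score
    return best

print(tune("Write something literary."))
```

Whatever the judge happens to reward (here, adjective pile-ups) is exactly what the tuned model learns to produce.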

[–] TropicalDingdong@lemmy.world 14 points 1 week ago (2 children)

The more I see these issues, the more I think the problem is with gradient descent.

It's like...

Imagine you have a machine draped in a sheet. Machine learning, for all the bells and whistles about attention blocks and convolutional layers, is doing gradient descent and still playing "better or worse". But fundamentally it's not building its understanding of the world from "below". It's not taking blocks or fundamentals and combining them. It's going the other way about it. It takes a large field and tries to build an approximation that captures the folds that whatever is under the sheet creates, but it has not one clue what lies under the sheet or why some particular configuration should result in such folds.

There was a really interesting critique on this matter a few weeks ago, I forget where. Also, the half-glass-of-wine issue further highlights the problem. You can papier-mâché over the problem, but you'll not overcome it down this alley we've taken.
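
As a toy illustration of the point (my own sketch, nothing from the article): the optimizer only ever asks "better or worse?" against the data; the mechanism under the sheet never appears anywhere in the loop.

```python
import math
import random

# The hidden "machine under the sheet". The optimizer never gets to look at this.
def under_the_sheet(x: float) -> float:
    return math.sin(3 * x) + (1.0 if x > 0.5 else 0.0)

# The model only ever sees samples of the surface the machine produces.
data = [(x / 50.0, under_the_sheet(x / 50.0)) for x in range(-50, 51)]

def model(w, x):
    """A cubic polynomial surrogate: the 'sheet' we drape over the data."""
    return w[0] + w[1] * x + w[2] * x**2 + w[3] * x**3

def loss(w):
    return sum((model(w, x) - y) ** 2 for x, y in data) / len(data)

def grad(w, eps=1e-5):
    """Numerical gradient: literally asking 'better or worse?' in each direction."""
    g = []
    for i in range(len(w)):
        w_hi = list(w); w_hi[i] += eps
        w_lo = list(w); w_lo[i] -= eps
        g.append((loss(w_hi) - loss(w_lo)) / (2 * eps))
    return g

w = [random.uniform(-0.1, 0.1) for _ in range(4)]
for step in range(2000):
    g = grad(w)
    w = [wi - 0.1 * gi for wi, gi in zip(w, g)]

# The fit approximates the folds of the sheet but encodes nothing about
# the sine wave or the step function hiding underneath.
print("final loss:", round(loss(w), 4))
```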

[–] 01189998819991197253@infosec.pub 2 points 1 week ago* (last edited 1 week ago)

Thanks for the ~~icon~~ avatar. Now that song is, once again, stuck in my head lol

[–] squaresinger@lemmy.world -4 points 1 week ago (1 children)

Depends. A pure LLM, sure, you're right. LLMs are a terrible way to "store" information.

Coupling LLMs with a decent data source, on the other hand, isn't such a terrible idea. E.g. answering the question with a Google search summarized by an LLM can work.
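
Something in this spirit (the function names are made up for illustration, not a real search or LLM API):

```python
# Rough sketch of "search first, summarize second".
# search_web() and llm() are hypothetical stand-ins, not real APIs.

def search_web(query: str) -> list[dict]:
    """Hypothetical stand-in for a real search API."""
    return [{"url": "https://example.com", "snippet": f"(search result for: {query})"}]

def llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM completion call."""
    return f"(model output for a {len(prompt)}-char prompt)"

def answer(question: str) -> str:
    results = search_web(question)[:5]
    sources = "\n".join(f"[{r['url']}] {r['snippet']}" for r in results)
    # The LLM is only asked to summarize the retrieved text and cite it,
    # not to recall facts from its own weights.
    return llm(
        "Answer the question using only the sources below, citing URLs.\n"
        f"Question: {question}\n\nSources:\n{sources}"
    )

print(answer("Who founded this community?"))
```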

The bigger issues here are (a) when it doesn't search but does everything locally, and (b) that the site owners now lose traffic without compensation.

[–] Little8Lost@lemmy.world 5 points 1 week ago

or (c) if scammers can manipulate which phone numbers get displayed in the summary
https://www.zdnet.com/article/scammers-have-infiltrated-googles-ai-responses-how-to-spot-them/

[–] crumbguzzler5000@feddit.org 10 points 1 week ago

It was never great, and with each generation it's getting more and more hit and miss. I'd rather just write in my own words; my vocabulary isn't astounding, but at least it sounds like I wrote it and I know it makes sense.

As for coding, I've personally found that a good chunk of the time the code it spits out looks great but isn't functional without tweaking.

My work has a GPT which they trained on a load of our code base. It outputs great-looking stuff, but damn does it make a lot of it up.

[–] Etterra@discuss.online 6 points 1 week ago

This is what happens when Purple Prose and Word Salad fuck and have a baby.