this post was submitted on 12 Oct 2025
Fuck AI
TBH I don't have much experience with it, because of the myriad other issues that plague LLMs, but style and tone are generally considered the things they're good at.
I have some experience on the receiving side to share. A work colleague of mine recently decided it was a good idea to answer inquiries in MS Teams or email with LLM-generated text. It was very obvious, because the wording was too business-polished and polite, too verbose, and did not sound like anything you would ever say to a colleague. While the content was technically fine, the tone was off by a mile. The generous use of the infamous em dash and unnecessary exclamation marks also gave it away immediately.
That poses a problem. If you do that to a person you're working with and they immediately know you're serving them AI slop because you're too lazy to be bothered with basic human interaction, they WILL be offended. The same goes for customers if they know you personally or expect a human on the other side.
Humans are getting better at identifying AI garbage faster than LLMs are improving, because humans are still excellent at intuitive pattern recognition. Intuitively noticing that something is off is an evolutionary advantage that might save our ass.
If you want to sound like a mewling quim, they are perfect!
I almost did not believe the words "mewling" and "quim" existed in real-life language and had to look them up to make sure you didn't write that comment with an LLM.
Style and tone MIGHT be something they can mimic, but they are phenomenally bad at nuance. An LLM loses information when the model is constructed, and it similarly loses detail when asked to elaborate on a point.