this post was submitted on 21 Aug 2025
1102 points (96.9% liked)
Microblog Memes
I think they’re saying that the kind of people who take LLM-generated content as fact are the kind of people who don’t know how to look up information in the first place. Blaming the LLM for it is like blaming a search engine for showing bad results.
Of course LLMs make stuff up; they are machines that make stuff up.
Sort of an aside, but doctors, lawyers, judges and researchers make shit up all the time. A professional designation doesn't make someone infallible or even smart. People should question everything they read, regardless of the source.
Except we give it the glorifying title "AI". It's supposed to be far better than a search engine; otherwise, why not stick with a search engine (which uses a tiny fraction of the power)?
I don't know what point you're arguing. I didn't call it AI, and even if I did, I don't know any definition of AI that includes infallibility. I didn't claim it's better than a search engine, either. Even if I did, "better" does not equal "always correct."