this post was submitted on 21 Aug 2025
1102 points (96.9% liked)
Microblog Memes
I think that's its biggest limitation.
Like, AI basically crowd-sourcing information isn't really the worst thing; crowd-sourced knowledge tends to be fairly decent. People treating it as an authoritative source, though, like they looked it up in an encyclopedia or asked an expert, is a big problem.
Ideally it would be more selective about the 'crowds' it gathers data from. Science questions, for example, should be sourced from scientists, preferably experts in the field the question is about.
Like, Wikipedia (at least for now) is 'crowd-sourced', but individual pages are usually maintained by people who know a lot about the subject. That's why it's more accurate than a 'normal' encyclopedia, though of course it's not foolproof or tamper-proof by any definition.
If we taught AI to be 'media literate' and gave it the ability to double-check its data against reliable sources, it would be a lot more useful.
This is the other problem. You basically have 4 types of redditors:
- People who use the karma system correctly, that is to say they upvote things that contribute to the conversation. Even if you think a comment is 'wrong' or you disagree with it, if it adds to the discussion you're supposed to upvote it.
- People who treat the votes as "I agree / I disagree" buttons.
- People who treat them as "I like this / I hate this" buttons.
- People who do some combination of the above, and I'd say that's the majority.
So more than half the time, people aren't upvoting things because they think they're correct. If LLMs treat karma as a "this is correct" metric, that's a big problem.
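To make the problem concrete, here's a minimal sketch of the kind of score-threshold filter a scraping pipeline might use to pick "good" comments for training data. Everything here (field names, threshold, sample comments) is a made-up illustration, not any real pipeline:

```python
# Hypothetical sketch: using karma score as a quality filter for
# scraped comments. All data and the threshold are invented.

comments = [
    {"body": "Correct but unpopular explanation", "score": -5},
    {"body": "Funny but factually wrong joke", "score": 900},
    {"body": "Helpful answer", "score": 40},
]

MIN_SCORE = 10  # arbitrary cutoff a pipeline might choose

kept = [c["body"] for c in comments if c["score"] >= MIN_SCORE]

# The filter keeps the popular-but-wrong joke and drops the
# correct-but-downvoted answer: score measures popularity and
# agreement, not correctness.
print(kept)
```

The point of the sketch is that any threshold on votes selects for whatever the voters were actually expressing, which, as above, is usually some mix of agreement, amusement, and quality.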
The other bad problem is people who really should know better: tech bros and CEOs going all-in on AI when it's WAY too early for that. As you point out, it's not even really intelligent yet; it just parrots 'common' knowledge.
AI should never be used to create anything on Wikipedia. But theoretically, an open-source LLM trained solely on Wikipedia would actually be kind of useful for asking quick questions.