This post was submitted on 05 Aug 2025
812 points (99.6% liked)
Technology
So Wikipedia has three methods for deleting an article:
- Deletion discussion (AfD): editors debate whether to keep the article, usually for about a week, before a decision is made.
- Proposed deletion (PROD): the article is deleted after a week unless anyone objects.
- Speedy deletion (CSD): an administrator deletes the article on sight if it meets one of a list of narrowly defined criteria. The new rule for LLM-generated articles is one of these criteria.
This new criterion has nothing to do with preempting the kind of trust building you described. The editor who created the article will not be treated any differently than they would be without it. It exists so that editors don't have to deal with the bullshit asymmetry principle and comb through everything to make sure it's verifiable. Sometimes editors make these LLM-generated articles because they think they're helping but don't know how to write one themselves; sometimes it's for some bizarre agenda (e.g. there's a sockpuppet editor who occasionally pops up trying to push LLM-generated articles about the Afghan–Mughal Wars). Whatever the reason, it does nothing but waste other editors' time, and the content can effectively be considered unverified. All this criterion does is expedite the process of purging that bullshit.
I'd argue meticulously building trust to push an agenda isn't a prevalent problem on Wikipedia, but that's a very different discussion.
Thank you for your answer; I'm genuinely happy that Wikipedia is safe, then. The way things are going nowadays, I always expect the worst.
Do you think your problem is similar to open-source developers fighting AI-generated pull requests? There it was theorised that some people try to train their models by having them submit code changes, abusing the maintainers' time and effort to get training data.
Is it possible that this is an effort to steal work from Wikipedia editors, getting you to train their AI models?
I can't definitively say "no", but I've seen no evidence of this at all.