this post was submitted on 18 Aug 2025
1128 points (99.0% liked)

[–] dwzap@lemmy.world 24 points 1 day ago (1 children)

The Wikimedia Foundation does just that, and still, their infrastructure is under stress because of AI scrapers.
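For anyone who hasn't used them, the dumps are a single bulk download rather than millions of page fetches. A rough sketch of what that looks like (the exact filename is an assumption based on the usual dumps.wikimedia.org naming, not something from the article):

```python
# Rough sketch, not Wikimedia's official tooling: one streamed bulk download
# replaces crawling the live site page by page. The filename below is assumed
# from the standard dumps.wikimedia.org layout.
import requests

DUMP_URL = (
    "https://dumps.wikimedia.org/enwiki/latest/"
    "enwiki-latest-pages-articles.xml.bz2"
)

def download_dump(url: str = DUMP_URL,
                  dest: str = "enwiki-latest-pages-articles.xml.bz2") -> None:
    """Stream the dump to disk in 1 MiB chunks instead of hammering the wiki."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)

if __name__ == "__main__":
    download_dump()
```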

Dumps or no dumps, these AI companies don't care. They feel entitled to take, or outright steal, whatever they want.

[–] interdimensionalmeme@lemmy.ml 6 points 1 day ago* (last edited 1 day ago)

That's crazy; it makes no sense. It takes as much bandwidth and processing power on the scraper's side to process and use the data as it does to serve it.

They also have an open API that makes scraping entirely unnecessary.
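Something like this rough sketch is all it takes to pull article text through the public MediaWiki Action API instead of scraping HTML (the `prop=extracts` part assumes the TextExtracts extension, which Wikipedia has enabled; the page title is just an example):

```python
# Rough sketch: fetch plain-text article content via the MediaWiki Action API
# rather than scraping rendered HTML.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_extract(title: str) -> str:
    """Return the plain-text extract of a Wikipedia article."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "format": "json",
        "titles": title,
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    # The API keys results by page ID, so grab the first (and only) page.
    pages = resp.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

if __name__ == "__main__":
    print(fetch_extract("Web scraping")[:500])
```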

Here are the relevant quotes from the article you posted:

"Scraping has become so prominent that our outgoing bandwidth has increased by 50% in 2024."

"At least 65% of our most expensive requests (the ones that we can’t serve from our caching servers and which are served from the main databases instead) are performed by bots."

"Over the past year, we saw a significant increase in the amount of scraper traffic, and also of related site-stability incidents: Site Reliability Engineers have had to enforce on a case-by-case basis rate limiting or banning of crawlers repeatedly to protect our infrastructure."

And it's Wikipedia! The entire dataset is already trained INTO the models; it's not like encyclopedic facts change that often to begin with!

The only explanation I can imagine is that it's part of a larger ecosystem issue: elsewhere on the web, dumps and API access are so rare and so untrustworthy that scrapers just scrape everything by default, rather than taking the time to save bandwidth by relying on dumps.

Maybe it's fallout from the 2023 API wars, where it was made clear that data repositories would leverage their position as pools of knowledge to extract rent from search and AI, and places like Wikipedia and other wikis and forums are getting hammered as a result of that war.

If the internet weren't becoming a warzone, there really wouldn't be a need for more than one scraper per site. Even a hostile site like Facebook would only need to be scraped once, and then the data could be shared efficiently over a torrent swarm.