this post was submitted on 08 Aug 2025
413 points (99.3% liked)

Fediverse


A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".


founded 5 years ago

Drop Site News published a list of the websites Facebook uses to train its AI. Multiple Lemmy instances are on the list, as noticed by user BlueAEther.

Hexbear is on there too. Also Facebook is very interested in people uploading their massive dongs to lemmynsfw.

Full article here.

Link to the full leaked list download: Meta leaked list pdf

[–] rimu@piefed.social 20 points 1 week ago (3 children)

Check out the robots.txt on any Lemmy instance....
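For anyone who doesn't want to go look: the stock Lemmy robots.txt is pretty minimal. Roughly something like this (paraphrased from memory, so exact paths and the delay value vary by version and instance):

```
User-Agent: *
Disallow: /login
Disallow: /settings
Disallow: /create_community
Disallow: /create_post
Disallow: /search/
Crawl-delay: 55
```

It's just a politeness request: skip a few UI paths and slow down. Nothing in it actually stops a scraper that chooses to ignore it.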

[–] usernamesAreTricky@lemmy.ml 33 points 1 week ago (1 children)

Linked article in the body suggests that likely wouldn't have made a difference anyway

The scrapers ignored common web protocols that site owners use to block automated scraping, including “robots.txt” which is a text file placed on websites aimed at preventing the indexing of content
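Worth remembering that robots.txt is purely advisory: a crawler has to opt in to checking it. Here's a minimal sketch of what that opt-in looks like with Python's standard `urllib.robotparser` (the rules here are made up for illustration):

```python
from urllib import robotparser

# A well-behaved crawler parses robots.txt and checks each URL
# before fetching it. Nothing enforces this; it's voluntary.
rp = robotparser.RobotFileParser()
rp.parse("""\
User-Agent: *
Disallow: /search/
Crawl-delay: 60
""".splitlines())

print(rp.can_fetch("AnyBot", "https://example.com/search/q"))  # False
print(rp.can_fetch("AnyBot", "https://example.com/post/1"))    # True
print(rp.crawl_delay("AnyBot"))                                # 60
```

A scraper that skips the `can_fetch` call entirely sees no difference at all, which is exactly what the article describes.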

[–] mesamunefire@piefed.social 25 points 1 week ago* (last edited 1 week ago) (1 children)

Yeah, I've seen the argument in blog posts that since they're not search engines they don't need to respect robots.txt. It's really stupid.

[–] AmbitiousProcess@piefed.social 19 points 1 week ago

"No no guys you don't understand, robots.txt actually means just search engines, it totally doesn't imply all automated systems!!!"

[–] belated_frog_pants@beehaw.org 4 points 1 week ago (1 children)
[–] rimu@piefed.social 4 points 1 week ago (1 children)

Thieves can smash a window to get into my house but I still lock my doors.

[–] belated_frog_pants@beehaw.org 1 points 1 week ago

This is more like being there when they come to steal and asking them to please ignore some rooms.

[–] Pamasich@kbin.earth 3 points 1 week ago

If they have a brain, and they do have the experience from Threads, they don't need to scrape Lemmy. They can just set up a shell instance, subscribe to Lemmy communities, and then use federation to get their data for free. That doesn't use robots.txt at all regardless.
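Right, federation hands the data over by design. For anyone unfamiliar: a shell instance just sends an ActivityPub Follow to a community, and the community then delivers every new post to the follower's inbox. A rough sketch of that Follow activity (all URLs hypothetical; real Lemmy federation also involves HTTP signatures and more fields):

```python
import json

# Hypothetical Follow activity a shell instance would POST to a
# Lemmy community's inbox to start receiving federated content.
follow = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://shell.example/activities/1",       # made-up activity id
    "type": "Follow",
    "actor": "https://shell.example/u/collector",     # made-up follower actor
    "object": "https://lemmy.example/c/fediverse",    # made-up community
}
print(json.dumps(follow, indent=2))
```

After that, no crawling is needed at all: the data arrives as signed deliveries, and robots.txt never enters the picture.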