Yup. The robots.txt file isn't only meant to keep robots off the site entirely; it's also meant to keep bots away from resources that aren't interesting to human readers, even indirectly.
For example, MediaWiki installations are pretty clever about this: in the usual setup, /w/ (where the scripts, edit forms, and page histories live) is blocked, while /wiki/ (the plain article path) is encouraged. Nobody wants technical pages and wiki histories in search results; they only want the current versions of the pages.
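For the sake of illustration, a minimal robots.txt in that spirit (assuming the common short-URL setup with /w/ as the script path and /wiki/ as the article path) can be as simple as:

    User-agent: *
    Disallow: /w/

Article pages under /wiki/ stay crawlable, while anything going through the scripts under /w/ (edit forms, diffs, old revisions, special pages) is kept out of the crawl.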
Fun tidbit: in the late 1990s, there was a real epidemic of spammers scraping web pages for email addresses. Some people developed wpoison.cgi, a script whose sole purpose was to generate garbage web pages full of bogus email addresses. Real search engines ignored these, thanks to robots.txt. Guess what the spam bots did?
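The idea is almost trivially simple. Here's a hypothetical sketch of a wpoison-style trap in Python (my own names and structure, not the original wpoison.cgi): a CGI script that spits out a page of random fake addresses plus links back into itself, so a harvester that ignores robots.txt just keeps crawling deeper into garbage.

    #!/usr/bin/env python3
    # Sketch of a wpoison-style honeypot page (illustrative, not the real wpoison.cgi).
    import random
    import string

    def fake_word(n=8):
        return "".join(random.choices(string.ascii_lowercase, k=random.randint(3, n)))

    def fake_email():
        tld = random.choice(["com", "net", "org"])
        return f"{fake_word()}.{fake_word()}@{fake_word()}.{tld}"

    def trap_page(path="/trap"):
        # 50 bogus mailto links for the address harvester to collect
        emails = "\n".join(f"<p><a href='mailto:{fake_email()}'>{fake_email()}</a></p>"
                           for _ in range(50))
        # links back into the trap so the bot keeps crawling
        links = "\n".join(f"<p><a href='{path}/{fake_word()}'>{fake_word()}</a></p>"
                          for _ in range(10))
        return f"<html><body>{emails}\n{links}</body></html>"

    # CGI output: header, blank line, then the poisoned page
    print("Content-Type: text/html\r\n\r\n" + trap_page())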
Do the AI bros really want to go there? Are they asking for model collapse?
I watch a lot of "lost media" discussion channels.
There have been a lot of lost-media searches where the people looking for the thing suddenly found a crucial hint because someone who worked on the project posted a 2.5-second clip of the thing in question in a video CV / showreel.
Expect a lot of that in the future. Except about media that probably didn't even get released at all in the first place.