this post was submitted on 29 Jul 2023
783 points (99.0% liked)
Programming
Quick correction: website scraping and ad blocking are not unlawful. Both make the web more accessible, and ad blocking additionally reduces CO2 emissions by cutting the electricity wasted on serving irrelevant ads. A similar case can be made for web scraping: a user can build their own news feed without sifting through hundreds of pages, and this can be done in a way that does not disrupt the pages' normal function.
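To illustrate the "non-disruptive scraping" point: a polite scraper checks a site's robots.txt before fetching anything. A minimal sketch using Python's standard library (the rules string and user-agent name below are hypothetical placeholders, not from any real site):

```python
# Polite-scraping sketch: consult robots.txt rules before fetching a path,
# so a personal news feed can be built without disrupting the site.
from urllib import robotparser

def is_allowed(robots_txt: str, user_agent: str, path: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch path."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, path)

# Hypothetical robots.txt: blocks /private/ for everyone, allows the rest.
rules = "User-agent: *\nDisallow: /private/\n"
print(is_allowed(rules, "my-feed-bot", "/news/today"))    # True
print(is_allowed(rules, "my-feed-bot", "/private/data"))  # False
```

In a real scraper you would also rate-limit requests and cache responses, which is what keeps scraping from ever looking like a denial-of-service on the site's normal function.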
That is where the two larger issues come in:
The "pay for information" argument is largely a philosophical problem. There is nothing wrong with paying for someone's book or online course, but the blanket statement that one has to pay for information is false. As an open source developer I give my work freely to others and in turn receive theirs freely as well (provided they use an appropriate license, of course).
Two sides really are forming: the "open internet" crowd, which works together for free or perhaps accepts donations, and the proprietary crowd, which holds enormous influence right now.
Google putting web DRM in place would cement that situation. It could mean you can only run vanilla software in your browser, and it could ultimately shut down access to open source platforms entirely, for example if Google only accepts Windows clients and sites therefore become impossible to use from Ubuntu (a possible outcome, not a guaranteed one).
All in all, we cannot perfectly anticipate the outcome, but if we see great potential for harm, it is fair to weigh it against the potential benefits (here, the lofty goal of weeding out bots and scammers). I think the cost-benefit relation is heavily tilted.
TL;DR: Tinkering with your browser is not illegal and should be allowed to continue. The cost of (potentially) weeding out bots and scammers is not worth potentially ruining the open source community.
Doesn't Google scrape websites? Isn't that the entire purpose of google.com? If that were illegal, then Google would be the biggest offender. The author should probably look where he's pointing his gun before firing it.
It sounds like the author is mixing up their points. It is not unlawful to scrape websites, but Google kind of makes it look that way, which is inherently bad. There are no two ways about this: Google needs to step back.