An API is an official interface for connecting to a service, usually designed to make it easy for one application to interact with another. It's usually kept stable, and it returns only the structured data the requesting application needs.
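To make that concrete, here's a minimal sketch of what talking to an API looks like, using Lemmy's own JSON endpoint as the example. The URL and field names follow my reading of Lemmy's v3 API, so treat them as illustrative rather than authoritative:

```python
# A minimal sketch of consuming a documented JSON API (Lemmy v3 here;
# the exact endpoint and fields are illustrative).
import json
import urllib.request

req = urllib.request.Request(
    "https://lemmy.ml/api/v3/post/list?limit=5",
    headers={"User-Agent": "api-demo/0.1"},  # identify the client politely
)
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# The API returns structured data with documented field names, so the
# client never has to care how the website happens to be styled.
for item in data["posts"]:
    print(item["post"]["name"])
```

Note that the client only touches documented field names; the site can redesign its entire front end without this code noticing.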
A scraper is an application that extracts data from a human-readable source (e.g. a website) to get at another application's data. Since website designs can change frequently, a scraper can break at any time and has to be updated whenever the site it scrapes changes.
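By contrast, here's roughly what a scraper has to do to pull the same kind of data out of raw HTML. Everything here is a toy example: the page and the `entry-title` class are made up, but the hard-coded dependence on the site's current markup is exactly how real scrapers work:

```python
# A toy screen-scraper using only the standard library. The markup and
# the "entry-title" class are hypothetical.
from html.parser import HTMLParser

PAGE = '<html><body><h2 class="entry-title">Hello world</h2></body></html>'

class TitleScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.capture = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # Brittle part: we depend on the site's current markup. If the
        # devs rename the class or swap <h2> for a <div>, we get nothing.
        if tag == "h2" and ("class", "entry-title") in attrs:
            self.capture = True

    def handle_data(self, data):
        if self.capture:
            self.titles.append(data)
            self.capture = False

scraper = TitleScraper()
scraper.feed(PAGE)
print(scraper.titles)  # ['Hello world'] -- until the markup changes
```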
Reddit clients interact with an API to serve requests, but NewPipe scrapes the YouTube webpage itself. So if YouTube changes its UI tomorrow, NewPipe could very easily break. No one wants to design their app around a fragile base and then build a bunch of stuff on top of it. It's just way too much work for very little payoff.
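To show how "very easily break" plays out, here's the toy scraper from the sketch above run against a hypothetically redesigned page. Nothing about the content changed, only the markup:

```python
# Continuing the scraper sketch above: the same TitleScraper run against
# a hypothetically redesigned page. The change is purely cosmetic, but
# the scraper now silently returns nothing.
REDESIGNED = '<html><body><div class="post-title">Hello world</div></body></html>'

scraper = TitleScraper()  # defined in the sketch above
scraper.feed(REDESIGNED)
print(scraper.titles)     # [] -- "broken" until a human updates the parser
```

Note the failure mode: no error, just missing data. That's why scraper-based apps need constant human attention.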
It's like entering my house: I can go through the door or the chimney. I'd always take the door, since it's designed for human entry. I could technically use the chimney if there were no door, but if someone lights the fireplace, I'm toast.
Nothing but effort. Nobody wants to constantly babysit a project just because someone else may change their code at a moment's notice. Why would you comb through someone else's HTML and obfuscated JavaScript to figure out how to grab some dynamically rendered data when there's a well-documented, publicly available API?
Also, NewPipe breaks all the time. APIs are generally stable and can last years, if not decades, without changing at all. Meanwhile, NewPipe's parsing breaks every few weeks to months and requires programmer intervention. Just check the project's issue tracker and you'll see it's constantly being patched to keep up with YouTube's changes.