Search engines down? (discuss.tchncs.de)
submitted 5 months ago* (last edited 5 months ago) by rclkrtrzckr@discuss.tchncs.de to c/asklemmy@lemmy.ml

Is it just me or are many independent search engines down? DuckDuckGo, my go-to engine, Qwant, Ecosia, Startpage... all down? The only hint I got was on the Qwant page...

Edit: it all seems to be related to Bing being down. I hope the independent engines will find a way to become truly independent...

[-] WhatAmLemmy@lemmy.world 12 points 5 months ago* (last edited 5 months ago)

I was thinking about this and imagined the federated servers handling the index db, search algorithms, and search requests, but leveraging each user's browser/compute to do the actual web crawling/scraping/indexing; the server would simply perform CRUD operations to store the processed data from clients in the index db. This approach would target the core reason search engines fail (the cost of scraping and processing billions of sites), reduce the cost of hosting a search server, and spread the expense across the user base.
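
Roughly what I'm imagining on the client side, as a TypeScript sketch (the endpoint path and payload shape are completely made up, just to illustrate the flow):

```typescript
// Sketch of the client side: fetch a page, extract terms, and POST the
// processed result to a federated index server, which only has to upsert it.
// Endpoint name and payload shape are invented for illustration.

interface CrawlResult {
  url: string;
  title: string;
  terms: Record<string, number>; // term -> frequency
  fetchedAt: string;
}

async function crawlAndSubmit(url: string, indexServer: string): Promise<void> {
  const res = await fetch(url);
  const html = await res.text();

  // Strip tags and count terms; a real client would do proper parsing.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ");
  const terms: Record<string, number> = {};
  for (const word of text.toLowerCase().match(/[a-z0-9]{3,}/g) ?? []) {
    terms[word] = (terms[word] ?? 0) + 1;
  }

  const result: CrawlResult = {
    url,
    title: (html.match(/<title>(.*?)<\/title>/i)?.[1] ?? "").trim(),
    terms,
    fetchedAt: new Date().toISOString(),
  };

  // The server side is just CRUD on the index db.
  await fetch(`${indexServer}/api/v1/index`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(result),
  });
}
```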

It may also have the added benefit of hindering surveillance capitalism, thanks to a sea of junk queries from every client, especially if the crawler requests were made from the same browser (obviously this needs to be isolated from the user's own data, extensions, queries, etc.). The federated servers would also probably need to operate as lighthouses that orchestrate which domains and IP ranges to crawl, and efficiently distribute the workload to client machines.
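
And a minimal sketch of that lighthouse side, assuming an invented lease-based work queue so two clients don't crawl the same domains at once:

```typescript
// Sketch of the "lighthouse": hand out small batches of domains to clients
// and lease them for a while; if no results come back, reassign them.
// All names and the lease window are invented for illustration.

interface CrawlAssignment {
  clientId: string;
  domains: string[];
  leaseUntil: number; // epoch ms; reassign if nothing arrives by then
}

const pendingDomains: string[] = ["example.org", "example.net"]; // seeded/discovered
const leases = new Map<string, CrawlAssignment>();

function assignWork(clientId: string, batchSize = 10): CrawlAssignment {
  const now = Date.now();

  // Reclaim domains whose lease expired without results.
  for (const [id, lease] of leases) {
    if (lease.leaseUntil < now) {
      pendingDomains.push(...lease.domains);
      leases.delete(id);
    }
  }

  const domains = pendingDomains.splice(0, batchSize);
  const assignment: CrawlAssignment = {
    clientId,
    domains,
    leaseUntil: now + 10 * 60 * 1000, // 10-minute lease, arbitrary
  };
  leases.set(clientId, assignment);
  return assignment;
}
```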

[-] muntedcrocodile@lemm.ee 4 points 5 months ago

Shit man, that's exactly the kind of implementation I was thinking about. I've had the idea for a couple of years now, but now that the fediverse is starting to gain traction, I think it's probably about time some code gets written. Unfortunately, due to CORS you can't just start serving people a JS script that starts indexing in the background.
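
Just to illustrate where it falls over (nothing clever here, only the failure mode): a script served from the search server can't read responses from arbitrary third-party sites, because the browser blocks cross-origin reads unless the target site opts in with CORS headers, which most don't. You'd need a browser extension or a native client instead.

```typescript
// In-page JS: cross-origin fetches from arbitrary sites mostly fail,
// because the browser only exposes the response if the target sends
// Access-Control-Allow-Origin for our origin.

async function tryCrawl(url: string): Promise<string | null> {
  try {
    const res = await fetch(url); // cross-origin request from a web page
    return await res.text();      // only readable if the site allows CORS
  } catch (err) {
    // Typical outcome for most third-party sites: a CORS/network error.
    console.warn(`Blocked or failed: ${url}`, err);
    return null;
  }
}
```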

[-] intensely_human@lemm.ee 2 points 5 months ago

The theory with crawling is it has discovery built into it, no? You follow outbound links and discover domains that way. So you need some seeds, but otherwise you discover based on what other people already know about.
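
A minimal sketch of that seed-based discovery, assuming it runs outside the browser (or in an extension) so the CORS issue above doesn't apply:

```typescript
// Start from a few known pages, pull out outbound links, and queue
// anything not seen yet. A real crawler would parse HTML properly
// and respect robots.txt; this is just the discovery idea.

async function discover(seeds: string[], maxPages = 100): Promise<Set<string>> {
  const queue = [...seeds];
  const seen = new Set<string>(seeds);

  while (queue.length > 0 && seen.size < maxPages) {
    const url = queue.shift()!;
    let html: string;
    try {
      html = await (await fetch(url)).text();
    } catch {
      continue; // dead link, skip it
    }

    // Crude href extraction, good enough to show the principle.
    for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
      const link = match[1];
      if (!seen.has(link)) {
        seen.add(link);
        queue.push(link);
      }
    }
  }
  return seen;
}
```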

To me the problem seems like a few submarines in a cave. They can each see a little bit of what's around them, and then they can share maps. The minimum knowledge of the internet is one's own explorations: as you browse the web, your client stores everything it sees. It can also actively crawl on its own, like a submarine's active sensors constantly mapping out the environment.

Then, in the presence of other friendly subs, you can trade information. So one’s own personal and small map of the internet can get merged and mixed with others to get a more and more complete version.

Obviously this can be automated and batched, but that’s sort of the analogy I see in the real world: multiple parties exploring an unknown/changing space and sharing their data to make a map.
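
A tiny sketch of what "trading maps" could look like, with an invented record structure; the only real decision is how to resolve conflicts when both peers have crawled the same page:

```typescript
// Two peers each hold a partial index keyed by URL and merge them,
// keeping whichever record of a page is fresher. Structure is invented
// just to show the merge step.

interface PageRecord {
  url: string;
  terms: string[];
  fetchedAt: number; // epoch ms
}

type PartialIndex = Map<string, PageRecord>;

function mergeIndexes(mine: PartialIndex, theirs: PartialIndex): PartialIndex {
  const merged = new Map(mine);
  for (const [url, record] of theirs) {
    const existing = merged.get(url);
    // Prefer the fresher crawl of the same page.
    if (!existing || record.fetchedAt > existing.fetchedAt) {
      merged.set(url, record);
    }
  }
  return merged;
}
```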
