this post was submitted on 15 Apr 2025
1503 points (98.3% liked)

Technology

Original post: https://bsky.app/profile/ssg.dev/post/3lmuz3nr62k26

Email from Bluesky in the screenshot:

Hi there,

We are writing to inform you that we have received a formal request from a legal authority in Turkey regarding the removal of your account associated with the following handle (@carekavga.bsky.social) on Bluesky.

The legal authority has claimed that this content violates local laws in Turkey. As a result, we are required to review the request in accordance with local regulations and Bluesky's policies.

Following a thorough review, we have determined that the content in question violates local laws in Turkey, as outlined in the legal request. In compliance with these legal provisions, we have restricted access to your account for users.

Fredthefishlord@lemmy.blahaj.zone · 1 point · 1 day ago (last edited)

I don't think your distinction between moderation and filters is correct.

I would say it's closer to this: filters are something to curate what you see, and moderation is something to curate the community.

Trust-based systems are absolutely an amazing idea, but it's worth making it so that once someone reaches a certain level of 'trust', their actions count as actual moderation rather than just filters. A hybrid system could apply filters from a certain set of users while also allowing pure trolling or immoral content (i.e. CP and gore) to be fully removed, so that new users or visitors to the community don't run into it.

It also serves as a stronger prevention measure against racists and Nazis.

It also preserves the corrective function of communities, without letting popular will reject reasonable expectations just because people dislike them. That has happened a lot in Reddit communities.

I strongly agree that purely centralized moderation is bad, but some level of centralization of moderation is beneficial.

You could also have trust be tracked per community rather than platform-wide to prevent exploitation.
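
A rough sketch of that hybrid idea in Python, assuming made-up trust scores and thresholds (nothing here comes from an actual codebase):

```python
# Hypothetical sketch: route a report to a personal filter or real moderation
# based on the reporter's trust level. All names and thresholds are made up.

FILTER_THRESHOLD = 10     # enough trust to hide content behind an opt-in filter
MODERATE_THRESHOLD = 100  # enough trust to remove content for the whole community

def handle_report(reporter_trust: int, post_id: str,
                  filtered: set[str], removed: set[str]) -> None:
    """Apply a report as a filter or as moderation, depending on trust."""
    if reporter_trust >= MODERATE_THRESHOLD:
        removed.add(post_id)    # hidden (and not shared) for everyone
    elif reporter_trust >= FILTER_THRESHOLD:
        filtered.add(post_id)   # hidden only for users who opt into this filter
    # reports from untrusted accounts are ignored

# Example: a long-standing member's report removes a post outright,
# while a newer member's report only adds a post to an opt-in filter list.
filtered, removed = set(), set()
handle_report(150, "post-abc", filtered, removed)
handle_report(25, "post-xyz", filtered, removed)
print(removed)   # {'post-abc'}
print(filtered)  # {'post-xyz'}
```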

> I would say it's closer to this: filters are something to curate what you see, and moderation is something to curate the community.

That's the traditional definition, sure. But traditional moderation essentially prevents others from seeing certain content based on the moderator's opinion, and that's incompatible with a properly P2P application where there is no central authority.

Perhaps here's a more satisfactory definition:

  • filter - data is stored, but not shown
  • moderation - data is not stored (or stored separately)

So an individual client wouldn't have the CP, gore, etc content for a given community because it has been moderated out. However, another user with different moderation settings might still have that content on their machine. If most people in the network remove the content, then the content is effectively gone since it won't be shared, but there's no guarantee that nobody has that content. Content nobody sees value in will disappear, since things are only kept when someone wants it.

Make sense? The only exception here is for that moderation queue, so you might have that content depending on your settings, but it wouldn't be shared with others (client feature).
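
To make the stored-versus-not-stored distinction concrete, here's a minimal Python sketch of how a client might treat the two cases; the class and method names are invented for illustration, not taken from any real client:

```python
# Hypothetical client-side sketch of the filter vs. moderation distinction:
# filtered content is kept but hidden; moderated content is never stored,
# so it also can't be re-shared to peers. Names are illustrative only.

class Client:
    def __init__(self, filter_ids: set[str], moderated_ids: set[str]):
        self.filter_ids = filter_ids        # stored, but not shown to this user
        self.moderated_ids = moderated_ids  # not stored at all
        self.store: dict[str, str] = {}     # local content this client can re-share

    def receive(self, content_id: str, body: str) -> None:
        if content_id in self.moderated_ids:
            return                     # moderation: dropped, never hits disk
        self.store[content_id] = body  # filtered or not, the data is kept

    def visible_feed(self) -> list[str]:
        # filters only affect what this user sees, not what is stored or shared
        return [body for cid, body in self.store.items()
                if cid not in self.filter_ids]

    def shareable(self) -> set[str]:
        # peers can only fetch what this client actually stores
        return set(self.store)

# Example: the filtered post is still shareable; the moderated one is gone.
c = Client(filter_ids={"spam-1"}, moderated_ids={"abuse-1"})
c.receive("spam-1", "buy stuff")
c.receive("abuse-1", "removed material")
c.receive("ok-1", "a normal post")
print(c.visible_feed())  # ['a normal post']
print(c.shareable())     # {'spam-1', 'ok-1'}
```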

> I strongly agree that purely centralized moderation is bad, but some level of centralization of moderation is beneficial.

Perhaps. I just have trouble figuring out how to decide who the moderators are in a way that doesn't lead to the problem of new users flooding a community and kicking out existing users, so elections are out. There are no admins to step in to mediate disputes or recover from a hostile takeover.

So my solution leans on users generally preferring not to associate with scammers, spammers, pedophiles, etc., and that disassociation helps them benefit from the moderation efforts of peers who think similarly to them. However, this also means that Nazis, pedophiles, etc. can use the platform to find like-minded people. But the only people impacted by their nonsense should be those who believe similarly, since other users wouldn't see their content.

So we'll still end up with silos, but they'll be silos that users choose. If they don't like what they're seeing, they have the tools to fix it. Hopefully that's good enough that most people get what they want, with user-driven censorship instead of platform-driven censorship.
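
As a rough illustration of how peers' moderation could carry over, here's a small Python sketch where content is dropped locally once enough trusted accounts have removed it; the trust weights and threshold are hypothetical:

```python
# Hypothetical sketch of "benefiting from your peers' moderation": content is
# dropped locally when enough of the accounts you trust have removed it.
# Trust weights and the threshold are invented for illustration.

def should_drop(content_id: str,
                peer_removals: dict[str, set[str]],
                trust: dict[str, float],
                threshold: float = 1.0) -> bool:
    """Sum the trust of peers who removed this content; drop if it meets the threshold."""
    score = sum(weight for peer, weight in trust.items()
                if content_id in peer_removals.get(peer, set()))
    return score >= threshold

trust = {"alice": 0.6, "bob": 0.5, "spammer": 0.0}  # you choose whom to trust
peer_removals = {
    "alice": {"scam-7"},
    "bob": {"scam-7", "hot-take-3"},
    "spammer": {"post-i-like"},
}

print(should_drop("scam-7", peer_removals, trust))       # True  (0.6 + 0.5)
print(should_drop("hot-take-3", peer_removals, trust))   # False (only 0.5)
print(should_drop("post-i-like", peer_removals, trust))  # False (untrusted peer)
```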

The nice thing about this setup is that we can add centralized moderation if we choose in the form of public filter lists. It would be completely opt-in, and clients could be tuned for that use-case. But because of its distributed nature, there's no protection at the protocol level to prevent undesirable people from forking the client and removing those types of filters, in much the same way that Lemmy doesn't prevent someone from ignoring all moderation.

I'm open to suggestions. I also don't like the idea of Nazis and child abusers using my platform, but the distributed nature means nobody has any form of top-down control. Either we elect a moderator (which is subject to bots and whatnot), or we remove the concept of moderator entirely.

I think we'll end up with accounts that people can trust completely, such as bots that identify CP, gore, extremism, etc, and then you can just explicitly add them to your trusted moderator list. And I'll probably add something like that to the codebase once it's created. But yeah, it's a tricky problem to solve, and I'm trying to lean on reduced centralization when I have to make a choice.
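
For example, subscribing to such trusted accounts could look something like this minimal Python sketch; the account names and fetch function are placeholders, not a real API:

```python
# Hypothetical sketch of opt-in "trusted moderator" accounts (e.g. a CSAM or
# gore labeling bot): the client simply merges the filter lists it subscribes to.
# Account names and the fetch function are placeholders, not a real API.

def fetch_filter_list(account: str) -> set[str]:
    # Placeholder: a real client would fetch the account's published list.
    published = {
        "csam-labeler.example": {"bad-1", "bad-2"},
        "gore-labeler.example": {"gore-9"},
    }
    return published.get(account, set())

def build_local_filters(subscriptions: list[str], personal: set[str]) -> set[str]:
    """Union of your own filters and every list you've chosen to trust."""
    combined = set(personal)
    for account in subscriptions:
        combined |= fetch_filter_list(account)
    return combined

# Opting in is just subscribing; a forked client could skip this entirely,
# which is the protocol-level limitation mentioned above.
filters = build_local_filters(
    subscriptions=["csam-labeler.example", "gore-labeler.example"],
    personal={"annoying-meme-42"},
)
print(sorted(filters))  # ['annoying-meme-42', 'bad-1', 'bad-2', 'gore-9']
```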