submitted 1 year ago* (last edited 1 year ago) by SteelBeard@lemmy.world to c/youshouldknow@lemmy.world

Why YSK: Beehaw defederated from Lemmy.World and Sh.itjust.works, effectively shadowbanning anyone from those instances. You will not be able to interact with their users or posts.

Edit: A lot of people are asking why Beehaw did this. I want to keep this post informational and not color it with my personal opinion. I am adding a link to the Beehaw announcement; if you are interested in reading it, you can form your own views. https://beehaw.org/post/567170

t0e@lemmy.world 1 point 1 year ago

I think that makes a lot of sense and it's exactly the kind of stuff we should be considering at this stage. I also agree that humans are the ideal source of empathy and the best way to get around systems of secret code words and other methods that are used to circumvent algorithmic control.

But I also think AI-generated algorithms have their place. By design, content moderation is an unpaid task. Many volunteers are very good at moderation, but the work takes up a lot of their time, and some of the best minds may decide to step away from moderation if it becomes too burdensome. On reddit, I saw a lot of examples of moderators who, as flawed humans, made choices that were not empathetic, but rather driven by a desire for power and control. Of course, if we make mistakes during the algorithm training process and allow our AI to be trained on the lowest common denominator of moderators, the algorithm may end up being just as power hungry - or even worse, considering that bots never tire or log off.

But I do think there are ways to get past that, if we're careful about how we implement such systems. While bots may not be capable of empathy (depending on your definition), based on some conversations with AI chatbots I think AI can be trained to simulate empathy very closely. But as you mentioned about secret messages, bots will likely always be behind the curve when it comes to recognizing dog whistles and otherwise obfuscated hate speech. As long as we always have dedicated, empathetic humans taking part, though, the AI should be able to catch up quickly whenever a new pattern emerges. We may even be able to tackle these issues by sending our own bots into enemy territory and learning the dog whistles as they're being developed, though that strategy could have negative side effects as well.
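To make that concrete, here's a rough sketch of what I mean by the AI catching up from human decisions. This is purely my own illustration, not anything any instance actually runs; the vectorizer, model, labels, and example comments are all placeholders.

```python
# Rough illustration only: an online classifier that keeps learning from human
# moderator decisions. The model, features, and data here are placeholders,
# not any real instance's moderation stack.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
classifier = SGDClassifier(loss="log_loss")  # supports incremental updates via partial_fit

def learn_from_moderators(comments, labels):
    """Fold a batch of human-reviewed examples (1 = hate speech, 0 = fine) into the model."""
    X = vectorizer.transform(comments)
    classifier.partial_fit(X, labels, classes=[0, 1])

# Whenever moderators recognize a newly coined dog whistle and act on it, their
# decisions become the next training batch, so the bot catches up on the pattern.
learn_from_moderators(
    ["some newly coined coded insult", "totally ordinary comment"],
    [1, 0],
)
```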

I think my primary concern when pushing for these kinds of algorithms is to make sure we don't overburden moderation teams. I've worked too long in jobs where too much was expected for too little pay, and all the best and brightest left for greener pastures. I think the best way to make moderation rewarding is to automate the most obvious choices. If someone is blasting hate speech, a bot can be very certain that the comment should be hidden, and a moderator can review the bot's decision at a later time if they wish. I just want to get the most boring, repetitive tasks off moderators' plates so they can focus on decisions that actually require nuance.
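As a sketch of the flow I'm picturing - hide only the obvious cases and queue them for human review - something like this (the threshold, names, and toy classifier are made up):

```python
# Hypothetical flow: act automatically only on the most confident cases, record
# why the comment was hidden, and queue it so a human can overturn the call later.
from dataclasses import dataclass

HIDE_THRESHOLD = 0.95  # only hide when the classifier is very confident

@dataclass
class Comment:
    text: str
    hidden: bool = False
    hide_reason: str | None = None

review_queue: list[Comment] = []

def hate_speech_score(text: str) -> float:
    """Stand-in for a real classifier; returns a probability between 0 and 1."""
    return 0.99 if "obvious slur" in text.lower() else 0.01

def auto_moderate(comment: Comment) -> None:
    if hate_speech_score(comment.text) >= HIDE_THRESHOLD:
        comment.hidden = True
        comment.hide_reason = "hate_speech"
        review_queue.append(comment)  # moderators can review the bot's call later
    # anything below the threshold is left alone for humans to judge
```

The point is just that the bot only ever takes the unambiguous actions; everything marginal stays with people.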

Something I really like about what you said was the idea of promoting choice. I was on a different social media platform recently, one which has a significant userbase of minors and therefore needs fast, over-tuned moderation to limit liability (Campfire, the communication tool for Pokémon Go). I was chatting with a friend, and a comment I thought was mundane got automatically blocked because it contained the word "trash." Now, I think this indicates they are using a low quality AI, because context clues would have shown a better AI that the comment was fine. In any case, I was immediately frustrated because I thought my friend would get the impression that I had said something really bad, since my comment was blocked. Except I soon found out that you can choose to see hidden comments by clicking on them. Without the choice of seeing the comment, I felt hate towards the algorithm. But when presented with the choice of seeing censored comments, my opinion immediately flipped, and I actually appreciated the algorithm: it provides a safe platform where distasteful comments are immediately blocked so the young and impressionable can't see them, but adults are able to remove the block and see the comments if they desire.

I think we can take this a step further and have automatically blocked comments show categories of reasons why they were blocked. For example, I might never want to click on comments that were blocked due to containing racial slurs. But when I see comments blocked because of spoilers, maybe I do want to take a peek at select comments. And maybe for general curse words, I want to remove the filter entirely so that on my device, those comments are never hidden from me in the first place. This would allow for some curating of the user experience before moderators even have a chance to arrive on the scene.
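Roughly the data shape I'm imagining - the categories and per-user choices here are just examples, not an existing feature:

```python
# Illustrative only: each hidden comment carries a category, and each user
# decides how that category renders for them.
from enum import Enum

class BlockReason(Enum):
    RACIAL_SLUR = "racial_slur"
    SPOILER = "spoiler"
    PROFANITY = "profanity"

class FilterAction(Enum):
    ALWAYS_HIDE = "always_hide"        # never even offer to reveal
    CLICK_TO_REVEAL = "click_to_reveal"
    NEVER_HIDE = "never_hide"          # show as if it were never filtered

# Example preferences matching the scenario above.
user_prefs = {
    BlockReason.RACIAL_SLUR: FilterAction.ALWAYS_HIDE,
    BlockReason.SPOILER: FilterAction.CLICK_TO_REVEAL,
    BlockReason.PROFANITY: FilterAction.NEVER_HIDE,
}

def render(comment_text: str, reason: BlockReason | None) -> str:
    if reason is None:
        return comment_text
    action = user_prefs.get(reason, FilterAction.CLICK_TO_REVEAL)
    if action is FilterAction.NEVER_HIDE:
        return comment_text
    if action is FilterAction.CLICK_TO_REVEAL:
        return f"[hidden: {reason.value} - tap to view]"
    return "[hidden]"
```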

On the whole, I agree with you that humans are the ideal. But I am fearful of a future where bots are so advanced, we have no way to tell what is a human account and what is not. Whether we like it or not, moderators may eventually be bots - not because the system is designed that way but because many accounts will be bots and admins picking their moderation staff won't be able to reliably tell the difference.

The most worrisome aspect of this future, in my mind, will be the idea of voting. A message may be hidden because of identified hate speech, and we may eventually have an option for users to vote whether the comment was correctly hidden or if the block should be removed. But if a majority of users are bots, a bad actor could have their bot swarm vote on removing blocks from comments that were correctly hidden due to containing hate speech. Whether it happens at the user level or at the moderator level, this is a risk. So, in my mind, one of the most important tasks we will need AI to perform is identifying other AI. At first, humans will be able to identify AI by the way they talk. But chatbots will become so realistic that eventually, we will need to rely on clues that humans are bad at detecting, such as when a swarm of bots perform similar actions in tandem, coordinating in a way that humans do not.
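One naive way to surface that kind of coordination - purely illustrative, and a real system would need far more signals than votes - is to flag accounts whose voting histories agree almost perfectly across many posts:

```python
# Toy swarm detector: group accounts that vote in near-lockstep across many posts.
# The thresholds and vote format are assumptions for illustration.
def suspected_swarms(votes: dict[str, dict[str, int]],
                     min_overlap: int = 20,
                     min_agreement: float = 0.98) -> list[set[str]]:
    """votes maps account -> {post_id: +1 or -1}; returns groups of suspicious accounts."""
    accounts = list(votes)
    groups: list[set[str]] = []
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            shared = votes[a].keys() & votes[b].keys()
            if len(shared) < min_overlap:
                continue
            agreement = sum(votes[a][p] == votes[b][p] for p in shared) / len(shared)
            if agreement >= min_agreement:
                for g in groups:          # merge into an existing group if possible
                    if a in g or b in g:
                        g.update({a, b})
                        break
                else:
                    groups.append({a, b})
    return groups
```

Humans are bad at noticing this kind of tandem behavior, but it's exactly the thing a detector can count.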

And I think it's important we start this work now, because if the bots controlled by the opposition get good enough before we are able to reliably detect them, our detection abilities will always be behind the curve. In a worst case scenario, we would have a detection bot that thinks the most realistic swarms of bots are all human and the most fake-sounding groups of humans are all bots. This is the future I'm most concerned about heading off. I know the scenario is not palatable, and at this stage it may feel better to avoid AI entirely, but I think bots taking over this platform is a very real possibility, and we should do our best to prevent it.

