!endlesswar@lemmy.ca

Seems to be purely a vehicle for posting misinformation, with repeated claims that Russia is innocent and the US caused the Ukraine situation, that they're stopping Ukraine from agreeing to Russia's super amazing peace deals, etc.

This is the sort of garbage one would expect to find on ML or Hex. Is CA intended to be the same low-quality instance?

[-] Rottcodd@lemmy.world 18 points 3 weeks ago* (last edited 3 weeks ago)

While I agree with your assessment, I'd note that pretty much everyone at this point declares that whatever views they disagree with are "misinformation," so proactively banning things solely because someone has declared that they're "misinformation" isn't a sound strategy.

And again, I agree with that assessment in this case. But that's really beside the point.

[-] WrenFeathers@lemmy.world 16 points 3 weeks ago* (last edited 3 weeks ago)

There's a huge difference between what one thinks is misinformation and what is proven to be misinformation, though. It shouldn't be hard for admins to suss out the difference.

[-] Rottcodd@lemmy.world 2 points 3 weeks ago

There's a huge difference between what one thinks is misinformation and what is proven to be misinformation, though.

Epistemologically, yes. But for all practical purposes, at this point in time, there really isn't, since anyone can find sources that purportedly "prove" that whatever they want to believe is true and/or that whatever they don't want to believe is "misinformation." It makes absolutely no difference what the claim in question is - somebody somewhere online has "proven" that it's true, and somebody else somewhere online has "proven" that it's not.

So what that means is that to avoid the trap of endlessly dueling contradictory claims, somebody is going to have to simply decree what is or is not to be considered to be true - which sources and purported proofs are legitimate and which are not - and that's where it inevitably goes wrong.

And in fact, to go all the way back to the start of this thread, that's exactly how hexbear and ml work. They maintain their bubbles by essentially arbitrarily decreeing that [this] is true and [that] is misinformation. And if you press them on it, they're more than willing to post links to the "proof."

[-] howrar@lemmy.ca 1 points 3 weeks ago

How can any of this actually be proven to be misinformation? We're here on our couches reading second/third/nth hand information. None of us were in the rooms where these decisions were made. None of us are on the front lines. The best we can do is make an educated guess on who is a credible source, and that's especially difficult when everyone involved has an interest in lying about the situation when things don't go their way.

[-] sunzu2@thebrainbin.org 1 points 3 weeks ago

I am with you...

OP, if you see misinformation correct the record so that everyone else can see there is a dispute and they can make up their own mind.

Hard suppression of opinions is not good, IMHO.

Let idiots speak and let their ideas be challenged.

Tankies and political subs expose their bias via the modlog and that's the point.

Challenge them and let them use that hammer, so everyone can see their idiotic positions for what they are.

[-] running_ragged@lemmy.world 2 points 3 weeks ago

Hard suppression of opinions is not good, IMHO.

Tankies and political subs expose their bias via the modlog and that’s the point.

If the modlog is that questionable, they should be given a fair chance to provide receipts for their censorship of users trying to correct the record, and if they can't do that fairly, it should be on the admins to remove a community that is both spreading misinformation and censoring verifiable corrections.

I feel it shouldn't be only up to the users filtering / monitoring modlogs of all the communities.

[-] KingOfTheCouch@lemmy.ca 13 points 3 weeks ago

First let me say: I'm glad you're making this observation. The shitastic propaganda machines need to be called out.

Now second, I have a point to make, but I should give some context first. I already blocked that community and many others about politics and war and crap because I'm just tired of all of it. That said, before I did, it didn't seem like endless war was all (or even mostly) anti-Ukraine in nature. It did seem anti-US, but I wasn't about to look into that statistically - just an observation based on the titles of posts that percolated up my feed from time to time.

Now, I'm not an admin here, but if I were (based on past experience adminning forums, chatrooms and reddit subs), I would ask for citations and proof of the problem. It's all well and good to just say "go look for yourself", but people have jobs and other interests. I would regularly ignore these things, but if someone went to the effort to present proof? Then you had my interest. And don't reply to me with it - edit it back into your original post. Like I say, I've blocked the community and moved on.

I would want to know whether there are multiple accounts posting this stuff, or whether it's one or two "power users" posting the majority of it. Were the mods of the sub alerted to it? (Obviously a moot point if it's the mods posting...) Showing the response from the mods will really sell what tone they're aiming for in the sub. Hell - if they have it written in their community header info that they are there to pander to Putin, that's evidence too. "Seems to be", as you wrote, is highly conjectural.

Anyway, that was my two cents (and, yes, with the state of the dollar and the fact we have no pennies anymore that means it's worth fuck all.) I wish you good luck in your mission to expose the assholes of the world - we do need to stay vigilant.

And also because I might not have been clear: fuuuuuuuuuck russia.

[-] Nils@lemmy.ca 10 points 3 weeks ago

If you check the history of lemmy.ca, you will see that this intolerant propaganda/misinformation is cyclical. One user disappears and another comes in. Sadly, it usually takes a few months before something happens to them.

The mod of the community you shared created an account and, on the same day, started posting propaganda while tiptoeing around the instance rules. Before that, it was another user posting the same content from the same sources, with the same tactics, until they went a bit too far and were banned.

They always do the same thing: create an account, create a community with a "normal" name, like geopolitics, and start spreading stuff. I would not be surprised if it is an agency doing this kind of thing: I cannot imagine someone being evil enough to do it willingly; they might be either coerced or dependent on the money.

I do not mind when the propaganda is benign, like bots posting random video game or Canadian news, but I draw the line on intolerance.

Another of these intolerant people's tactics (tiptoeing) is that they never directly call for people to be killed in their comments. They frame it as a consequence of the victims' actions: they deserve to die. For me, that is just as bad, if not worse, because they know what they are doing is wrong, and they are trying hard not to get caught.

I hope the instance comes up with a better and swifter way to deal with these kinds of problems.

[-] avidamoeba@lemmy.ca 6 points 3 weeks ago

Unlike Xitter and Reddit, where black-box algorithms spread information into users' feeds, Lemmy uses people's votes to increase or decrease proliferation. It seems to me that the posts in that community aren't going anywhere, given how people have voted on them. The primary filter seems to work as expected. Maybe there isn't a need for another.

[-] Deceptichum@quokk.au 9 points 3 weeks ago* (last edited 3 weeks ago)

Having a less popular Nazi bar is still having a Nazi bar.

And communities take time to grow; it's only a month old. There isn't any benefit in letting it fester until it's a bigger threat.

[-] avidamoeba@lemmy.ca 4 points 3 weeks ago* (last edited 3 weeks ago)

If the litmus test is not having a Nazi bar at all, I don't think that will ever pass unless we gate community creation, and on the wider Fediverse it'll never happen. I think it will always be about how unpopular the bar is, and we should use that as the litmus test. The scalable approach is people's votes, and personally that's why I'm on Lemmy.

[-] Deceptichum@quokk.au 4 points 3 weeks ago

So you suggest never trying to do anything, because you can’t be sure it will 100% be gone forever?

Think of it like a weed: when you see one, you pull it out and move on with your day. Gardens require constant tending.

[-] avidamoeba@lemmy.ca 3 points 3 weeks ago* (last edited 3 weeks ago)

I'm not suggesting that. I'm suggesting that we're already doing something and it seems effective. It seems to me the democratic process we have is already pulling the weed out.

[-] avidamoeba@lemmy.ca -2 points 3 weeks ago

I won't be mad if the admins delete that community, but it seems we're already controlling how far it gets. If I saw high upvote rates, which make misinfo spread widely, then I'd say we're not doing a good enough job through the standard process and perhaps additional action is needed.

[-] iii@mander.xyz 2 points 3 weeks ago

Sometimes the cure is worse than the disease.

[-] Deceptichum@quokk.au 3 points 3 weeks ago

Deleting a community is worse than Russia invading Ukraine?

Huh good to know.

[-] iii@mander.xyz 1 points 3 weeks ago

Let's examine the two possible actions: (1) banning the community, and (2) not banning the community. Neither (1) nor (2) has any influence on Russia invading Ukraine. That has already happened and is still happening.

So neither action is worse than the invasion.

[-] Deceptichum@quokk.au 8 points 3 weeks ago

Russian propaganda has an impact on it.

Deliberately allowing yourself to be a hub to spread it is ridiculous.

[-] iii@mander.xyz -3 points 3 weeks ago* (last edited 3 weeks ago)

Russian propaganda has an impact on it.

I totally agree.

Deliberately allowing yourself to be a hub to spread it is ridiculous.

I think this is our main point of disagreement. In my opinion, censorship is a bad way to address misinformation. It creates parallel bubbles of thinking, each growing more confident, from its own point of view, that it knows everything.

[-] Nils@lemmy.ca 4 points 3 weeks ago

I do not understand people here defending misinformation/intolerance as meriting discussion. It's a dichotomy of naivety and complicity.

People spreading misinformation and intolerance are not here for healthy argument; you just need to check their history to see their dishonesty and ill temper.

Meanwhile, accounts like the one OP highlighted just create trouble for mods of other instances to solve.

[-] Rentlar@lemmy.ca 4 points 3 weeks ago* (last edited 3 weeks ago)

The problem is, who is the arbiter of that? There are essentially 3 types of moderation styles here:

Laissez-faire: Let people do whatever as long as it doesn't actively hurt anyone. People can govern themselves and serious incidents are expected to be reported and dealt with. Some jerks will tiptoe around the rules but will eventually get caught. Lemm.ee, lemmy.ca and some others follow this.

Casual enforcement of admin philosophy: most topics outside of politically contentious ones are not strictly monitored. Mods/admins will root out communities, comments and posts that actively go against the narrative, particularly on threads about political topics like Ukraine, Palestine, etc. Lemmy.world and lemmy.ml follow this.

Strict enforcement of admin-philosophy: do not tolerate any potentially harmful statements (to that instance's narrative or vibe). Any violation will be removed and repeated violations get you banned. This philosophy can be reasonable like Beehaw.org, which I think works very well for them and makes it a welcoming safe space, because there is no tolerance for bigotry and jerks. It can also be unreasonable like lemmygrad.ml, where dissent to the pro-Russian narrative is swiftly dealt with.

If they follow the latter two styles of moderation, admins of other instances should ban users who go against their philosophy from reaching their servers. That's how it is with federation; sometimes different instances have conflicting philosophies (the vegan one, for example). It's up to each admin to decide whether a foreign Fediverse user belongs in their kingdom. The moderation style lemmy.ca has lets it be a good neutral place to discuss various drama and lore from other servers.

[-] Nils@lemmy.ca 4 points 3 weeks ago

The problem is, who is the arbiter of that?

Intolerance is well-defined in many languages, and, lest anyone think I am talking about lactose intolerance, hate crimes are defined in many legal codes across the globe, including Canada's. There is no need for a philosophical discussion of what "intolerance" is.

You don't need a linguistics expert to realize someone's discourse is ill-intentioned when the semantics of "the victim deserves to suffer" are the same as a call to action.

For countries that depend on common law: the account in question was already punished on other instances, creating precedent.

The modus operandi of these kinds of accounts is also well known and documented. And popularity contests should not be the tool that defines what is right on an online platform where there is no real accountability. How many upvotes do you think a single worker in a troll farm can generate in a couple of minutes?

We should not depend on admins' moods (philosophies, as you suggest) for results, but I agree that we should help when and where we can; their volunteer work is invaluable for the health of the instance.

I think the discussions worth having in these kinds of posts are about methods, and about checks and balances to prevent bad decisions by people in power and ensure that people are treated fairly.

Methods are many, and there are many examples out there.

  • would Twitter-like community notes solve some of these problems or create more? Would the Lemmy repo accept such a PR?
  • the Twitter vs. Brazil problem: is it worth locking accounts while an investigation is pending? One of them was instigating machete attacks on schools/nurseries. When would such a lock be OK, and when not?
  • how long should people have to complain/report before something (an investigation, a lock, or a conclusion) happens? The account we both mentioned (not in this thread, but in this post) went on for 2 months before being banned - they did not leave on their own. ...
[-] Rentlar@lemmy.ca -1 points 2 weeks ago

Sure, we should not tolerate intolerance; "No Bigotry" is rule #1 here, so if you see that, please report it. Misinformation, though? That's the main thing OP is talking about, and the few examples they gave are propaganda, but not intolerance.

[-] Nils@lemmy.ca 1 points 2 weeks ago* (last edited 2 weeks ago)

I feel like you are arguing with me about OP's points. I am not sure if it is a Lemmy error, but the comment of mine you first replied to was:

I do not understand people here defending misinformation/intolerance as a merit of discussion. The dichotomy of naive or complicity.

People spreading misinformation and intolerance are not here for healthy arguments, you just need to check their history to see their dishonesty and ill temper.

In the meanwhile, accounts like the one OP highlighted are just creating trouble for mods of other instances to solve.

I don't think you are here defending that person's acts, being complicit, defending misinformation/intolerance with malicious intent, or being disingenuous with semantics. So, in the interest of healthy discussion, I'll continue.

You don't need to go far into that person's history to see examples of their dishonesty and ill temper, if that is the hill you choose to defend. You might need special privileges to see their removed content on other instances.

From your message (sorry if I mistook your words the first time), I imagine now that you were not saying intolerance, but misinformation, as in:

who is the arbiter of "misinformation"

In that case,

Canada might be a little behind on misinformation laws; it has always lagged when the subject involves technology. But it defines the types very well (MDM, they call it), qualifies the damages, and campaigns to raise awareness and minimize the effects. https://www.cyber.gc.ca/en/guidance/how-identify-misinformation-disinformation-and-malinformation-itsap00300 https://www.canada.ca/en/campaign/online-disinformation.html

"Misinformation" is serious, causes harm, and should not be used interchangeably with "disagreement".

Just because OP is complaining about misinformation does not make it any less severe than intolerance when it is used for the same goal: to cause harm.

Even before technology, we had laws and procedures for harmful discourse, be it intolerance or misinformation; technology just makes things different.

That is why I was suggesting a discussion of well-defined and transparent methods to deal with them, that should be constantly reviewed and improved.

Edit: bold line

[-] Rentlar@lemmy.ca 1 points 2 weeks ago* (last edited 2 weeks ago)

That's reasonable. It's my bad that I was unclear. It's fine for you to argue against spreading intolerance, but I'd point out that the main topic of the post is misinformation, and even though, as you rightly argue, the two often share purposes and goals, and I agree with you that both should have clear boundaries set here as to what is allowed and disallowed, they are distinct concepts. To be clear, I'm not drawing the distinction between MDM and intolerance to excuse either of them. Misinformation is bad too, and I agree we should inform and root it out where we find it.

However, the banhammer is a tool that can make any comment look like a nail, so care should be taken when it is used. Conflating the removal of clearly intolerant takes with the removal of possibly misinformed takes, when it comes to enforcement actions, would be viewed as mod/admin abuse and would lower users' trust in that server's admins.

The main example from the OP is the endlesswar community. The user there pushes takes not fully related to "endless war" but drawn from other sources, questionable as some of them may be if we were to analyze each carefully. A separate example is a comrade I have seen around Lemmy since I joined: https://lemmy.ml/u/yogthos. This user has been constantly pushing narratives, to the point that one might think they are paid to do it. Over the past couple of years, they have become far more careful to avoid getting banned for intolerant takes, and now selectively post articles and graphs that support a specific narrative.

Do these users, or the users who might post a misinformed take within the power-users' posts, deserve bans? Do we analyze every comment, post and news source and remove those that meet the criteria for MDM? Do we keep a whitelist/blacklist to only permit links to reputable news sites server-wide (to stop someone from creating a community where they allow themselves to post from wherever)? Lemmy.world's news communities had a Media Bias Fact Check bot that was rather inaccurate and very unpopular.

That is why I was suggesting a discussion of well-defined and transparent methods to deal with them, that should be constantly reviewed and improved.

I support a thorough discussion on how best to deal with it both locally and across the Fediverse. It's not "not a problem", but at the moment I don't see any fair solutions that don't rely on an undue amount of mod/admin discretion, besides removing intolerant takes and downvoting misinformed takes.

E: One solution could be something like SlrPnk's Pleasant Politics, which instituted an AI moderator that checks behaviour and issues temp bans for bad behaviour it detects. I'm still a little skeptical of it, as to me it falls under "undue amount of mod/admin discretion", but at least it takes a lot of the tiring work off the admins.

[-] Nils@lemmy.ca 2 points 2 weeks ago

I imagine that the discussion we are having would be more beneficial to c/main

However the banhammer is a tool that can make any comment look like a nail,

That is why methods are important: if your only tool is a hammer, the screws will look like nails. And people waiting for a solution will be expecting a "thunk".

In real life, people do not get arrested for reposting fake news. Well, maybe if you call a war a "war" in some countries. Correcting myself: in real life, people should not get arrested for reposting fake news.

Many people share them because they do not know better, or are afraid, or for many other reasons. I have so many examples in my family. Usually, speaking with them with compassion and understanding, in plain shared language, works.

But there are people that benefit from it, instigators, bad actors. How long do you think it should be allowed to fester before you get to a point of no return?

From my experience, the places doing it properly, without installing a censorship state, are the ones with well-defined and transparent processes. They do proper investigations, work with the community, and take proper action against bad actors. Canada is not far from achieving it; it needs work, and I wish it were faster.

You cannot expect an online community that depends on volunteer work to have the same level of scrutiny. I don't even know if it would be possible to create some sort of committee to oversee lemmy.ca, as is common in some forums and other open-source communities.

Do these users, or the users that might post a misinformed take within the power-users’ posts deserve bans? Do we analyze every comment, post and news-source and remove those that meet the criteria for MDM? ....

Yeah, those are the questions that need to be discussed! And plans made around them.

I have facts, experience, and opinions. For one, I am averse to mass scanning, even more so without proper methods, but I have been proven wrong many times; if people think it is the right way to go, I might as well understand it better and help where I can.

Back to the community from OP's post:

I imagine you have been on lemmy.ca long enough to remember Geopolitics, which tried to appear more neutral, but the mod there would pin all his posts to hide other people's posts, or just delete them. That account also kept accumulating bans across instances until it was fully banned from this one. Endless war does not try to be neutral, and its mod has accumulated an even longer rap sheet, in less than 2 months.

I understand that a human can read their posts, analyze their actions, and see that they are intentionally acting in bad faith, arguing dishonestly, and being ill-tempered when talking to people. I don't think an AI can classify this kind of thing consistently yet. But in a quantitative analysis, there are enough bans and enough removed content to warrant at least an investigation and a warning.

...Over the past couple years, they have become far more careful to avoid...

Wow, that user was banned in so many communities and had a lot of content removed over time, including in ml.

They post many memes, but their reactions to people's comments are not the most amicable; they take everything as a direct offence. That could surely be improved.

[-] Railcar8095@lemm.ee 1 points 1 week ago

I ended up there accidentally. The mod u/humanspiral is completely deranged. He banned me with the comment "pure evil NAFO scum" for saying "Real coincidence that Nazis and pro-Russians are butthurt about this. Makes you wonder…" regarding the Romanian nullification of the first round.

Amazingly, he's at least not going on a ban spree, even though all the comments are negative. All the removed content is his own misinformation!

[-] Rentlar@lemmy.ca -1 points 3 weeks ago

We have had multiple communities of that nature, all heavily downvoted. A previous example was Geopolitics where Russian and Conservative narratives were pushed daily. Eventually the creator, poster and community mod gave up, left and deleted it.

You can always block communities or instances that you don't want to see. Every instance has its own policy on what types of communities are allowed, and how strict they must align with admin values.

I frequently disagree with Russian apologists, but pre-emptively restricting viewpoints I don't like doesn't make for good discussion, even if in my eyes many of the arguments are on dubious ground. You see many complaints about lemmy.world and lemmy.ml admins enforcing certain policies and worldviews sitewide, which is fine for them to do, but not every server has to subscribe to that. Some servers restrict community creation to mods/admins; some, like beehaw.org, have a limited but curated set of communities. !conservative@lemm.ee is a hotbed of clown-take articles; that doesn't mean I think it should be banned.

If you see posts with harmful misinformation, or harmful behaviour by the mod of such a community, please report it to the lemmy.ca admins. Demonstrating a pattern of harmful behaviour with evidence will get the mod and community banned.

I vehemently disagree with an article posted there claiming that having their experimental Mach-whatever missiles means Russia and China are going to get everything they want in a conflict; to me it's a total bluff. But it was written by a Canadian, so it would seem it belongs there, even if it is indeed a blatantly pro-Russian narrative.

Perhaps as a solution going forward: new communities start as private communities for the instance only, and then upon admin review and approval can be federated to other instances?

this post was submitted on 27 Nov 2024
26 points (81.0% liked)

Lemmy.ca Support / Questions


Support / Questions specific to lemmy.ca.

For support / questions related to the lemmy software itself, go to !lemmy_support@lemmy.ml

founded 4 years ago