this post was submitted on 11 Jun 2025
88 points (100.0% liked)

Fediverse vs Disinformation


Pointing out, debunking, and spreading awareness about state- and company-sponsored astroturfing on Lemmy and elsewhere. This includes social media manipulation, propaganda, and disinformation campaigns, among others.

Propaganda and disinformation are a big problem on the internet, and the Fediverse is no exception.

What's the difference between misinformation and disinformation? The inadvertent spread of false information is misinformation. Disinformation is the intentional spread of falsehoods.

By equipping yourself with knowledge of current disinformation campaigns by state actors, corporations and their cheerleaders, you will be better able to identify, report and (hopefully) remove content matching known disinformation campaigns.


Community rules

Same as instance rules, plus:

  1. No disinformation
  2. Posts must be relevant to the topic of astroturfing, propaganda and/or disinformation


I think we all know by now that major social media platforms in the West are the target of multiple astroturfing and psyop campaigns by both private and state actors.

This post, while obvious in its implication, is important: it is the first time in recent memory that I have seen this fact discussed on a major site without it drawing a large volume of accusations of conspiratorial thinking. I think there is also an important meta-discussion to be had about our role, as fediverse denizens, in combating such campaigns.

Obviously, we don't have the manpower to oppose things like this directly. There is also the unfortunate reality that we are not as immune here as we might like to think. I personally believe the fediverse is likely subject to similar astroturfing, and that to believe otherwise is naive. However, even if sites like Lemmy are not major targets, we are still subject to a trickle-down effect from the major social media platforms: popular opinion here will be swayed indirectly by these campaigns regardless of whether we are targeted specifically.

How can we protect our communities and more importantly our societies?

[–] Novocirab@feddit.org 10 points 4 days ago* (last edited 4 days ago) (3 children)

Astroturfing on the fediverse will probably take a different form for the time being: since people here are politically minded to an above-average degree, with a robust tendency to the left, the attempts will probably be aimed mostly at distracting, derailing, and sowing discord and doubt, stifling any nascent initiative rather than garnering sympathy for anything in particular. Much like what is described in this post from yesterday.

How to guard against this... It feels to me like it will help vastly if as many of us as possible are also engaged in Matrix/Element channels, i.e. channels specific to instances or channels specific to topics (computing, politics, ecology...), especially those of us who run popular communities. This would strengthen an implicit "web of trust": over time, people build a better impression of whom they're dealing with (after all, it's one thing to publish astroturfing posts, but quite another to simultaneously maintain semi-personal relationships in a chatroom without ever raising doubts about your earnestness). Also, whenever some of us erroneously start to mistrust each other for whatever reason, being in touch over a second channel gives us a better chance of sorting things out before a lasting rift forms.

[–] Corgana@startrek.website 4 points 4 days ago

Reddit mods can actually sniff out astroturfing pretty easily, but Reddit Inc. doesn't do much to stop it. On the Fediverse, admins can simply ban offenders from the instance, and if an instance does a poor job of removing inauthentic content, other instances can defederate from it.

[–] Kyrgizion@lemmy.world 3 points 4 days ago

I'd say that Lemmy's current userbase is highly reminiscent of early Reddit's userbase (pre-2012 or so).

[–] Maeve@kbin.earth 1 points 4 days ago (1 children)
[–] Novocirab@feddit.org 3 points 4 days ago* (last edited 4 days ago) (1 children)

To be clear, what I proposed above doesn't give full protection against targeted false-flag campaigns (what does?). But it does raise the personnel costs required for such campaigns to succeed, and it gives us a better chance of avoiding devouring ourselves out of false suspicions.

[–] Maeve@kbin.earth 2 points 4 days ago (1 children)

Maybe. I was in those chats and paranoia and suspicion abounded before Sabu showed up. Not that that's entirely bad, but it didn't prevent Sabu, just saying.

ETA: I personally am not really interested in participating in chats anymore. Not saying I never would, just that I need more IRL rn.

[–] Novocirab@feddit.org 2 points 4 days ago (1 children)

I'll also add that what I have in mind is discussions about politics and political strategies. If I read you right, the chats you mention were dealing with activities whose legality was at least questionable (in which case heavy paranoia among those involved would probably be inevitable).

[–] Maeve@kbin.earth 3 points 4 days ago

They were largely political. Anything "criminal" discussed in the beginning was about how to give regular people more access to information, and real solutions to RL problems. As those channels grew, ideas necessarily diversified, some more radical, some pretty vanilla.

Power criminalizes anything that may lead to a concession of that power. Something something asked nicely, etc. And as those channels grew, so too did the number of bad actors with ill intent from the jump, whether LE, political disruptors, or outright chaos goblins.

In short, nothing is risk-free, but LE is more of a threat than any other bad actor, because protests will be criminalized, mutual aid will be criminalized, reporting will be criminalized, recording, anything. It already is, de facto if not in writing. But it serves no one to demonize everyone.