submitted 1 month ago* (last edited 3 weeks ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

We're aware of ongoing federation issues for activities being sent to us by lemmy.ml.

We're currently working on the issue, but we don't have an ETA right now.

Cloudflare is reporting 520 - Origin Error when lemmy.ml tries to send us activities, but the requests don't seem to properly arrive at our proxy server. Federation with all other instances is working fine so far, although we have seen a few other requests, unrelated to activity sending, that occasionally report the same error.
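
For context, Cloudflare returns a 520 when it doesn't get a valid (or any) response from the origin. A rough sketch of the kind of check that helps tell whether a request even reaches the origin at all - the origin address below is a placeholder, not our actual setup:

```python
# Rough diagnostic sketch, not our actual tooling. ORIGIN_IP is a placeholder
# (TEST-NET address); lemmy.world's real origin address is not public.
import requests

URL_VIA_CLOUDFLARE = "https://lemmy.world/api/v3/site"
ORIGIN_IP = "192.0.2.10"  # placeholder, substitute the real origin address

def probe(url, headers=None, verify=True):
    """Return the HTTP status and Cloudflare ray ID (if any) for a GET request."""
    try:
        r = requests.get(url, headers=headers, timeout=10, verify=verify)
        return r.status_code, r.headers.get("cf-ray", "-")
    except requests.RequestException as exc:
        return None, str(exc)

# Through Cloudflare: a 520 here, combined with no matching entry in the
# proxy logs, suggests the request never completed against the origin.
print(probe(URL_VIA_CLOUDFLARE))

# Straight at the origin, overriding only the Host header (certificate
# verification disabled because the certificate won't match the bare IP).
print(probe(f"https://{ORIGIN_IP}/api/v3/site",
            headers={"Host": "lemmy.world"}, verify=False))
```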

~~Right now we're about 1.25 days behind lemmy.ml.~~

You can still manually resolve posts in lemmy.ml communities, or comments by lemmy.ml users in our communities, to make them show up here without waiting for federation, but this is obviously no replacement for regular federation.
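
If you'd rather script this than paste URLs into the search box, something along these lines against Lemmy's /api/v3/resolve_object endpoint should work. This is only a sketch: the token handling and the example lemmy.ml URL are placeholders.

```python
# Sketch of manually resolving a remote object through our instance, using
# Lemmy's /api/v3/resolve_object endpoint (bearer-token auth as in Lemmy 0.19).
# The token and the example lemmy.ml URL below are placeholders.
import requests

INSTANCE = "https://lemmy.world"
JWT = "your-login-token-here"

def resolve(remote_url: str) -> dict:
    """Ask our instance to fetch and store a remote post/comment/community/user."""
    r = requests.get(
        f"{INSTANCE}/api/v3/resolve_object",
        params={"q": remote_url},
        headers={"Authorization": f"Bearer {JWT}"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()

# Example: pull in a lemmy.ml post that hasn't federated to us yet.
print(resolve("https://lemmy.ml/post/123456"))
```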

We'll update this post when there is any new information available.


Update 2024-11-19 17:19 UTC:

~~Federation is resumed and we're down to less than 5 hours lag, the remainder should be caught up soon.~~

Unfortunately, the root cause has still not been identified.


Update 2024-11-23 00:24 UTC:

We've explored several approaches to identify and/or mitigate the issue, including replacing our primary load balancer with a new VM, updating HAProxy from the latest version packaged in Ubuntu 24.04 LTS to the latest upstream version, and finding and removing a configuration option that may have prevented certain errors from being logged. So far we haven't made real progress beyond ruling out various potential causes.
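
To give an idea of what such a logging option can look like (this is an illustration only, not our actual configuration, and not necessarily the exact directive involved): HAProxy has directives like `option dontlognull` that suppress log entries for certain connections, which can hide exactly the kind of failed request we are trying to track down.

```
# Illustration only (HAProxy syntax) - not lemmy.world's actual configuration,
# and not necessarily the exact directive that was involved.
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/
    log global
    # "option dontlognull" suppresses log entries for connections that never
    # exchanged any data - convenient against port scans, but it can also hide
    # requests that die before producing a response. Leaving it out makes
    # those connections show up in the logs again.
    # option dontlognull
    default_backend lemmy
```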

We're currently waiting for the lemmy.ml admins to be available to reset federation failures at a time when we can start capturing some traffic, to get more insight into what is actually hitting our load balancer, as the problem seems to be either between Cloudflare and our load balancer or within the load balancer itself. Due to real-life time constraints we weren't able to find a suitable time this evening; we expect to continue with this tomorrow during the day.

As of this update we're about 2.37 days behind lemmy.ml.

We are still not aware of similar issues on other instances.


Update 2024-11-25 12:29 UTC:

We have identified the underlying issue: a backported fix for a bug causing crashes in certain circumstances was accidentally reverted when another backport was applied. We have applied the patch again and are receiving activities from lemmy.ml once more. It may take an hour or so to catch up, but this time we should reliably get there. We're currently 4.77 days behind.

We still don't have an explanation for why requests that went through Cloudflare were missing from the HAProxy logs, but this shouldn't cause any further federation issues.


Update 2024-11-25 14:31 UTC:

Federation has fully caught up again.

Deestan@lemmy.world 12 points 1 month ago

Do these things usually happen from time to time?

I've noticed some lemmy.ml communities looking surprisingly "dead" some days here and there but not thought much of it.

MrKaplan@lemmy.world 10 points 1 month ago

I wouldn't say usually, but they can happen from time to time for a variety of reasons.

It can be caused by overly aggressive WAF (web application firewall) configurations, proxy server misconfigurations, bugs in Lemmy, and probably a few other things.

Proxy server misconfiguration is a common one we've seen other instances run into from time to time, especially when federation works between Lemmy instances but e.g. Mastodon -> Lemmy does not, because the proxy configuration only matches Lemmy's specific behavior rather than any spec-compliant request.
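
To illustrate what that kind of misconfiguration can look like (HAProxy-style syntax; the backend names are made up and this isn't any instance's real configuration):

```
# Illustration only (HAProxy syntax) - not any instance's real configuration.
# Too strict: only the exact Accept header value Lemmy itself sends gets
# routed to the API backend.
acl ap_exact req.fhdr(accept) -m str -i application/activity+json
use_backend lemmy-api if ap_exact

# Other spec-compliant software (Mastodon, for example) sends a combined
# Accept header listing several media types, so the exact match above misses
# it and those requests fall through to the UI backend instead. A substring
# match on the ActivityPub media types is more forgiving:
acl ap_json   req.hdr(accept) -m sub -i activity+json
acl ap_ldjson req.hdr(accept) -m sub -i ld+json
use_backend lemmy-api if ap_json or ap_ldjson
```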

Overly aggressive WAF configurations are usually the result of instances being attacked or overloaded, either by DDoS or by aggressive AI service crawlers.

Usually, when there are no configuration changes on either side, issues like this don't just show up randomly.

In this case, there was a change on the lemmy.ml side, and we don't believe any change on our side falls into the time frame when this started happening (we don't have the exact date for when the underlying issue first appeared). The behavior on the sending side might have changed with the Lemmy update, and other instances might simply happen not to be affected. We currently believe this is likely just exposing an issue on our end that already existed prior to the changes on lemmy.ml, except that the specific logic wasn't previously being used.
