"Antiyanks" is back at it again and has switched tactics to spamming a massive number of comments in a short period of time. In addition to being annoying (and sad and pathetic), it's having a deleterious effect on performance and drowns out any discussions happening in those posts. That spam also federates as well as the eventual removals, so it's not limited to just the posts being targeted.

Looking at the site config for the home instance of the latest ~~two~~ three alts, the rate limits were all 99999999. πŸ€¦β€β™‚οΈ

Rate limits are a bit confusing, but they mean: X requests per window of Y seconds, per client IP address.

The comment API endpoint has its own, dedicated bucket. I don't recall the defaults, but they're probably higher than you need unless you're catering to VPN users who would share an IP.

Assuming your server config is correctly passing the client IP via the XFF header, 20 calls to the /create_comment endpoint per minute (60 seconds) per client IP should be sufficient for most cases, though feel free to adjust to your specific requirements.

Edit: A couple of instances accidentally set the "Messages" bucket too low. That bucket is a bit of a catch-all for API endpoints that don't fit a more specific bucket. You'll want to leave that one relatively high compared to the rest. It's named "Messages" but it covers far more than just DMs.
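
For anyone with database access who prefers to set this directly rather than through the site settings UI, a rough sketch might look like the following. The comment and comment_per_second column names are an assumption based on the message / message_per_second naming pattern, so check them against your local_site_rate_limit table before running anything:

    -- Sketch only: the Comments bucket column names are assumed, not confirmed.
    -- 20 comment submissions per 60-second window, per client IP,
    -- with the catch-all "Messages" bucket left relatively high.
    UPDATE local_site_rate_limit
    SET comment            = 20,
        comment_per_second = 60,   -- window length in seconds, despite the name
        message            = 300,  -- illustrative; just keep this one high
        message_per_second = 60
    WHERE local_site_id = 1;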

[–] Sal@mander.xyz 4 points 12 hours ago (1 children)

So, a β€˜Comments’ Rate limit: 10, Per second: 60, means a maximum of 10 comments per minute, correct?

Correct, per client IP.

Setting the limits to more reasonable values, like '20 posts per minute', causes the server to stop serving posts. My front page goes blank.

So, I am starting to think that '20 posts per minute' means 'requesting 20 posts per minute' and not 'creating 20 posts per minute'.

I am still having doubts about what these limits mean, but setting reasonable numbers seems to break things, unfortunately.

[–] admiralpatrick@lemmy.world 6 points 11 hours ago (1 children)

I replied to your other comment, but the most likely cause is the API server not getting the correct client IP. If that's not set up correctly, it will think every request is from the reverse proxy's IP and trigger the limit.

Unless they're broken again. Rate limiting seems to break every few releases, but my instance was on 0.19.12 before I shut it down, and those values worked.

[–] Sal@mander.xyz 4 points 11 hours ago (1 children)

Thanks! Yes, I saw both messages and I am now going through the NGINX config and trying to understand what could be going on. To be honest, Lemmy is the hobby that taught me what a 'reverse proxy' and a 'VPS' are. Answering a question such as 'Are you sending the client IP in the X-Forwarded-For header?' is probably straightforward for a professional, but for me it involves quite a bit of learning πŸ˜…

At location /, my nginx config includes:

      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

So, I think that the answer to your question is probably 'yes'. If you did have these rate limits and they were stable, the more likely explanation is that something about my configuration is sub-optimal. I will look into it and continue learning, but I will need to keep my limits a bit high for the time being and stay alert.

[–] admiralpatrick@lemmy.world 3 points 11 hours ago* (last edited 11 hours ago) (1 children)

Yeah, you are setting it, but that assumes the variable $proxy_add_x_forwarded_for contains the correct IP; the config itself is correct. Is Nginx receiving traffic directly from the clients, or is it behind another reverse proxy?

Do you have a separate location block for /api by chance, and is the proxy_set_header directive set there, too? Unless I'm mistaken, location blocks don't inherit that from the / location.

[–] Sal@mander.xyz 3 points 11 hours ago (1 children)

Yes, I see this there. Most of the nginx config is from the 'default' nginx config in the Lemmy repo from a few years ago. My understanding is somewhat superficial - I don't actually know where the variable '$proxy_add_x_forwarded_for' gets populated, for example. I did not know that this contained the client's IP.

    # backend
    location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
      proxy_pass http://0.0.0.0:8536/;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";

      # Rate limit
      limit_req zone=mander_ratelimit burst=30000 nodelay;

      # Add IP forwarding headers
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

I need to do some reading 😁

[–] admiralpatrick@lemmy.world 4 points 11 hours ago (1 children)

https://nginx.org/en/docs/http/ngx_http_proxy_module.html

$proxy_add_x_forwarded_for is a built-in variable that either appends the client address to the existing X-Forwarded-For header, if present, or sets the XFF header to the value of the built-in $remote_addr variable.

The former case would be when Nginx is behind another reverse proxy, and the latter case when Nginx is exposed directly to the client.

Assuming this Nginx is exposed directly to the clients, maybe try changing the bottom section like this to use the $remote_addr value for the XFF header. The commented-out line is just there to make rolling back easier. Nginx will need to be reloaded after making the change, naturally.

      # Add IP forwarding headers
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-For $remote_addr;

[–] Sal@mander.xyz 3 points 10 hours ago (1 children)

Thanks!

I was able to crash the instance for a few minutes, but I think I have a better idea of where the problem is. The $remote_addr variable seems to work just the same.

In the rate limit options there is a limit for "Message". Common sense told me that this means 'direct message', but setting it to a low number is quite bad. While testing I eventually set it to '1 per minute' and the instance became unresponsive until I modified the settings in the database manually. If I give that setting a high number, I can adjust the other settings without problems.

[–] admiralpatrick@lemmy.world 3 points 10 hours ago (1 children)

"Message" bucket is kind of a general purpose bucket that covers a lot of different endpoints. I had to ask the lemmy devs what they were back when I was adding a config section in Tesseract for the rate limits.

These may be a little out of date, but I believe they're still largely correct:

[–] Sal@mander.xyz 3 points 10 hours ago (2 children)

So, ultimately my problem was that I was trying to set all of the limits to what I thought were "reasonable" values simultaneously, and I had misunderstood what 'Message' meant, so I ended up breaking things without the reason being obvious to me. I looked into the source code and can see now that 'Messages' indeed refers to API calls in general, not direct messages, and that there is no separate 'Direct Message' rate limit.

If I leave 'Messages' high, I can set the other buckets to reasonable values and everything works fine.

Thanks a lot for your help!! I am surprised and happy it actually worked out and I understand a little more 😁

[–] BlueEther@lemmy.nz 2 points 9 hours ago (1 children)

Hi, I think I set the Messages limit too low as well and now no.lastname.nz is down. Any pointers on how to fix it with no frontend?

[–] admiralpatrick@lemmy.world 3 points 8 hours ago* (last edited 8 hours ago) (1 children)

If you have DB access, the values are in the local_site_rate_limit table. You'll probably have to restart Lemmy's API container to pick up any changes if you edit the values in the DB.
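
If you want to check what you're working with first, something like this from psql (or whichever client you point at the Lemmy database) will show the current values before you change anything:

    -- Inspect the current rate-limit buckets before editing them
    SELECT * FROM local_site_rate_limit WHERE local_site_id = 1;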

100 per second is what I had in my configuration, but you may bump that up to 250 or more if your instance is larger.

[–] BlueEther@lemmy.nz 3 points 8 hours ago

> local_site_rate_limit

Thanks:

    UPDATE local_site_rate_limit SET message = 999, message_per_second = 999 WHERE local_site_id = 1;

[–] admiralpatrick@lemmy.world 2 points 10 hours ago (1 children)
[–] Sal@mander.xyz 2 points 10 hours ago

😁 πŸ‘