When I've mentioned this issue to admins at lemmy.ca and endlesstalk.org (relevant posts here and here), they've suggested it's a misconfiguration. When I said the same to lemmy.world admins (relevant comment here), they also suggested it was a misconfiguration. I mentioned it again recently on the LW channel, and it was only then that Lemmy itself was proposed as the problem. It happens on plenty of servers, but not all of them, so I don't know where the fault lies.
They're all POST requests. I trimmed it out of the log for space, but the first 6 requests in the video looked like this (nginx shows the data amount for GET, but not POST):
ip.address - - [07/Apr/2024:23:18:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:18:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:14 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:14 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
If I were running Lemmy, every second line would say 400, from it rejecting the second copy as a duplicate. In terms of bandwidth, every line represents a full JSON payload, so I guess it's about 2K minimum for the standard cruft, plus however much for the actual contents of the comment (the comment replying to this would've been 8K).
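For illustration, here's a minimal sketch of that dedup step - the handle_inbox helper and the in-memory set are mine, not Lemmy's actual code, which persists received activity ids and does far more:

```python
# Hypothetical sketch (not Lemmy's actual code) of the dedup check a
# receiving inbox can do: the second copy of an activity gets a 400.
seen_ids = set()  # a real server would persist this, e.g. in Postgres

def handle_inbox(activity: dict) -> int:
    """Return an HTTP status code for one incoming activity."""
    activity_id = activity["id"]
    if activity_id in seen_ids:
        return 400  # duplicate: this exact activity was already processed
    seen_ids.add(activity_id)
    # ... process the vote / comment / etc. ...
    return 200
```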
My server just took the requests and dumped the bodies out to a file, and then a script wrote the object.id, object.type and object.actor into /tmp/demo.txt (which is another confirmation that they were POST requests, of course).
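Reconstructed from that description, the script was roughly this shape (the input path and the one-JSON-body-per-line format are assumptions of mine):

```python
# Hypothetical reconstruction of that kind of script: pull the three
# identifying fields out of each dumped request body.
import json

with open("/var/log/inbox-bodies.log") as bodies, \
        open("/tmp/demo.txt", "w") as out:
    for line in bodies:
        activity = json.loads(line)
        obj = activity.get("object", {})
        # object.id, object.type and object.actor, per the fields above
        out.write(f"{obj.get('id')} {obj.get('type')} {obj.get('actor')}\n")
```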
I can't reproduce anything, because I don't run Lemmy on my server. It's possible to infer that it's related to the software (because LW didn't do this when it was on 0.18.5). However, it's not something that, for example, lemmy.ml does. An admin on the LW Matrix chat suggested that it's likely a combination of instance configuration and software changes, but a bug report from me (who has no idea how LW is set up) wouldn't be much use.
I'd gently suggest that, if LW admins think it's a configuration problem, they should talk to other Lemmy admins, and if they think Lemmy itself plays a role, they should talk to the devs. I could be wrong, but this has been happening for a while now, and I don't get the sense that anyone is talking to anyone about it.
Hi
Please check how much traffic you're now sending out for every activity - my server is recording that everything from lemmy.world is being sent 4 times (e.g. 1 Upvote is sent 4 times to every instance that has a subscriber. Those instances will reject 3 of them as dupes, but it's still a lot to be sending out).
lemmy.ca had a problem where they were sending everything 3 times, and it turned out they were running 3 containers that all had the same index number, so maybe it's that.
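I can only guess at the mechanism, but the failure mode is easy to sketch: if each container is supposed to claim a disjoint slice of the outgoing work by its index, containers that all come up with the same identity each send everything. A toy illustration (nothing here is Lemmy's real queue code):

```python
# Toy model of index-based work splitting (not Lemmy's real code).
def activities_for_worker(queue, worker_index, worker_count):
    """Yield only the slice of the queue this worker should send."""
    for i, activity in enumerate(queue):
        if i % worker_count == worker_index:
            yield activity

queue = [f"activity-{n}" for n in range(6)]

# Correctly configured: workers 0, 1, 2 cover the queue exactly once.
for idx in range(3):
    print(idx, list(activities_for_worker(queue, idx, 3)))

# Misconfigured: all 3 containers come up as "worker 0 of 1", so each
# one sends the whole queue - everything goes out 3 times, and the
# receiving instances reject 2 of every 3 copies as dupes.
for _container in range(3):
    print(list(activities_for_worker(queue, 0, 1)))
```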
Thanks.
"It really shouldn't do that" was Microsoft's slogan for awhile, I think.
Screenshotting a post from another fediverse app seems a bit crazy. As an alternative, this post is available natively in Lemmy, as text and from the original author (so you can reply to him if you'd like).
I can't give a universal link to a post, obviously, but if you're on lemmy.world, it's here: https://lemmy.world/post/11631169, and if you're not, it's available via the !tails@lemmon.website community.
Shift ends in 10 minutes? Whatever it is, it's been the next shift's problem for 50 minutes already.
Disclaimer:
Rebel Moon is a work of fiction. Any resemblance to an actual Star War, living or dead, is purely coincidental
Edgar Wright said a similar thing recently: that the best thing they could do with superhero films is take a break, and wait for audiences to become excited about them again.
They've already replied with the reasons, but - for future reference - if you want to see specifics of things like this, a censure is often posted to https://fediseer.com. .world's censure of .nl is here
Metric stormtroopers are about a meter off-target, but Imperial ones are only about a yard, so they're a minor improvement.
Hmmm. Speaking of Fediverse interoperability: platforms other than yours (Pandacap) typically arrange things so that https://pandacap.azurewebsites.net is the domain and something like https://pandacap.azurewebsites.net/users/lizard-socks is the user, but Pandacap wants to use https://pandacap.azurewebsites.net for both. Combined with the fact that it doesn't seem to support /.well-known/nodeinfo, this means no other platform knows what software it's running.
When your actor sends something out, it uses the id https://pandacap.azurewebsites.net/, but when something tries to look that up, it gets back a "Person" with the subtly different id https://pandacap.azurewebsites.net (no trailing slash). So there's the potential for the following: https://pandacap.azurewebsites.net/ sends something out. The receiving instance doesn't know that id, looks it up, and creates a user with the id it gets back (https://pandacap.azurewebsites.net). Then https://pandacap.azurewebsites.net/ sends something else out. The instance looks in its DB, finds nothing, so it looks the actor up and tries to create it again. The best case is that it hits a DB uniqueness constraint, because the id it gets back from that lookup does actually exist (so it can use that, but it was a long way around to find it). The worst case - when there's no DB uniqueness constraint - is that a 'new' user is created every time.
If every new platform treats the Fediverse as a wheel that needs to be re-invented, then the whole project is doomed.
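I don't know Pandacap's internals, so this is just a sketch under assumptions (the function names and the in-memory dict are mine, not any platform's API), but the usual receiving-side defence is to normalise ids before both the lookup and the insert:

```python
# Hypothetical sketch of the lookup-or-create path described above,
# with trailing-slash normalisation to avoid duplicate "Person" rows.
known_actors: dict[str, dict] = {}  # normalised id -> actor record

def normalise(actor_id: str) -> str:
    """Treat 'https://host' and 'https://host/' as the same actor."""
    return actor_id.rstrip("/")

def get_or_create_actor(actor_id: str, fetch) -> dict:
    key = normalise(actor_id)
    if key in known_actors:           # DB hit: no remote round-trip
        return known_actors[key]
    person = fetch(actor_id)          # remote lookup returns the Person
    key = normalise(person["id"])     # the returned id may differ!
    # Without normalisation, this insert is where you'd either hit a
    # uniqueness constraint or mint a 'new' user on every activity.
    return known_actors.setdefault(key, person)
```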