submitted 1 year ago* (last edited 1 year ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Status update July 4th

Just wanted to let you know where we are with Lemmy.world.

Issues

As you might have noticed, things still don't work as they should. We see several issues:

Performance

  • Loading is mostly OK, but sometimes things take forever
  • We (and you) see many 502 errors, resulting in empty pages etc.
  • System load: The server is at roughly 60% CPU usage and around 25 GB RAM usage. (That is, if we restart Lemmy every 30 minutes; otherwise memory usage climbs to 100%.)
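For the curious, a restart on that cadence is typically automated rather than done by hand. A minimal crontab sketch, assuming a docker-compose deployment at /srv/lemmy with a service named "lemmy" (both are assumptions for illustration, not our actual setup):

```
# Hypothetical crontab entry: restart only the lemmy service every 30
# minutes and log the output. Path and service name are illustrative.
*/30 * * * * cd /srv/lemmy && docker-compose restart lemmy >> /var/log/lemmy-restart.log 2>&1
```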

Bugs

  • Replying to a DM doesn't seem to work. When you hit Reply, you get a box containing the original message, which you can edit and save (but saving does nothing)
  • 2FA seems to be a problem for many people. It doesn't always work as expected.

Troubleshooting

We have many people helping us with (site) moderation, sysadmin work, troubleshooting, advice, etc. There are currently 25 people in our Discord, including admins of other servers, and 8 of us in the sysadmin channel. We run troubleshooting sessions with them, and sometimes with others. One of the Lemmy devs, @nutomic@lemmy.ml, is also helping with the current issues.

So, not everything is running as smoothly as we hoped yet, but with all this help we'll surely get there! Also, thank you all for the donations; they make it possible to pay for the hardware and tools needed to keep Lemmy.world running!

[-] Frostwolf@lemmy.world 73 points 1 year ago

This is the level of transparency most companies should strive for. Ironic that, when it comes to fixing things, volunteer and passion projects seem to be more on top of issues than big companies with hundreds of employees.

[-] ruck_feddit@lemmy.ml 23 points 1 year ago

You said it: passion projects. While being paid is surely a motivator, seeing your pet project take off the way Lemmy is can be so intoxicating and rewarding! I plan to donate as soon as I get paid on Friday! I want to see this succeed, even if it is just to spite Reddit, and I am willing to pay for the pleasure.

[-] Deez@lemm.ee 54 points 1 year ago

Thanks for all of your effort. Even though we are on different instances, it’s important for the Fediverse community that you succeed. You are doing valuable work, and I appreciate it.

[-] cristalcommons@lemmy.world 41 points 1 year ago

i just wanted to thank you for doing your best to fix lemmy.world as soon as possible.

but please, don't feel forced to overwork yourselves. i understand you want to fix it soon so more people can move from Reddit, but i wouldn't like Lemmy's software and community developers to overwork themselves and feel miserable, as those things are some of the very reasons you escaped from Reddit in the first place.

in my opinion, it would be nice if we users understood this situation and, if we want lemmy so badly, actively helped with it.

this applies to all lemmy instances and communities, ofc. have a nice day you all! ^^

[-] Cinner@lemmy.world 15 points 1 year ago

Plus, slow steady growth means eventual success. Burnout is very real if you never take a break.

[-] cristalcommons@lemmy.world 10 points 1 year ago* (last edited 1 year ago)

so true, pal! slowly, with patience, no rushing, putting love into it, organizing ourselves: working smart is better than working hard and fast.

because of the federated nature of projects like Lemmy, it is very possible that many people are doing the very same task without even knowing they are duplicating each other's efforts.

and that's sad, because if they knew, they could team up or split the task in two, to avoid wasting separate efforts on duplicate results.

i have learnt a thing or two about burnout: it's better for me to spend 40% on planning and 40% on self-care, so the remaining 20% of execution becomes a piece of cake.

but this is just my opinion. anyway, please take care, pals <3

[-] czarrie@lemmy.world 41 points 1 year ago

I'm just excited to be back in the Wild West again. All of the big players had bumps; at least this one is working to fix them.

[-] Today@lemmy.world 23 points 1 year ago
[-] G_Wash1776@lemmy.world 9 points 1 year ago

I’d rather have to deal with hiccups and bumps along the way, because the community only grows more each time.

[-] AlmightySnoo@lemmy.world 30 points 1 year ago
[-] ruud@lemmy.world 22 points 1 year ago

Yes, phiresky is working with us on improving performance and has helped a lot so far! (We're now running a custom build with some of his improvements.)

[-] AlmightySnoo@lemmy.world 19 points 1 year ago* (last edited 1 year ago)

Just tried upvoting most of the comments here, and this time all of the upvotes went through and were extremely fast! No more 502 errors!

[-] Coelacanth@lemmy.world 14 points 1 year ago

I've noticed lemmy.world has been much more responsive today so something seems to be working!

[-] JoeKrogan@lemmy.world 8 points 1 year ago

Thanks for sharing. It was very interesting to see the graphs showing the before and after.

[-] nielsn@lemmy.world 29 points 1 year ago

Thank you for your effort!

[-] ruud@lemmy.world 26 points 1 year ago

You're welcome! (testing comments now... )

[-] nielsn@lemmy.world 9 points 1 year ago

It looks like it's working.

[-] FlyingSquid@lemmy.world 27 points 1 year ago

I am very forgiving of the bugs I encounter on Lemmy instances because Lemmy is still growing and it's essentially still in beta. I am totally unforgiving of Reddit crashing virtually every day after almost two decades.

[-] repungnant_canary@vlemmy.net 25 points 1 year ago

The bit about needing to restart the server every so often to avoid excessive RAM usage is very interesting to me. It sounds like some issue with memory management. Not necessarily a leak, but maybe something like the server keeping unnecessary references so that objects cannot be dropped.

Anyway, from my experience Rust developers love debugging this kind of problem. Are the Lemmy devs aware of this issue? And do you publish server usage logs somewhere so people can look deeper into it?

[-] Azzu@lemm.ee 10 points 1 year ago

server keeping unnecessary references so the object cannot be dropped

You indeed just described a memory leak :D
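For illustration, the usual Rust shape of that failure is a strong-reference cycle: two `Rc` values pointing at each other can never be freed, even though no pointer was ever "lost". A toy sketch (nothing to do with Lemmy's actual data structures) showing the `Weak` back-pointer that breaks the cycle:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Children hold strong references; the back-pointer to the parent is
// weak. If `parent` were `Rc<Node>` instead, parent and child would
// keep each other alive forever: memory that can never be dropped.
struct Node {
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

// Builds a parent/child pair and returns the child's strong count.
fn strong_count_after_link() -> usize {
    let parent = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(Vec::new()),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));
    // Two strong refs to `child`: the local binding and parent.children.
    // Only a *weak* ref points back at `parent`, so dropping `parent`
    // frees the whole structure.
    Rc::strong_count(&child)
}

fn main() {
    assert_eq!(strong_count_after_link(), 2);
    println!("cycle avoided: everything can be dropped");
}
```

With a strong `Rc` back-pointer instead, both refcounts would stay above zero after the last external handle is dropped, which is exactly the "still referenced, so never freed" situation described above.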

[-] Shartacus@lemmy.world 23 points 1 year ago

I want this to succeed so badly. I truly feel like it's going to be sink or swim, and it will reflect how all enshittification efforts play out.

Band together now and people see there’s a chance. Fail and we are doomed to corporate greed in every facet of our lives.

[-] slimarev92@lemmy.world 22 points 1 year ago

How can I donate to lemmy.world?

[-] Taxxor@lemm.ee 12 points 1 year ago* (last edited 1 year ago)

Just look at the sidebar:

Donations
If you would like to make a donation to support the cost of running this platform, please do so at the mastodon.world donation URLs:

https://opencollective.com/mastodonworld
https://patreon.com/mastodonworld

[-] CuriousLibrarian@lemmy.world 9 points 1 year ago

Not an expert, but here is where I set up a recurring donation. Apparently Mastodon.world and Lemmy.world are run by the same admins. Took me a while to understand this; please correct me if I'm wrong.

[-] Kalcifer@lemmy.world 21 points 1 year ago* (last edited 1 year ago)

That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%

Lemmy has a memory leak? Or, should I say, a "lemmory leak"?

[-] TomFrost@lemmy.world 20 points 1 year ago

Cloud architect here. I'm sure someone's probably already brought it up, but I'm curious whether any cloud-native services have been considered to take the place of what I'm sure are wildly expensive server machines. E.g. serve frontends from CloudFront, host the read-side API on Lambda@Edge so you can aggressively and regionally cache API responses, use anything other than SQL for the database: model it in DynamoDB for dirt-cheap wicked speed, or Neptune for a graph database that's more expensive but more featureful. Drop sync jobs for federated connections into SQS, have a Lambda process that too, and it will scale as horizontally as you need to clear the queue in reasonable time.

It’s not quite as simple to develop and deploy as docker containers you can throw anywhere, but the massive scale you can achieve with that for fractions of the cost of servers or fargate with that much RAM is pretty great.

Or maybe you already tried/modeled this and discovered it's terrible for your use case, in which case ignore me ;-)

[-] Olap@lemmy.world 19 points 1 year ago

You were so close until you mentioned ditching SQL. Lemmy is 100% tied hard to it, and trying to replicate what it does without ACID and joins would require a massive rewrite. More importantly, Lemmy's docs suggest a docker-compose stack, not even k8s for now; it's trying really hard not to tie itself to a single cloud provider and to avoid maintaining three cloud deployment scripts. Which means SQS, Lambdas and CloudFront are out in the short term. Quick question: are there any STOMP-compliant equivalents of SQS and Lambda yet?

Also, the growth lemmy.world has seen is far beyond what any team could handle, ime. Most products would have closed signups to handle the current load and scale. Well done to all involved!

[-] b3nsn0w@pricefield.org 11 points 1 year ago

cloudfront helps a lot with the client and is absolutely compatible with lemmy if you set it up correctly. possibly it could also help cache api responses, i haven't looked into that part yet.

the database, on the other hand, would need a nearly full rewrite. lemmy uses postgres and dumping it for something else would be a huge pain for the entire federated community. it could probably tear it in half.

there's also the issue of pictrs, which uses a stateful container and isn't yet able to use an external database which would allow you to scale it horizontally. resolving that one is on the roadmap though, and for the most part you can aggressively cache the pictrs get requests to alleviate the read-side load.

but whatever the solution is, it kinda needs to be as simple as developing and deploying docker containers you can throw anywhere. the vendor-agnostic setup is a very important part of the open-source setup of lemmy. it's fine to build on top of that, but currently anyone with docker-compose installed can run the service and that really should be retained.
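The "aggressively cache the pictrs get requests" idea can be sketched with any reverse proxy, keeping the vendor-agnostic docker-compose setup intact. A hypothetical nginx fragment, where the upstream name, paths, and cache sizes are all illustrative assumptions rather than lemmy.world's real config:

```nginx
# Hypothetical read-side cache for pictrs images (names and sizes are
# illustrative). Cached hits never touch the pictrs container.
proxy_cache_path /var/cache/nginx/pictrs levels=1:2
                 keys_zone=pictrs_cache:10m max_size=10g inactive=7d;

server {
    listen 80;
    server_name lemmy.example.org;

    location /pictrs/image/ {
        proxy_pass http://pictrs:8080;
        proxy_cache pictrs_cache;
        proxy_cache_valid 200 7d;  # serve cached images for a week
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```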

[-] kokesh@lemmy.world 18 points 1 year ago

Thank you for everything! Can we donate to cover the costs? If more people throw in $5, we all benefit. Can the Lemmy server scale without RAM growing exponentially with the number of users? I hope the system gets better optimized for a bigger user base as time goes on.

[-] erik1984@lemmy.world 17 points 1 year ago

Thanks again for all the hard work on Lemmy World. It feels fast today!

[-] assassin_aragorn@lemmy.world 16 points 1 year ago

Thanks for giving us all these updates all the time!

[-] Jackolantern@lemmy.world 14 points 1 year ago

Lemmy has been running smoothly for the past few hours, with very few hiccups, mostly on the upvoting and commenting side. I've encountered no issues yet with posts loading.

Thank you so much for all your hard work.

[-] LeHappStick@lemmy.world 14 points 1 year ago* (last edited 1 year ago)

.world is definitely running smoother than when I joined 3 days ago. Back then it was impossible to comment and the lag was immense; right now I just have to occasionally reload the page, but that's nothing in comparison.

You guys are doing amazing work! I'm broke, so here are some ~~coins 🪙🪙🪙🪙~~ beans 🫘🫘🫘🫘

[-] COOLSJ@lemmy.world 13 points 1 year ago

I can't imagine how hard it is to run a server with such a large influx of users. Thank you for your hard work running and maintaining this instance. I hope it works out well and the future is smooth sailing.

[-] Eczpurt@lemmy.world 13 points 1 year ago

Really appreciate all the time and effort you all put in especially while Lemmy is growing so fast. Couldn't happen without you!

[-] SnowFoxx@lemmy.world 13 points 1 year ago

Thank you so much for your hard work and for fixing everything tirelessly, so that we can waste some time with posting beans and stuff lol.

Seriously, you're doing a great job <3

[-] TrueStoryBob@lemmy.world 12 points 1 year ago* (last edited 1 year ago)

To all the folks that are worried, don't be. Let me tell you, Mastodon was a wreck when Musk took over Twitter and that all got sorted out within a week or so. The mods and sysadmin are obviously working hard to get things up and running, but growing pains are growing pains. To paraphrase an old adage: "Facebook wasn't built in a day." In the beginning, Zuck and Co literally limited signups to only people with college email accounts and only added universities a few domains at a time... scaling is very difficult, but it's not impossible. The way things are going, Lemmy is going to thrive!

[-] thatguy_ie@lemmy.world 11 points 1 year ago

Thank you for the transparency, @ruud@lemmy.world. It is rare for platforms to scale this quickly, so issues like this are inevitable. Good luck with the troubleshooting!

[-] sykccc@lemmy.world 11 points 1 year ago

Y’all doing amazing things keeping us going 🔥

[-] ben914@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

Thanks for the transparency and communication. I think it's always better when the userbase understands what is going on rather than being left in the dark wondering what is going wrong. Keep up the good work, but also be sure you get enough rest and take care of yourselves too.

[-] scaredoftrumpwinning@lemmy.world 10 points 1 year ago

Thanks for keeping us apprised. Hopefully you will find the resource leak.

[-] mtnwolf@lemmy.world 9 points 1 year ago

I totally appreciate all of your efforts! Thank you for being a pioneer.

[-] brewery@lemmy.world 8 points 1 year ago

Amazing work team. I am already seeing improvements. Hope you are not killing yourselves though, I'm sure everyone realises how difficult it is and that it will take time to fix. We're here for the long haul! Thanks again

[-] WubbyGeth@lemmy.world 8 points 1 year ago

Do you need any spare server hardware, @ruud@lemmy.world? I would be happy to donate some!

[-] ruud@lemmy.world 10 points 1 year ago

No thanks, it's all hosted at Hetzner, and thanks to all the donations we can expand when needed!

[-] Aimhere@lemmy.world 8 points 1 year ago

Thank you for everything you do. With any luck, you'll get all the support you need, and have it running smooth as silk.

this post was submitted on 04 Jul 2023
699 points (98.1% liked)
