submitted 1 year ago by Tehhund@lemmy.world to c/asklemmy@lemmy.ml

Does ActivityPub send those to other instances, or does ActivityPub only send the original post and the rest (upvotes, downvotes, replies) are stored only on the original server where the post was made?

[-] Dave@lemmy.nz 10 points 1 year ago* (last edited 1 year ago)

My instance has 800 users, is 4 months old, and the database alone is over 30GB. It's an insane amount of data.

[-] Scrollone@feddit.it 3 points 1 year ago

How much RAM does your server have to handle a 30 GB database?

[-] Dave@lemmy.nz 2 points 1 year ago* (last edited 1 year ago)

I'm a bad example. I haven't properly tuned the settings, so currently RAM usage just grows to whatever is available.

I'm very lucky: the instance is running in a Proxmox container alongside some other fediverse servers (run by others), on dedicated hardware in a datacentre. The sysadmin has basically thrown me plenty of spare resources, since the other containers aren't using them and RAM not used is wasted, so I've got 32GB allocated currently. I still need to restart once a week or that RAM gets used up and the database container crashes.

Trying some different postgres configs has been on my list of things to do for a while, but I just haven't got around to it.

I know a couple of months back lemmy.world were restarting every 30 mins so they didn't use up all the RAM and crash. I presume some time and some lemmy updates later that's no longer the case.

I know some smaller servers get away with 2GB of RAM, and we should be able to use a lot less than 32GB if I actually try to tune the postgres config.
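(Aside: a minimal sketch of what tuning those postgres memory settings might look like, assuming a host with a few GB to spare; the values below are illustrative placeholders, not recommendations, and the right numbers depend on the hardware and workload.)

```sql
-- Illustrative placeholders only: cap postgres memory use on a small Lemmy instance.
ALTER SYSTEM SET shared_buffers = '2GB';        -- main postgres buffer cache; needs a restart to take effect
ALTER SYSTEM SET work_mem = '16MB';             -- per sort/hash operation, so it multiplies under concurrency
ALTER SYSTEM SET effective_cache_size = '6GB';  -- planner hint only, allocates nothing
SELECT pg_reload_conf();                        -- picks up the settings that don't require a restart
```

Tools like PGTune generate similar starting values from the host's total RAM and expected connection count.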

[-] nutomic@lemmy.ml 2 points 1 year ago

There is a postgres command to show the size of each table. Most likely the bulk of it is in the activity tables, which can be cleared out to save space.
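(For reference, a standard catalog query along these lines shows per-table sizes; it's plain postgres, nothing Lemmy-specific.)

```sql
-- List the largest tables in the current database, biggest first.
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;
```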

[-] Dave@lemmy.nz 1 points 1 year ago

After the second-to-last update the database shrank, and I was under the impression there was some automatic removal happening. Was this not the case?

It's helpful info for others but personally I'm not that worried about the database size. The size of the pictrs cache is much more of a concern, and as I understand it there isn't an easy way to identify and remove cache images without accidentally taking out user image uploads.

[-] nutomic@lemmy.ml 1 points 1 year ago

Yes, there is automatic removal, so if you have enough disk space there's no need to worry about it.

The pictrs storage consists only of uploads from local users, and thumbnails for both local and remote posts. Thumbnails for remote posts could theoretically be wiped and reloaded from the other instance, but they shouldn't take much space anyway.

[-] Dave@lemmy.nz 1 points 1 year ago

> Yes, there is automatic removal, so if you have enough disk space there's no need to worry about it.

What triggers this? My DB was about 30GB, then the update shrunk it down to 5GB, then it grew back to 30GB.

> The pictrs storage consists only of uploads from local users, and thumbnails for both local and remote posts. Thumbnails for remote posts could theoretically be wiped and reloaded from the other instance, but they shouldn't take much space anyway.

I'd be pretty confident that the 140GB of pictrs storage I have is mostly cache. There are occasionally users uploading images, but we don't have that many active users; I'd be surprised if there was more than a few GB of image uploads in total out of that 140GB. We just aren't that big of a server.

The pictrs volume also grows consistently at a little under 1GB per day. I just went and had a look: in the files directory there are 6 directories from today (the day only has a couple of hours left), and these sum to almost 700MB of images across almost 6000 files, or a little over 100KB each.

The instance has had just 27 active users today (though of course users not posting will still generate thumbnails).

While each cached image may be small, it adds up really quickly.

As far as I can tell there is no cache pruning, as the cache goes up pretty consistently each day.

[-] nutomic@lemmy.ml 1 points 1 year ago

The activity tables are cleared out automatically every week: items older than 3 months are deleted. During the update only a smaller number of rows was migrated, so the db was temporarily smaller. You can manually clear older items in sent_activity and received_activity to free more space.
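(A sketch of what that manual cleanup could look like, assuming both tables carry a published timestamp column; verify the column names against your own schema and take a backup before running anything like this.)

```sql
-- Sketch only: trim federation activity rows older than 3 months (column name assumed).
DELETE FROM sent_activity     WHERE published < now() - interval '3 months';
DELETE FROM received_activity WHERE published < now() - interval '3 months';

-- Reclaim the disk space; VACUUM FULL rewrites and locks each table while it runs.
VACUUM FULL sent_activity;
VACUUM FULL received_activity;
```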

Actually I'm wrong about images: it turns out that all remote images are mirrored locally in order to generate thumbnails. 0.19 will have an option to disable that. This could use more improvements; the whole image handling is rather confusing right now.

[-] Dave@lemmy.nz 1 points 1 year ago

Thanks for the info! For performance reasons it would be nice to have a way to configure how long the cache is kept rather than disabling it completely, but I understand you probably have other priorities.

Would disabling the cache remove images cached up to that point?

[-] nutomic@lemmy.ml 1 points 1 year ago

You will have to wait for 0.19 to disable it. Pictrs 0.5 will also add a way to clear old images. See the issue: https://github.com/LemmyNet/lemmy/issues/4053

[-] Dave@lemmy.nz 1 points 1 year ago

That sounds great, thanks for letting me know.

[-] ViciousTangerine 1 points 1 year ago

Sounds like this will be a serious problem for scaling Lemmy if more users start to adopt it

[-] Dave@lemmy.nz 2 points 1 year ago

Lemmy already has serious scaling issues. It's priority one for the devs at the moment. The next release has major backend changes.

Lemmy is still on version 0; it's basically not released yet, so we have to give them some slack. They weren't exactly expecting to go from fewer than 1,000 monthly users to tens of thousands almost overnight, on a platform where development was still in its early days.
