Sporkers
CPU seems low from what I have read elsewhere, and if writes matter, a few used enterprise SSDs per chassis as DB/WAL for the HDDs would be nice.
I don't really have a lot of experience with this; I just read a ton and built a modest 5-node homelab cluster, and 5 nodes seemed to be the minimum count you want to be at. The Ceph recommendations now are pretty vague: the documentation has changed in recent years to talk about IOPS per core, but it doesn't pin that down much. So it depends on how much performance you really expect out of it; the higher your expectations, the more cores you should give it. NVMe devices definitely scale with more cores: going from 2 to 4 cores shows roughly 100% IOPS scaling in the Ceph docs, and they keep scaling decently past that when isolating a single OSD for performance testing with enterprise NVMe drives.
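If you want to sanity-check that core scaling on your own hardware, one rough way (assuming a running cluster; the OSD id and the pool name `bench` here are just placeholders) is to run the built-in benchmarks before and after limiting how much CPU the OSD daemon gets:

```
# Rough sketch, not a rigorous benchmark. Limit the OSD's CPU between runs
# (e.g. CPUQuota= on its systemd unit, or CPU limits on the cephadm container)
# and compare the reported numbers.

# Built-in write benchmark against a single OSD (writes ~1 GiB by default):
ceph tell osd.0 bench

# Pool-level write benchmark: 30 seconds of 4 MiB writes, then clean up:
rados bench -p bench 30 write --no-cleanup
rados -p bench cleanup
```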
But you are using HDDs, in a homelab, and on a budget. I think your 4 cores would be the extreme low-budget, not-expecting-performance option for that many OSDs. 8 cores would be the more regular budget minimum, and I would do 12-16 if I had heavier use or performance goals, along with more than 64 GB RAM per node, especially if the monitors are co-located. The next level up would be to add maybe 4-8 used enterprise-class NVMe drives per node, spread the DB/WAL for the OSDs across those NVMe drives, and add more cores to handle them.
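If you do go the shared-NVMe route, ceph-volume can lay that out in one pass; here is a rough sketch with made-up device names, so substitute whatever your chassis actually presents:

```
# Hedged sketch, device paths are placeholders: one OSD per HDD, with the
# BlueStore DB/WAL volumes spread across two shared NVMe drives.
ceph-volume lvm batch \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  --db-devices /dev/nvme0n1 /dev/nvme1n1
```

On a cephadm cluster the same intent is usually expressed as an OSD service spec (data_devices/db_devices filters) instead of running ceph-volume by hand. The RAM figure above also lines up with BlueStore's default of roughly 4 GiB per OSD (osd_memory_target) plus headroom for mons and recovery.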