this post was submitted on 08 Jan 2025
562 points (94.3% liked)

Selfhosted

49269 readers
733 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
[–] FrederikNJS@lemm.ee 2 points 6 months ago* (last edited 6 months ago)

My home Kubernetes cluster started out on a Core i7-920 with 8 GB of memory.

Upgraded to 16 GB memory

Upgraded to a Core i5-2400S

Upgraded to a Core i7-3770

Upgraded to 32 GB memory

Recently upgraded to a Core i5-7600K

I think I'll stick with that for quite a while...

I did however add 2 Intel NUCs (gen 6 and gen 8) to the cluster to have a distributed control plane and some distributed storage.

[–] ebc@lemmy.ca 2 points 6 months ago (1 children)

Running a bunch of services here on an i3 PC I built for my wife back in 2010. I've since upgraded the RAM to 16GB, added as many hard drives as there are SATA ports on the mobo, re-seated the heatsink, etc.

It's pretty much always run on Debian, but all services are on Docker these days, so the base distro doesn't matter as much as it used to.

I'd like to get a good backup solution going for it so I can actually use it for important data, but realistically I'm probably just going to replace it with a NAS at some point.

[–] N0x0n@lemmy.ml 2 points 6 months ago* (last edited 6 months ago)

A NAS is just a small desktop computer. If you have a motherboard/CPU/RAM/Ethernet/case and a lot of SSDs/HDDs, you are good to go.

Just don't bother buying something marketed as a NAS. It's expensive and less modular than any desktop PC.

Just my opinion.

[–] GnuLinuxDude@lemmy.ml 2 points 6 months ago (2 children)

It's not absolutely shit: it's a ThinkPad T440s with an i7, 8 GB of RAM, and a completely broken trackpad that I ordered to use as a PC when my desktop wasn't working in 2018. I started with a bare server OS, then quickly realized the value of virtualization and deployed Proxmox on it in 2019. I've been using it as a modest little server ever since. But I realize it's now 10 years old. And it might be my server for another 5 years, or more if it can manage it.

In the host OS I tweaked some value to ensure the battery never charges over 80%. And while I don't know exactly how much electricity it consumes at idle, I believe it's not too much. Works great for what I want. The most significant issue is an error message (I can't remember the exact text) that would pop up, I think related to the NIC. I guess Linux and the NIC in this laptop have, or had, some kind of mutual misunderstanding.
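For anyone wanting to do the same on their own ThinkPad: the usual knob for this lives in sysfs via the thinkpad_acpi driver. A minimal sketch, assuming the standard charge_control_*_threshold files exist on your kernel and you run it as root; the BAT0 name and the exact thresholds here are just illustrative, not necessarily what I used:

```python
# Minimal sketch: cap a ThinkPad battery at 80% charge via thinkpad_acpi sysfs.
# Assumes kernel support for charge_control_*_threshold and root privileges;
# BAT0 and the threshold values are illustrative placeholders.
from pathlib import Path

BAT = Path("/sys/class/power_supply/BAT0")

def set_charge_thresholds(start: int = 75, stop: int = 80) -> None:
    """Resume charging below `start`%, stop charging at `stop`%."""
    (BAT / "charge_control_end_threshold").write_text(f"{stop}\n")
    (BAT / "charge_control_start_threshold").write_text(f"{start}\n")

if __name__ == "__main__":
    set_charge_thresholds()
    print("stop threshold:",
          (BAT / "charge_control_end_threshold").read_text().strip())
```

TLP can manage the same thresholds if you'd rather not poke sysfs directly.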

[–] ripcord@lemmy.world 1 points 6 months ago (1 children)

Yeah, absolutely. Same here, I find used laptops often make GREAT homelab systems, and ones with broken screens/mice/keyboards can be even better since you can get them CHEAP and still fully use them.

I have 4 doing various things including one acting as my "desktop" down in the homelab. But they're between 4 and 14 years old and do a great job for what they're used for.

[–] evidences@lemmy.world 2 points 6 months ago

My NAS is on an embedded Xeon that at this point is close to a decade old, and one of my Proxmox boxes is on an Intel 6500T. I'm not really running anything on really low-spec machines anymore, though early-ish in the pandemic I was running BOINC with the OpenPandemics project on 4 Raspberry Pis.

[–] pat277@sh.itjust.works 2 points 5 months ago

Fuck, I've been dealing with that plus max RAM speed limitations for a month.

[–] andrew_bidlaw@sh.itjust.works 2 points 6 months ago

I've only faced that with different editions of Windows imposing their own limits.

[–] shadowtofu@discuss.tchncs.de 2 points 6 months ago

I met someone who was throwing out old memory modules. Literally boxes full of DDR and DDR2 modules. I got quite excited, hoping to upgrade my server's memory. Yeah, DDR2 only goes up to 2 GiB per module. So I am stuck with 2×2 GiB. But I am only using 85% of that anyway, so it's fine.

[–] ordellrb@lemmy.world 2 points 6 months ago* (last edited 6 months ago)

Kind of... an "AMD GX-420GI SOC: quad-core APU" (the one with no L3 cache) in a thin client with 8 GB of RAM and an old 128 GB laptop SSD for storage. Nextcloud is usable but not fast.

edit: the best thing: it's 100% fanless

[–] GaMEChld@lemmy.world 1 points 6 months ago (1 children)

My Plex server is running on my old Threadripper 1950X. The thing has been a champ. I'm due to rebuild it since I've got newer hardware to cycle into it, but I've been dragging my heels on it. Not looking forward to it.

[–] potustheplant@feddit.nl 1 points 6 months ago (1 children)

Isn't Ryzen not recommended for transcoding? Plus, I've read that power efficiency isn't great, mostly regarding idle power consumption.

[–] TMP_NKcYUEoM7kXg4qYe@lemmy.world 1 points 6 months ago (1 children)

Ryzen is not recommended for transcoding because the Radeon integrated GPU's encoding accelerator is not as fast as the one in Intel iGPUs. But this does not come into play if you A) have 16 cores and B) don't even have an integrated GPU.

And about idle power consumption: I don't think it's a point of interest if you are using a workstation-class computer.

[–] potustheplant@feddit.nl 1 points 6 months ago (1 children)

I think it's a point of interest for any HW running 24/7, but you do you.

Regarding transcoding, are you saying you're not even doing it? If you are, doing it with your CPU is far less efficient than using a GPU. But again, different strokes I guess.

[–] TMP_NKcYUEoM7kXg4qYe@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

Dunno whether they are transcoding or not, nor why they have such a bizarre setup. But I would hope a 16C/32T CPU from 2017 could handle software transcoding. Also, peak power consumption while playing a movie does not really matter compared to idle power consumption. What matters more is that the motherboard is probably packed with PCIe slots that consume a lot of power. But to the OP it probably does not matter if they use a Threadripper.

[–] potustheplant@feddit.nl 1 points 6 months ago

> I would hope a 16C/32T CPU from 2017 could handle software transcoding

I didn't say it couldn't handle it. Just that it was very inefficient.

> peak power consumption while playing a movie does not really matter compared to idle power consumption

I mentioned both things. Did you actually read my comments?

[–] blackstrat@lemmy.fwgx.uk 1 points 6 months ago

I moved from a Dell R710 with dual-socket Xeons to a rack-mount desktop case with a single Ryzen 5 5600G. I doubled the performance and halved the power consumption in one go. I do miss having iDRAC though. I need a KVM-over-IP solution but haven't stomached the cost yet. For how often I need it, it's not an issue.

[–] Deway@lemmy.world 1 points 6 months ago (1 children)

My first @home server was an old, defective iMac G3, but it did the job (and then died for good). A while back, I got a Raspberry Pi 3 and then a small thin client with some small AMD CPU. They (barely) got the job done.

I replaced them with an HP EliteDesk G2 micro with an i5-6500T. I don't know what to do with the extra power.

[–] qaz@lemmy.world 1 points 6 months ago (4 children)

What are you running on it?

[–] 31337@sh.itjust.works 1 points 6 months ago

The oldest I've got is limited to 16 GB (excluding RPis). My main desktop is limited to 32 GB, which is annoying, because I sometimes need more. But I have a home server with 128 GB of RAM that I can use when it's not doing other stuff. I once needed more than 128 GB of RAM (to run optimizations on a large ONNX model, IIRC), so I had to spin up an EC2 instance with 512 GB of RAM.
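For the curious, the RAM blow-up with ONNX comes from the optimizer loading and rewriting the whole graph in memory. A rough sketch of that kind of job, assuming onnxruntime's offline graph optimization and placeholder model paths (not necessarily the exact tool I used):

```python
# Rough sketch: offline graph optimization of an ONNX model with onnxruntime.
# "model.onnx" and "model.optimized.onnx" are placeholder paths.
import onnxruntime as ort

opts = ort.SessionOptions()
# Enable all graph-level optimizations (constant folding, node fusion, ...).
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
# Save the optimized graph to disk instead of only keeping it in memory.
opts.optimized_model_filepath = "model.optimized.onnx"

# Building the session loads and rewrites the entire graph in RAM,
# which is why a large model can need more memory than the machine has.
ort.InferenceSession("model.onnx", opts, providers=["CPUExecutionProvider"])
```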

[–] sugar_in_your_tea@sh.itjust.works 1 points 6 months ago* (last edited 6 months ago)

Wow, it's been a long time since I had hardware that awful.

My old NAS was a Phenom II X4 from 2009, and I only retired it a year and a half ago when I upgraded my PC. But I put 8GB of RAM into it since it was a 64-bit processor (could've put in up to 32GB I think, since it had 4 DDR3 slots). My NAS currently runs a Ryzen 1700, but I still have that old Phenom in the closet in case the Ryzen dies, though I prefer the newer HW because it's lower power.

That said, I once built a web server on an Arduino which also supported websockets (max 4 connections). That was more of a POC than anything though.

[–] gortbrown@lemmy.sdf.org 1 points 6 months ago

I used to self-host some stuff on an old 2011 iMac. Worked fine, actually.

I'm hosting a MinIO cluster on my brother-in-law's old gaming computer that he spent $5k on in 2012, plus three five-year-old mini PCs with 1 TB external drives plugged into them. Works fine.

[–] SolaceFiend@lemmy.world 1 points 6 months ago (1 children)

I'm still interested in self-hosting, but I actually tried getting into it a year or so ago. I bought a s***** desktop computer from Walmart and installed Windows Server 2020 on it to practice on.

I thought I could use it to put some bullet points on my resume, and maybe get into self-hosting later with Nextcloud. I ended up not fully following through, because I felt like I needed to first buy new editions of the server administration and network infrastructure textbooks I had learned from a decade prior before I could continue with giving it an FQDN, setting it up as a primary DNS server or pointing it at one, etc.

So it was only accessible on my LAN, because I was afraid of making it a remotely accessible server unless I knew I had good firewall rules and had set up the primary DNS server correctly, and I ultimately just never finished setting it up. The most I ever accomplished was getting it working as a file server for personal storage and creating local accounts with usernames and passwords for both myself and my mom, whom I was living with at the time. It could authenticate remote access over our local Wi-Fi, but I never got further.
