submitted 8 months ago* (last edited 8 months ago) by ch00f@lemmy.world to c/selfhost@lemmy.ml

I've been running a headless Ubuntu server for about 10 years or so. At first, it was just a file/print server, so I bought a super-low-power motherboard/processor to cut down on the energy bill. It's a passively cooled Intel Celeron J3455 "maxed out" with 16GB of RAM.

Since then it's ballooned into a Plex/Shinobi/Photoprism/Samba/Frigate/MQTT/Matrix/Piwigo monster. It has six drives in RAID6 and a seventh for system storage (three of the drives are connected through a PCI card). I'm planning on moving my server closet, and I'll be swapping the case for a rack-mount style one. While I'm at it, I figured I could upgrade the hardware as well, so I'm curious what I should look for.

I've built a number of gaming PCs in the past, but I've never looked at server hardware. What features should I look for? Also, is there anything specific (besides a general-purpose video card) that I can buy to speed up video encoding? It'd be nice to be able to transcode video for Plex in real time.

[-] Trainguyrom@reddthat.com 7 points 8 months ago

For video encoding you've really got two clear options: either an 8th gen or newer consumer Intel chip with integrated graphics for QuickSync support, or toss a GPU in there. You can also rely on raw CPU cycles for video transcoding, but that's wildly energy inefficient in comparison.

I've heard good things about how anything AM4 compares to X99-era Intel on both raw performance and performance per watt, but I have no personal anecdata to share.

Personally, I'm currently eyeing a gaming computer refresh as the opportunity to rebuild my primary server from the gaming computer's old components, but I'm also starting with literal e-waste I scrounged for free, so pretty much anything is a big upgrade.

[-] ch00f@lemmy.world 4 points 8 months ago

So my current processor has QuickSync. Are there generations of QuickSync? Would a newer implementation be faster? There's not a lot of data out there; it seems like QS support is listed as either yes or no.

[-] aBundleOfFerrets@sh.itjust.works 2 points 8 months ago* (last edited 8 months ago)

QS is generational; newer versions will be much better in quality than older ones, with some more throughput too.

Important note: Arc GPUs all have the same QS engine right now (A770 = A310), so even an Arc A310 will decimate the QS on any CPU and will be much faster than any NVIDIA hardware encoder too. (The QS encoder in the A310 is slightly handicapped by lower VRAM bandwidth and size, but it's negligible.)

[-] stankmut@lemmy.world 1 points 8 months ago

Newer generations add decoders/encoders for more codecs: 8th gen Intel Core CPUs have good HEVC support, while you need more recent generations for good AV1 support.
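
If you want to see exactly which codecs your particular QuickSync generation supports, the VA-API info tool will list them (a quick sketch, assuming an Intel iGPU on Debian/Ubuntu):

    # install the VA-API query tool
    sudo apt install vainfo

    # list the decode/encode profiles the iGPU exposes; look for lines like
    # VAProfileHEVCMain / VAEntrypointEncSlice (HEVC encode) or
    # VAProfileAV1Profile0 (AV1) to confirm codec support
    vainfo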

[-] Trainguyrom@reddthat.com 1 points 8 months ago

I'm not entirely certain. QuickSync is an Intel GPU feature and is generally just listed as Yes/No on ark.intel.com, so I'm inclined to suspect it doesn't change significantly from one generation to another. Most GPUs have a limited number of video streams they can transcode at a time, so if you're exceeding that number, I believe it has to brute-force the work on the processor, which will be anemic on an older Celeron. Have you verified that Plex is actually using QuickSync to transcode? If it's been hitting the processor this whole time, that would easily explain it.
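
One way to verify it (a sketch, assuming an Intel iGPU and the intel-gpu-tools package) is to watch the GPU's video engine while a transcode is running:

    # install the monitoring tool (Debian/Ubuntu)
    sudo apt install intel-gpu-tools

    # start a transcode in Plex, then watch the "Video" engine row;
    # if it sits at 0% while CPU usage spikes, the transcode is software-only
    sudo intel_gpu_top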

[-] ch00f@lemmy.world 1 points 8 months ago* (last edited 8 months ago)

Not sure what Plex is using, but Shinobi and Photoprism do.

Plex usually runs at native resolution, but it can only just barely keep up if it has to downscale or bake in subtitles in real time. I'll have to check the settings to see what it's using.

Edit: Ah, looks like you need to pay for Plex Pass to enable QuickSync.

[-] Trainguyrom@reddthat.com 2 points 8 months ago

That would certainly do it! Stating the obvious here, it looks like you have three clear paths to take:

  1. Purchase Plex Pass to enable hardware transcode
  2. Switch to Jellyfin to avoid paying for Plex Pass
  3. Upgrade the server to something with more CPU horsepower (and of course higher energy consumption) to compensate for lack of hardware transcode support
[-] Lettuceeatlettuce@lemmy.ml 4 points 8 months ago

If you're on a budget, check out X99 socket Xeons. You can pick up mobos and chips for super cheap: 10+ core hyper-threaded Xeons with solid clocks, plus a motherboard, for 120-180 bucks total. They support 64GB of RAM, more if you have a proper server board.

For transcoding, depending on the codec, dedicated GPU is best.

I'm not sure about Plex, but I know on Jellyfin the new Intel Arc GPUs are really great for encoding, not too expensive for the lower-end cards either, and there are low-profile options for smaller rack cases.

[-] ch00f@lemmy.world 4 points 8 months ago

Thanks for the tips!

To clarify, by "X99," do you mean LGA 2011-3? That's the socket Wikipedia associates with that hardware.

And as for Arc, it looks like they're a great option for video encoding. I'm actually already using QuickSync on my Celeron, which has helped. From what I can tell, QuickSync is basically the same engine on all of the Arc cards, so I can just go with the cheapest card if I don't plan to use many of the other features? Looks like an A380 can be had for $100 or so.

[-] Lettuceeatlettuce@lemmy.ml 2 points 8 months ago

Sorry for the slow reply. Yes, I mixed the chipset up with the socket lol.

The A380 is the same one I've been looking at for my own home media setup; it should be plenty of encoding power for your use case.

Good luck!

[-] jo3shmoo@sh.itjust.works 3 points 8 months ago

Great advice from everyone here. For the transcoding side of things, you want an 8th gen or newer Intel chip to handle QuickSync with a good level of quality. I've been using a 10th gen i5 for a couple of years now and it's been great: it regularly handles multiple transcodes and has enough cores to do all the other server stuff without an issue. You need Plex Pass to do hardware transcodes, if you don't already have it, or you can look at switching to Jellyfin.

As mentioned elsewhere, using an HBA is great when you start getting to large numbers of drives. I haven't seen the random drive drops I've occasionally had with cheap SATA PCI cards. If you get one that's flashed in "IT mode", the drives appear normally to your OS and you can then build software RAID however you want. If you don't want to flash it yourself, I've had good luck with stuff from The Art of Server.
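
Once an IT-mode HBA is in, it's easy to sanity-check that the OS really sees bare drives rather than a virtual RAID volume (a sketch; device names are examples):

    # the HBA should show up as a plain SAS controller
    lspci | grep -iE 'lsi|sas'

    # each disk should appear with its real model and serial number
    lsblk -o NAME,MODEL,SERIAL,SIZE

    # and SMART data should be readable directly from each drive
    sudo smartctl -i /dev/sda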

I know some people like to use old "real" server hardware for reliability or ECC memory, but I've personally had good luck with quality consumer hardware and keeping everything running on a UPS. I've learned a lot from serverbuilds.net about how compatibility works between some of the consumer gear, and about making sense of the used enterprise gear that's useful for this hobby. They also have good info on doing "budget" build-outs.

Most of the drives in my rack have been running for years and were shucked from external drives to save money. I think the key to success here has been keeping them cool and under consistent UPS power. Some of mine are in a disk shelf, and some are in the Rosewill case with the 12 hot swap bays. Drives are sitting at 24-28 degrees Celsius.
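
For anyone wanting to keep an eye on those temperatures, SMART exposes them per drive (a sketch; the exact attribute name varies a little by vendor):

    # print the SMART attributes and pull out the temperature line(s)
    sudo smartctl -A /dev/sda | grep -i temp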

Moving to the rack is a slippery slope... You start with one rack-mounted server, and soon you're adding a disk shelf and setting up 10 gigabit networking between devices. Give yourself more drive bays than you need now, if you can, so you have expansion space and don't have to completely rearrange the rack 3 years later.

Also, if your budget can swing it, it's nice to keep older hardware around for testing. I leave my "critical" stuff running on one server now, so that a reboot while tinkering doesn't take down all the stuff running the house. That one only gets rebooted or has major changes made when it's not in use (and my wife isn't watching Plex). The stuff that doesn't quite need to be 24/7 gets tested on the other server, which is safe to reboot.

[-] BigDaddySlim@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

I see a lot of drives there, all presumably connected via SATA. If you're looking to add more drives in the future, I recommend a SAS card or two, specifically a Dell PERC H310 flashed in IT mode. I picked one up on eBay for $20 a while back, and it gives me connectivity for 8 drives. Also snag some mini-SAS-to-SATA cables to connect the drives.

I've got 44TB running in my Plex server using it and have had zero issues with the card. I even had a friend 3D print a fan housing and attached a small Noctua fan to the heatsink for peace of mind, to make sure the card doesn't overheat during large data transfers.

Edit: Like so

[-] ch00f@lemmy.world 1 points 8 months ago

That's interesting. I'm running software RAID since I've been warned that a dying RAID controller can make your data irretrievable unless you buy an exact replacement. I guess the enterprise folks have that figured out.

Having a little trouble finding details online. Do those two cables going off to the right split off into a bunch of SATA connections?

[-] Toribor@corndog.social 1 points 8 months ago

I use ZFS for this exact reason. I didn't want to be stuck using a specific controller or have problems if I needed to migrate my storage to another server. It's a lot more flexible than a hardware RAID too and has some nice benefits like snapshotting.
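
For reference, the rough ZFS equivalent of a six-drive RAID6, plus a snapshot and a controller-independent migration, looks something like this (a sketch with placeholder device names, not anyone's actual setup):

    # RAIDZ2 = two-disk redundancy, comparable to RAID6; by-id names
    # keep the pool stable across cabling/controller changes
    sudo zpool create tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

    # cheap point-in-time snapshot before risky changes
    sudo zfs snapshot tank@before-upgrade

    # the pool can be exported and re-imported on entirely different hardware
    sudo zpool export tank
    sudo zpool import tank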

[-] randomaside@lemmy.dbzer0.com 2 points 8 months ago

I think about this a lot and it really does depend on your needs.

Home lab vs. home server: I like to keep them separate, just because I consider my lab unstable and my home server stable. You don't have to do it this way; it's just the way I like it.

If you want to build a low-power NAS, I suggest investing in an Intel N100-based ITX NAS motherboard: https://a.co/d/6k6QpOD. You can then use a case like this one from Jonsbo: https://a.co/d/1ayqwJV. That makes for a nice, cool, and quiet solution. If you want to do video transcoding, the N100 has QuickSync on board, and with something like TrueNAS it's pretty easy to set up via the app catalog (check out TrueCharts).

If you want something even simpler (good for home users, or as a backup target you keep elsewhere), I've been meaning to grab one of these "Topton 2-Bay NAS R1 PRO 12th Gen Intel N100 Network Attached Storage Media Server" units from AliExpress for exactly this purpose.

As for a lab, I suggest finding a W680-chipset motherboard like the ASRock IMB-X1314 (LGA 1700, Intel W680). You can get a CPU like a 12400 or 12500T (lower power and less heat) used for cheap, and you have the option to upgrade and to use ECC memory without a Xeon. You also get a lot more PCI Express connectivity.

Whatever you do choose, anything pre-12th-gen Intel is basically e-waste (those 11th gen mobile Erying i9 engineering samples are very good, but less reliable than desired). Do not invest in any old X99-based gear (unless you get it for free). I have an old dual-Xeon system that is still running, and it uses power like a small fridge.

[-] smb@lemmy.ml 2 points 8 months ago

my 2 cents just in case...:

A RAID6 is not a replacement for a backup ;-) I use rdiff-backup, which is easy to use: it stores only one full backup, all increments reach into the past, and it's only possible to delete the oldest increments (AFAIK no "merging"). I never needed anything else. One backup should be off-site and another offline, synced once in a while manually. Make complete dumps (including triggers, etc.) of your databases before running the backup ;-)
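
For the curious, basic rdiff-backup usage looks roughly like this (a sketch using the classic CLI; all paths are examples):

    # dump databases first so the backup contains consistent files
    # (MySQL/MariaDB example; include routines and triggers)
    mysqldump --all-databases --routines --triggers > /srv/dumps/all.sql

    # mirror /srv to the backup target; increments accumulate automatically
    rdiff-backup /srv /mnt/backup/srv

    # restore the state as it was 10 days ago
    rdiff-backup -r 10D /mnt/backup/srv /tmp/restore

    # only the oldest increments can be dropped, e.g. older than one year
    rdiff-backup --remove-older-than 1Y /mnt/backup/srv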

I like to have a recreatable server setup: set it up manually, then put everything I did into Ansible, then try to recreate a "spare" server using Ansible and the backup, and test everything. That way you can be sure you have also "documented" your setup to a good degree.

For hardware I don't have many assumptions about performance (until it hits me), but an always-running in-house server had better save power (I learned this the costly way). It is possible to turn CPUs off and run on only one CPU at reduced frequency in times without performance needs. That could help a bit, and at least it feels good to do, while turning CPUs back on and setting a higher frequency is quick and can easily be scripted.
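
Scripted, that idea looks roughly like this (a sketch; which cores can be offlined and which governors exist depends on your hardware):

    # take a core offline while the box is idle...
    echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online

    # ...and bring it back when you need the performance again
    echo 1 | sudo tee /sys/devices/system/cpu/cpu3/online

    # drop the frequency governor to powersave (or back to performance)
    echo powersave | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor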

Hard drives: make sure you buy 24/7-rated ones; they are usually way more hassle-free than the consumer grades and likely "only" cost double the price. I would always place the system on an SSD, but always as RAID1 (not RAID6), where the "other" half of the mirror could be a magnetic drive set to write-mostly.
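
The write-mostly trick is an mdadm feature; here's a sketch of such a mixed SSD/HDD mirror (device names are placeholders):

    # RAID1 system array: SSD first, HDD flagged write-mostly so that
    # reads are served from the SSD whenever possible
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda2 --write-mostly /dev/sdb2

    # the flag can also be toggled later through sysfs
    echo writemostly | sudo tee /sys/block/md0/md/dev-sdb2/state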

As I don't buy "server" hardware for my home server, I always buy the components twice when I change something, so that I have the spare parts ready at hand when I need them. Running a server for 5+ years often ends in not being able to buy the same parts again, and then you first have to search for what you want, order, test, maybe send it back because it doesn't fit... Unstable memory? Mainboard sending smoke signals? With spare parts at hand, it's a matter of minutes! The only thing I'm missing with my consumer-grade home server hardware is ECC RAM :-/

For cooling I like to use a 12cm fan powered with 5V (instead of the 12V it wants) so that it runs smoothly slow and nearly as silent as passive-only cooling, but heat doesn't build up in the summer. Don't forget to clean out the dust once in a while... I never had a 5V-powered 12V 12cm fan with bearing problems, and I think one of them ran for over a decade. I suspect the 12V fans even last longer on 5V, but no warranty from me ;-)

Even with a headless box I like to have a quick way to get to a console in case the network isn't working. I once used a serial cable and my notebook, then a small monitor/keyboard; now I use PiKVM and can reach my servers' physical consoles from my mobile phone (it would need an SSL client certificate and TOTP to do so), but this involves the network, I know XD

You likely want SMART monitoring, and once in a while run memtest.
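
smartd (part of smartmontools) can handle the SMART part unattended; a minimal sketch of a config, assuming you want mail to root on trouble:

    # monitor all drives, short self-test daily at 02:00,
    # long self-test Saturdays at 03:00, mail root on failures
    echo 'DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m root' \
        | sudo tee -a /etc/smartd.conf

    sudo systemctl enable --now smartd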

For servers I also like to have some monitoring that can push a message to my phone for foreseeable conditions that I'd like to handle manually.
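
One simple way to do the push part (a sketch assuming something like ntfy.sh, self-hosted or the public instance, with a made-up topic name):

    # example check: warn when the root filesystem passes 90% usage
    usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
    if [ "$usage" -gt 90 ]; then
        curl -d "root fs at ${usage}% on $(hostname)" ntfy.sh/my-server-alerts
    fi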

debsums, logcheck, logwatch, and fail2ban are also worth looking at, depending on what you want.
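
A quick taste of what two of those give you (Debian/Ubuntu package names):

    # verify installed package files against their recorded checksums;
    # -s is silent mode, so only changed files are reported
    sudo debsums -s

    # see which jails fail2ban is running and who is currently banned
    sudo fail2ban-client status
    sudo fail2ban-client status sshd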

Also, after updating packages, have a look at lsof | egrep "DEL|deleted" to see which programs need a simple restart to actually use the updated libraries. That way, reboots are only needed for newer kernels.
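
Spelled out, that check plus the follow-up looks like this (the service name is just an example; the needrestart package automates the same idea):

    # find processes still mapping deleted (i.e. replaced) libraries
    sudo lsof | egrep "DEL|deleted"

    # then restart only the affected services, e.g.
    sudo systemctl restart nginx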

OK, this is more than 2 cents, maybe 5. Never mind.

Hope these ideas help a bit!

[-] ch00f@lemmy.world 1 points 8 months ago

Yeah, I keep an offline backup, refreshed every year, in a fireproof safe in my basement. Might open a safe deposit box at some point, but I feel reasonably safe.

Good call on power efficiency. I'll have to keep that in mind. I think I'm currently drawing around 100W which is mostly the hard drives (the CPU doesn't even need a fan). I assume that might go up a bit in a new build, but I think the benefits will be worth it.

[-] LukyJay@lemmy.world 1 points 6 months ago

Get rid of that Molex-to-SATA adapter; they catch fire. Molex to SATA = lose your data.

[-] 7heo@lemmy.ml 1 points 8 months ago