this post was submitted on 24 Sep 2025
152 points (95.2% liked)

Selfhosted


Curious to know what the experience has been for those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers, Docker, Podman, virtual machines, etc. What keeps you on bare metal in 2025?

[–] jet@hackertalks.com 1 points 1 day ago

KISS

The more complicated the machine, the more chances for failure.

Remote management plus bare metal just works: it's very simple, and you get the maximum out of the hardware.

Depending on your use case, that can be very important.

[–] ZiemekZ@lemmy.world 13 points 5 days ago (5 children)

I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn't it be simpler if I could just run sudo apt install immich vaultwarden, the same way I can run sudo apt install qbittorrent-nox today? I don't think there's anything that prohibits them from running on the same bare metal; in fact, I think they'd both run as well as they do in Docker (if not better, given the lack of overhead)!

[–] erock@lemmy.ml 2 points 4 days ago

Here’s my homelab journey: https://bower.sh/homelab

Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don't support being split across VMs. At the end of the day it's a bunch of tinkering, which is valuable if that's your goal. I learned what I wanted; now I'm back to Arch, running everything with systemd and Quadlet.
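For anyone curious, a Quadlet unit is just an ini-style file that Podman turns into a generated systemd service. A minimal sketch (the service name, image, port, and data path are only examples):

```ini
# ~/.config/containers/systemd/vaultwarden.container (hypothetical example)
[Unit]
Description=Vaultwarden via Podman Quadlet

[Container]
Image=docker.io/vaultwarden/server:latest
PublishPort=8080:80
Volume=%h/vaultwarden-data:/data

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload, it shows up as vaultwarden.service like any other unit.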

[–] zod000@lemmy.dbzer0.com 18 points 6 days ago* (last edited 6 days ago) (3 children)

Why would I want to add overhead and complexity to my system when I don't need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don't see a benefit to doing so at home.

[–] fubarx@lemmy.world 18 points 6 days ago

Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean reinstalls, right down to the bootloader.

The only constant is change.

[–] splendoruranium@infosec.pub 14 points 6 days ago

Curious to know what the experience has been for those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers, Docker, Podman, virtual machines, etc. What keeps you on bare metal in 2025?

If it aint broke, don't fix it 🤷

[–] sem@lemmy.blahaj.zone 10 points 6 days ago (1 children)

For me, the learning curve of containers doesn't match the value proposition of the benefits they're supposed to provide.

[–] billwashere@lemmy.world 11 points 6 days ago (1 children)

I really thought the same thing. But it truly is super easy, at least for plain containers with Docker. Not Kubernetes; that shit is hard to wrap your head around.

Plus, if you screw up one service and mess everything up, you don't have to rebuild your whole machine.

[–] dogs0n@sh.itjust.works 5 points 5 days ago

100% agree. My server has pretty much nothing except Docker installed on it, and every service I run is in a container.

Setting up a new service carries almost zero risk, and apps can't bog down my main file system with random log files, configs, etc. that feel impossible to completely remove.

I also know that if for any reason my server were to explode, all I'd have to do is pull my compose files from the cloud and docker compose up everything, and I'd be exactly where I left off at my last backup point.
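A minimal sketch of that recovery flow (the repo URL and paths are hypothetical, and the data directories get restored from the backup first):

```sh
# fetch the compose files from wherever they're synced (hypothetical repo)
git clone https://git.example.com/me/homelab.git && cd homelab

# restore the bind-mounted data folders from the latest backup, then
# recreate every stack exactly as it was
docker compose up -d
```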

[–] billwashere@lemmy.world 9 points 6 days ago* (last edited 4 days ago)

OK, I'm arguing for containers/VMs, and granted, I do this for a living… I'm a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that can run lots of different things is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much more easily.

Just having all these things sandboxed makes it SO much easier.

[–] sj_zero@lotide.fbxl.net 6 points 5 days ago (1 children)

I'm using Proxmox now with lots of LXC containers. Prior to that, I used bare metal.

VMs were never really an option for me because the overhead is too high for the low-power machines I use -- my entire empire of dirt doesn't have any fans; it's all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.

Stuff like Docker I didn't like because it never really felt like I was in control of my own system. I was downloading a thing someone else made, and it really wasn't intended for tinkering or anything. You aren't supposed to build from source in Docker, as far as I can tell.

The nice thing about Proxmox's LXC implementation is that I can hop in and change or fix things as I desire. It's all very intuitive, and I can still separate things out, run them where I want to, and not have to worry about keeping 15 different services running on the same version of whatever common dependencies they require.

[–] boonhet@sopuli.xyz 4 points 5 days ago

Actually, Docker is excellent for building from source. Some projects only ship instructions for building in Docker, because it's easier to make sure everyone has tested versions of the tools.
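The usual pattern is a multi-stage build: compile inside a container with a pinned toolchain, then ship only the artifact. A minimal sketch, assuming a hypothetical Go project:

```dockerfile
# build stage: pinned compiler version, reproducible for anyone who clones the repo
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN mkdir -p /out && go build -o /out/app .

# runtime stage: a slim image that ships only the compiled binary
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]
```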

[–] missfrizzle@discuss.tchncs.de 14 points 6 days ago (1 children)

pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

and even that's overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

/uj not really but that'd be sick as hell.

[–] nuggie_ss 4 points 5 days ago

Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.

[–] Smokeydope@lemmy.world 6 points 6 days ago* (last edited 6 days ago) (1 children)

I'm a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux, installing from the apt package manager.

As I'm getting more serious, I'm starting to take another look at Docker. Unfortunately, my OS package manager only has old, outdated versions of Docker; I may need to reinstall with something like an Ubuntu/Debian LTS server edition with more cutting-edge software in the repos. I don't care much for building from scratch and navigating dependency roulette.

[–] BrianTheFirst@lemmy.world 4 points 5 days ago (3 children)

I guess it isn't the most user-friendly process, but you can add the official Docker repo and get an up-to-date version without compiling anything. You just want to make sure to uninstall any previously installed Docker packages before you start.

https://linuxiac.com/how-to-install-docker-on-linux-mint-22/
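The gist of that guide, following Docker's own documented install steps for Ubuntu-based distros (on Mint 22 the Ubuntu base codename is noble; substitute your release's base):

```sh
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# "noble" below is Mint 22's Ubuntu base; adjust for your release
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```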

[–] HiTekRedNek@lemmy.world 7 points 6 days ago (1 children)

In my own experience, certain things should always be on their own dedicated machines.

My primary router/firewall is on bare metal for this very reason.

I do not want my home network to be completely unusable for the rest of my family because I decided to tweak something on the server.

I could quite easily run OPNsense in a VM, and I do that too: I run Proxmox and have OPNsense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OPNsense setup does not, as I only had one spare interface on that box, so only devices on the primary network would work.)

And tbh, that VM only exists because I did have a router die and installed OPNsense on my Proxmox server temporarily while awaiting new-to-me equipment.

I didn't see a point in removing it. So it's there, just not started automatically.

[–] AA5B@lemmy.world 5 points 6 days ago* (last edited 6 days ago) (1 children)

Same here. In particular, I like small cheap hardware to act as appliances, and I have several Raspberry Pis.

My example is Home Assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier. It's actually running containers underneath, but I don't have to deal with that. It also needs to be always available, so I use efficient, right-sized hardware, and it keeps working regardless of whether I'm futzing with my "lab".

[–] Damage@feddit.it 3 points 6 days ago (1 children)

My example is Home Assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier.

If you're talking about backups and updates for add-ons and core, that works on VMs as well.

[–] AA5B@lemmy.world 4 points 6 days ago

For my use case, I'm continually fiddling with my VM config. That's my playground, not just the services hosted there. I want Home Assistant to always be available, so it can't live there.

I suppose I could have a "production" VM server that I keep stable, separate from my "dev" VM server, but that would be more effort. Maybe it's simply that I don't have many services I want to treat as production, so dedicated physical hardware is the cheapest and easiest option.

[–] DarkMetatron@feddit.org 3 points 5 days ago

My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling-release distribution, there was never a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine for the next 10+ years too.

Well, I am planning, when I find the time to research a good successor, to replace the aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up cleanly and migrate the services to Docker/Podman/whatever is fancy then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short...

[–] yessikg@fedia.io 5 points 6 days ago

It's so simple that it takes so much less time. One day I may move to Podman, but I need to have the time to learn it. I host Jellyfin.

[–] OnfireNFS@lemmy.world 2 points 5 days ago

This reminds me of a question I saw a couple of years ago: why would you stick with bare metal over running Proxmox with a single VM?

It kinda stuck with me, and since then I've reimaged some of my bare-metal servers with exactly that. It just makes backups and restores/snapshots so much easier. It's also really convenient to have a web interface to manage the computer.

Probably doesn't work for everyone, but it works for me.

[–] Evotech@lemmy.world 4 points 6 days ago

It's just another system to maintain, another link in the chain that can fail.

I run all my services on my personal gaming pc.

[–] medem@lemmy.wtf 5 points 6 days ago

The fact that I bought all my machines used (and mostly on sale), and that not one of them is general-purpose; id est, I bought each piece of hardware with a (more or less) concrete idea of its use case. For example, the machine acting as my file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on another. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.

[–] kossa@feddit.org 2 points 5 days ago

Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe), so it is basically a legacy thing.

My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

[–] pedro@lemmy.dbzer0.com 4 points 6 days ago* (last edited 6 days ago) (3 children)

I've not cracked the Docker nut yet. I don't get how to back up my containers and their data. I'd also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux but haven't figured out these two things yet.

An easy option is to map the data folders for the container you're using as volumes pointing to local folders. Then the container just puts its files there, and you can back up the folders. Restoring is just putting the files back and setting the same volume mapping, so the container sees them right where it expects.

You can also use the same method to access the DB directory for the migration. For databases, you typically want to make sure the container is stopped before doing anything with those files.
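As a concrete sketch (service name, image, and paths are just examples), a bind-mounted volume in a compose file looks like this; everything the app writes to /config lands in ./plex-config on the host, and that folder is what you back up:

```yaml
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    volumes:
      - ./plex-config:/config   # database and settings live here on the host
      - ./media:/media:ro       # media mounted read-only
    restart: unless-stopped
```

Run docker compose stop plex before copying the database folder, per the note above.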

[–] Passerby6497@lemmy.world 3 points 6 days ago* (last edited 6 days ago)

All your Docker data can be saved to a mapped local disk; then backup is the same as it ever was. Throw Borg or something at it and you're gold.

Look into Docker Compose and volumes to get an idea of where to start.
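A minimal sketch of the Borg loop, with hypothetical repo and data paths:

```sh
borg init --encryption=repokey /mnt/backup/docker-repo               # one-time repo setup
borg create /mnt/backup/docker-repo::data-{now} /srv/docker/data     # snapshot the mapped folders
borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/docker-repo    # thin out old archives
```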

[–] lka1988@lemmy.dbzer0.com 4 points 6 days ago (3 children)

I run my NAS and Home Assistant on bare metal.

  • NAS: OMV on a Mac mini with a separate drive enclosure
  • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB Zigbee adapter and 2) HAOS on bare metal is more flexible

Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it's Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.
