this post was submitted on 19 Aug 2025
39 points (97.6% liked)

Selfhosted

Hi everyone, I've been working on my homelab for a year and a half now, and I've tested several approaches to managing NAS and selfhosted applications. My current setup is an old desktop computer that boots into Proxmox, which has two VMs:

  • TrueNAS Scale: manages storage, shares and replication.
  • Debian 12 w/ docker: for all of my selfhosted applications.

The applications connect to TrueNAS's storage via NFS. I have two identical HDDs in a mirror, another with no redundancy (which is fine, because its data is non-critical), and an external HDD that I want to use for replication, or some other use I still haven't decided on.

Now, the issue: TrueNAS flags the HDDs as Unhealthy and reports checksum errors. It also turns out that it can't run S.M.A.R.T. checks, because instead of using an HBA, I'm passing the entire HDDs by ID to the VM. I've read recently that passing virtualized disks to TrueNAS is discouraged, as data corruption can occur. Lately I was also having trouble with a self-hosted Gitea instance, where data (apparently) got corrupted and git threw errors on fetch or pull. I don't know if that's related or not.
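Since the disks are passed through by ID, SMART data is only readable from the Proxmox host, not inside the VM. A quick way to check them from the host (the device name below is just a placeholder; list yours first):

```shell
# Run on the Proxmox host, not inside the TrueNAS VM.
# "/dev/sda" is a placeholder; find your real disks with `ls -l /dev/disk/by-id/`.
DISK="/dev/sda"
# Overall health verdict:
#   smartctl -H "${DISK}"
# Full attribute dump (watch Reallocated_Sector_Ct and Current_Pending_Sector):
#   smartctl -a "${DISK}"
# Start an extended self-test in the background:
#   smartctl -t long "${DISK}"
```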

Now the thing is, I have a very limited budget, so I'm not keen on buying a dedicated HBA on a hunch. Is it really needed?

I mean, I know I could run TrueNAS directly instead of using Proxmox, but I've found TrueNAS to be a pretty poor hypervisor (IMHO) in the past.

My main goal is to be able to manage the data used by my self-hosted applications separately. For example, I want to be able to access Nextcloud's files even if the Docker instance is broken. But maybe this is just an irrational fear, and I should instead back up the entire Docker instances and hope for the best, or maybe I'm just misunderstanding how this works.

In any case, I have some data that I want to store and reliably archive, and I don't want the Docker apps to have too much control over it. That's why I went with the current approach. It also allows very granular control, but it's a bit more cumbersome: every time I want to self-host a new app, I need to configure datasets, permissions, and NFS mounts.

Is there a simpler approach to all this? Or should I just buy an HBA and continue with things as they are? If so, which one should I buy (considering a very limited budget)?

I'm thankful for any advice you can give and for your time. Have a nice day!

[–] MangoPenguin@lemmy.blahaj.zone 6 points 1 month ago (2 children)

Proxmox supports ZFS natively with management in the WebUI. So you could get rid of TrueNAS entirely and not need to deal with HBA pass-through or anything.

You also wouldn't need NFS or have to deal with shares, as the data is available directly to Proxmox Containers via bind mounts.
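As a sketch (the pool name "tank" and container ID 101 are just examples), that setup looks like:

```shell
# Sketch only; "tank" and container ID 101 are example names.
POOL="tank"
CTID="101"
# Create a dataset for the app's data on the Proxmox host:
#   zfs create "${POOL}/appdata"
# Bind-mount it into the LXC container (mp0 = first mount point slot):
#   pct set "${CTID}" -mp0 "/${POOL}/appdata,mp=/mnt/appdata"
```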

[–] thelemonalex@lemmy.world 1 points 1 month ago (1 children)

Okay, if Proxmox can handle all that, I'll be glad to ditch TrueNAS. However, I'm afraid that I won't know how to migrate. I've found this reddit thread about someone who tried to do the same thing (I think) and accidentally corrupted their pools. About skipping NFS shares, that would be a big improvement for me, but I'm very unfamiliar with bind mounts. If I understand correctly, you can specify directories that live on the Proxmox Host, and they appear inside the VM, right? How does this compare to using virtual storage? Also, how can I replicate the ZFS pools to an external machine? In any case, thank you for that info!

[–] MangoPenguin@lemmy.blahaj.zone 3 points 1 month ago* (last edited 1 month ago) (2 children)

Migration should be as simple as importing the existing ZFS stuff into the Proxmox OS. Having backups of important data is critical though.
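As a rough outline (pool name "tank" is an example, and again: backups first):

```shell
# Migration sketch; "tank" is an example pool name. Have backups before this.
POOL="tank"
# 1. Export the pool cleanly from TrueNAS (Storage -> Export/Disconnect,
#    WITHOUT wiping the disks), or from its shell:
#      zpool export "${POOL}"
# 2. Shut down the TrueNAS VM and remove its disk passthrough entries.
# 3. On the Proxmox host, scan for importable pools, then import:
#      zpool import
#      zpool import "${POOL}"
# 4. Verify everything arrived intact:
#      zpool status "${POOL}"
#      zfs list -r "${POOL}"
```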

If I understand correctly, you can specify directories that live on the Proxmox Host, and they appear inside the VM, right?

Inside a Container, yep. VMs can't do bind mounts, and would need to use NFS to share existing data from the host to inside the VM.

How does this compare to using virtual storage?

Like a VM virtual disk? Those are exclusive to each VM and can't be shared, so if you want multiple VMs to access the same data then NFS would be needed.

But containers with bind mounts don't have that limitation and multiple containers can access the same data (such as media).

Also, how can I replicate the ZFS pools to an external machine?

ZFS replication would do that.
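For example (dataset, host, and target names below are all placeholders):

```shell
# Replication sketch via zfs send/recv over SSH; all names are placeholders.
SRC="tank/data"
SNAP="${SRC}@repl-1"
DEST="backup-host"
# Take a snapshot, then send it to a pool named "backup" on the remote machine:
#   zfs snapshot "${SNAP}"
#   zfs send "${SNAP}" | ssh "${DEST}" zfs recv -F backup/data
# Later runs can send only the delta between two snapshots (incremental):
#   zfs send -i "${SRC}@repl-1" "${SRC}@repl-2" | ssh "${DEST}" zfs recv backup/data
```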

[–] grue@lemmy.world 2 points 1 month ago (1 children)

Like a VM virtual disk? Those are exclusive to each VM and can’t be shared, so if you want multiple VMs to access the same data then NFS would be needed.

But containers with bind mounts don’t have that limitation and multiple containers can access the same data (such as media).

Just to be clear, are you saying that when you’re using bind-mounted ZFS pools, it's okay to write from two containers (or both the proxmox host and a container) at the same time?

Also, I think I managed to accomplish that for a VM by creating a Proxmox Directory pointing to a path in a zpool, adding it to the VM using virtiofs, and mounting it within the VM. I'm not sure if writes from both the VM and the host are safe in that case either, though.
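For reference, the guest-side mount for that looks something like this (the tag "share0" is a placeholder for whatever you assigned in the VM config):

```shell
# Inside the VM: mount a virtiofs share by its tag ("share0" is an example).
TAG="share0"
#   mkdir -p /mnt/shared
#   mount -t virtiofs "${TAG}" /mnt/shared
```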

[–] MangoPenguin@lemmy.blahaj.zone 2 points 1 month ago

You wouldn't want to write to the same file at the same time, but otherwise it works fine.

[–] thelemonalex@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

Okay, one of my pools is pretty empty and holds non-critical data, so I think I'll try migrating that one first and see if Proxmox imports it correctly.

About Containers, I'll have to do some more research, because I don't think I've fully understood how they compare to VMs, or when to use one over the other. I guess I could have a Container with a bind mount to a dataset that I want to share over NFS or SMB, and handle that from whatever OS I put in the Container, right? But I could also have a VM do that; even though it couldn't share the data with other VMs directly, it could still do it over NFS, couldn't it? What are the advantages of one over the other?

Well, in any case, thank you for your patience, for going over each detail and taking the time to correct me where I'm wrong. I'm learning a lot, so thank you!

Edit: fixing grammar

[–] MangoPenguin@lemmy.blahaj.zone 2 points 1 month ago (1 children)

Essentially a container shares the kernel of the host, so uses less resources to run.

VMs are useful when you need more isolation or a different kernel (or need to add kernel modules).

For most purposes containers are the easy option.

I guess I could have a Container with a bind mount to a dataset that I want to be able to share over NFS or SMB, and handle that from whatever OS I put in the Container, right?

Yep!

But, I could also have a VM do that, and though it wouldn’t be able to share the data with other VMs, it can do it over NFS, can’t it?

Also yes, just a more complex setup with more performance penalty due to using NFS to share data into the VM.
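A minimal sketch of the container-side NFS export (path and subnet are examples; note that serving NFS from an unprivileged LXC may need extra host-side configuration):

```shell
# Inside the container (Debian assumed); path and subnet are examples.
EXPORT_PATH="/mnt/appdata"
SUBNET="192.168.1.0/24"
#   apt install nfs-kernel-server
#   echo "${EXPORT_PATH} ${SUBNET}(rw,sync,no_subtree_check)" >> /etc/exports
#   exportfs -ra
```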

[–] thelemonalex@lemmy.world 2 points 2 weeks ago

I see, okay, I'll try out containers then. So far, I've been able to migrate a ZFS Pool without issues, so I'll start migrating them all, create a container that manages NFS and see if the existing Docker VM picks up the NFS shares successfully. Thank you for going in-depth and explaining everything to me. I've learnt a lot!