As I'm at the point of expanding beyond my current four servers, how should I handle storing the OS for each one?
Problem: I'm on a tight budget, and I'm looking for a way to add more servers without buying and maintaining new boot SSDs.
The current setup is that each server has its own 120/240GB SSD to boot from, and one of my servers is a NAS.
At first I thought of persistent PXE boot, but one of the problems is how I would assign each machine its own image.
I've found a post about diskless persistent PXE, but it's 5 years old and talks about SAN booting. Most people I've seen in this sub are against Fibre Channel, so presumably there's a better way?
Setting aside speed requirements (like an all-flash NAS or 10+ Gbit networking): is it possible to add more servers without purchasing a dedicated boot device for each one?
If so, how? Are you loading a configuration from a device plugged into each hypervisor server? Any projects I should read up on?
The servers use their built-in NICs' PXE to chainload iPXE (I still haven't figured out how to flash iPXE directly to a NIC), and then iPXE loads a boot script that boots from NFS.
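For reference, here's a minimal sketch of what such an iPXE boot script can look like. The IP addresses and paths are placeholders, not my actual setup; the `${net0/mac}` variable is a real iPXE feature that lets you point each machine at its own NFS root based on its MAC address, which solves the "one image per machine" problem:

```
#!ipxe
# Get an IP on the first NIC
dhcp

# Load a kernel and initrd over HTTP (server IP and paths are examples),
# telling the kernel to mount its root filesystem over NFS.
# ${net0/mac} expands to this machine's MAC address, so each server
# gets its own per-machine root export.
kernel http://192.168.1.10/boot/vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/srv/nfs/${net0/mac} ip=dhcp rw
initrd http://192.168.1.10/boot/initrd.img
boot
```

The initrd needs NFS-root support built in (on Debian, roughly `BOOT=nfs` in `initramfs-tools` before regenerating it), which the guides below cover in more detail.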
Here is the most up-to-date version of the guide I used to learn NFS booting: https://www.server-world.info/en/note?os=CentOS_Stream_9&p=pxe&f=5 - note that it's for CentOS, so you'll probably need to do a little more digging to find one for Debian (which is what Proxmox is built on).
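On the NAS side, the core of it is just exporting one root filesystem per server. A sketch of what the `/etc/exports` on the NAS might look like (paths, hostnames, and client IPs are all made-up examples):

```
# /etc/exports on the NAS -- one root filesystem export per diskless server
/srv/nfs/server1  192.168.1.21(rw,no_root_squash,async,no_subtree_check)
/srv/nfs/server2  192.168.1.22(rw,no_root_squash,async,no_subtree_check)
```

After editing, `exportfs -ra` reloads the exports. `no_root_squash` matters here because the client's root user needs to own system files on its root filesystem.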
iPXE is the other key component: https://ipxe.org/
It's worth pointing out that this was a steep learning curve for me, but I found it super worth it in the end. I have a pair of redundant identical servers that act as the "core" of my homelab, and everything else stores its shit on them.