imho it's going to give you more trouble than just a direct upgrade. You will end up with a "temporary" server that lasts for years
I'm not afraid of that risk; I just want to mitigate the possibility of a service outage lasting longer than expected (I hope to refresh the server in 1 day, but as with everything new, it can take longer).
What do you mean by "direct upgrade"? I want every modification made to the OS on the refreshed build to be done through an Ansible playbook, so my assumption is that I have to purge / start fresh from the new SSD in that case.
Replace the drive and start fresh, but either way, because they're docker containers you can mount the old drive and temporarily run them again.
how many devices do you need to update?
ansible wants to have a home base and an inventory of devices to manage. for example, if you have a flock of Raspberry Pi's and a server stashed under a desk somewhere, yes, ansible is 100% going to simplify your life.
ansible mgmt from a device to that same device... it might be just as easy to make backups and track your file deltas. the temptation is to use ansible so you remember what changes you made, but it can be a pain when you need to make a quick change and have to go through the playbook (unless you have playbooks at the ready).
I've got a personal laptop + the mentioned Dell Wyse 5070. In the near future (months) I'm thinking about extending to another home server client.
I know using ansible in that scenario will be somewhat harder than direct ssh, but I mainly want to learn the process (for future work possibilities) and have that extended control over the changes made to the bare OS.
Awesome, go for it! ansible (more or less) is directed ssh. inventory, roles, playbooks + templates, etc.; for learning, definitely go for it! if you were to roll your own automation framework, you'd end up w/ansible.
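If it helps to see the shape of it, here's a minimal sketch of an inventory plus playbook (the hostname, IP, user, and package names are placeholders I made up, not anything from your setup):

```yaml
# inventory.ini (placeholder host/IP/user):
#   [homelab]
#   wyse5070 ansible_host=192.168.1.50 ansible_user=admin
#
# Run with: ansible-playbook -i inventory.ini site.yml

# site.yml -- a minimal playbook
- name: Baseline setup for the home server
  hosts: homelab
  become: true
  tasks:
    - name: Install docker (Debian/Ubuntu package names assumed)
      ansible.builtin.apt:
        name:
          - docker.io
          - docker-compose
        update_cache: true
        state: present

    - name: Ensure docker is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```

Everything you'd otherwise do by hand over ssh becomes a task like those, which is exactly the change-tracking you're after.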
If this is a Linux based OS, a reinstall is rarely needed. There are many ways to migrate it from an old drive to a new one. One is cloning the old one to the new and expanding the main partition to occupy the extra space. A cleverer way would be to move it to LVM, so that next time you'll have options for expanding by adding more drives. If the new drive is double the size, you could clone, boot from the new one, then set up LVM in the empty space, migrate the OS to it, boot from that, and finally reclaim the cloned space and add it to the LVM.
There's nothing particularly wrong with running it in a VM as you suggested either. Or you could build the new install in a VM and then move it onto the server once it's ready.
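For the simple clone-and-expand route, the rough shape of it is below (a sketch only, run from a live USB with both drives attached; /dev/sda, /dev/sdb, the partition number, and ext4 are assumptions you'd verify with lsblk first):

```sh
# DESTRUCTIVE on the target disk -- double-check device names with lsblk first
dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync   # clone old SSD onto new SSD

growpart /dev/sdb 2    # grow partition 2 to fill the larger disk (from cloud-guest-utils)
e2fsck -f /dev/sdb2    # check the filesystem before resizing
resize2fs /dev/sdb2    # expand the ext4 filesystem to the new partition size
```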
what you are attempting is called high availability; it might not be worth it. usually you'd need three different physical devices (in a homelab situation): a load balancer to route traffic, and two nodes to handle said traffic. to perform your storage upgrade, you pull one node out of the load balancer, do your upgrade, and then add it back in. then you do the same for the other node. this would give you 100% service availability...but it's a lot of work for a one-person show!
do that for fun - you do you. however, if you can handle a few hours of downtime and don't want to burden yourself with the long-term care+feeding the above setup will require...
remember you can use USB boot, mount both your drives, and then if you are lucky, your distro (on USB) will have a disk management/cloning utility.
click click click, boom...you have a bit-perfect copy of the small M.2 on the large M.2.
Do not change your small M.2! power down, swap 'em, and power on! if it doesn't work, you still have your OG M.2 to boot from.
there are backup/restore utilities and other ways, each taking more and more time...but M.2 is pretty quick.
Sounds like it would be easier to run your VM on the laptop, leave the SSD in the 5070, and move each service over to the laptop one at a time. Then nuke and repave the 5070 with the upgraded drive, and then move the services back.
Ansible is great, but I'd leave learning that as a separate project in the future. Convert to docker compose as part of this process if you're not already doing that.
Moving services one by one could be a solution but would require additional networking - currently I've got Tailscale domains for the mentioned critical ones (Nextcloud, Matrix), and I thought running from the same storage could keep the same Tailscale node hostnames (if that's even possible). What do you think about that issue?
Currently I've got all of the docker containers defined through Portainer Stacks, so I could easily convert them to plain docker compose files.
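(Portainer Stacks are already compose files underneath, so each one maps onto something like the sketch below - image, ports, and volume paths here are made-up placeholders, just to show the shape:)

```yaml
# docker-compose.yml -- hypothetical example for one service
services:
  nextcloud:
    image: nextcloud:stable
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - /srv/nextcloud/data:/var/www/html   # bind mount so the data survives a rebuild
```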
I'm not clear on how your tailscale names are attached to the services. Do you mean you've got a different Tailscale MagicDNS name for each docker container via a sidecar?
I'm not a Tailscale expert; all my services are in VMs or LXCs, so they get their own Tailscale name that moves with them. Perhaps Tailscale allows you to add extra names for the same host or something?
Exactly, I've got a few containerized Tailscale daemons running as sidecars. Those should register as the same hosts (login based) as on the original home server. I'm mostly unsure how the whole "VM environment" with the attached drive and the whole docker engine install will perform - also in terms of connectivity.
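Roughly, a sidecar setup like mine looks something like this in compose (names, paths, and the auth key variable are placeholders, not my actual config; the state volume is what lets the node keep its identity if the containers move to another machine):

```yaml
# Hypothetical sketch of a Tailscale sidecar sharing its network namespace with a service
services:
  tailscale-nextcloud:
    image: tailscale/tailscale:latest
    hostname: nextcloud            # becomes the node's MagicDNS name
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}   # pre-generated auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./tailscale-state:/var/lib/tailscale   # node identity lives here -- keep it to keep the name
    devices:
      - /dev/net/tun
    cap_add:
      - NET_ADMIN

  nextcloud:
    image: nextcloud:stable
    network_mode: service:tailscale-nextcloud   # share the sidecar's network namespace
```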
So their names would come across with them. In what I'm proposing, you wouldn't worry about attaching the drive. Just copy the data for one service over, then start its container on the laptop. Once that's all working fine, do the rest one at a time till they're all on the laptop. Then wipe your Dell and start from scratch.
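Per service, that's basically just a copy plus a compose up, something like this (paths and the "laptop" hostname are placeholders, and it assumes bind-mounted volumes as in the compose example above):

```sh
# On the old server: stop the service so its data is consistent, then copy it over
docker compose -f nextcloud/docker-compose.yml down
rsync -a /srv/nextcloud/ laptop:/srv/nextcloud/            # data, including any tailscale-state dir
rsync -a nextcloud/docker-compose.yml laptop:~/nextcloud/

# On the laptop: bring the service (and its Tailscale sidecar) back up
docker compose -f ~/nextcloud/docker-compose.yml up -d
```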
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| LXC | Linux Containers |
| SSD | Solid State Drive mass storage |
3 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.