I use Duplicati to back up to a secure off-site location. Useful for something like Vaultwarden.
Most of mine are lightweight, so private Git repos.
For big data I have two NAS that sync on the daily.
As others said, use volume mounts, and I incrementally back those up with Borg to minimize storage space requirements.
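For anyone new to Borg, roughly what that incremental run looks like (the bind-mount path and repo location are made up, and the repo needs a one-time `borg init` first):

```bash
# Assumed layout: volumes bind-mounted under /srv/docker,
# Borg repo already initialized at /mnt/backup/borg.
export BORG_REPO=/mnt/backup/borg

# Each archive only stores chunks not already in the repo
# (deduplication), so daily runs stay small.
borg create --stats --compression zstd \
    ::'volumes-{now:%Y-%m-%d}' \
    /srv/docker
```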
Almost everything I run is a Docker container, so I made /var/lib/docker a btrfs subvolume, and a daily cron job takes incremental snapshots and copies them to a secondary disk (also btrfs, using btrbk). Since they're btrfs snapshots they don't use much disk space, and if I really need to roll back an entire day, I can.
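A rough sketch of what btrbk automates there, with assumed mount points:

```bash
# Read-only snapshot of the docker subvolume (cheap, copy-on-write)
SNAP=/snapshots/docker-$(date +%F)
btrfs subvolume snapshot -r /var/lib/docker "$SNAP"

# Ship it to the secondary btrfs disk; once a previous snapshot
# exists you'd add -p <parent> so only the differences are sent.
btrfs send "$SNAP" | btrfs receive /mnt/backup/snapshots
```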
I use Docker in Proxmox and back up all the containers.
I use an Ubuntu VM for all my containers in Proxmox and make backups of the VM onto my ZFS pool.
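For reference, the built-in Proxmox way to do that from the shell is vzdump (the VM ID and storage name here are placeholders; the same job can be scheduled from the Datacenter > Backup GUI):

```bash
# Snapshot-mode backup of VM 100 to a storage called "nas-zfs"
vzdump 100 --storage nas-zfs --mode snapshot --compress zstd
```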
Uuuh... Timeshift and Borg??
Hey that is the plot to First Contact.
Proxmox Backup Server (PBS) snapshotting all my VMs / LXCs.
For external VPSes and anything that can't run the PBS client, I rsync important data into my home network first, then do a file-based backup of that data to PBS via the proxmox-backup-client tool. All of this is automated through cron jobs.
Those backups then get synced to a second datastore for a bit of redundancy.
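A sketch of the VPS leg of that, with assumed hosts and paths:

```bash
#!/bin/sh
# Pull the important data home first...
rsync -aH --delete vps.example.com:/srv/important/ /srv/staging/vps/

# ...then push it into PBS with the client tool. Repository format
# is user@realm@host:datastore; the password comes from PBS_PASSWORD
# or an API token in the environment.
export PBS_REPOSITORY='backup@pbs@pbs.lan:homelab'
proxmox-backup-client backup vps.pxar:/srv/staging/vps
```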
Unraid with Duplicacy and Appdata Backup, incremental to Backblaze.
Cron jobs to back up important folders to a separate disk.
Git repo(s) for services & configs, with weekly automated commits and pushes.
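A crontab sketch combining both ideas (paths and schedule are made up):

```bash
# m  h  dom mon dow  command
# Nightly copy of important folders to the second disk
30 2  *   *   *     rsync -a --delete /srv/important/ /mnt/backupdisk/important/
# Weekly config commit+push; commit may be a no-op if nothing changed
0  4  *   *   0     cd /srv/configs && git add -A && (git commit -q -m "weekly snapshot" || true) && git push -q
```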
I do the reverse… all configs are Ansible scripts and files, and I just push them to the servers. That way I can spin up a new machine from scratch, completely automated, within minutes… just the time it takes the machine to set itself up.
I use rdiff-backup to back up the volumes directory of my VPS to a local machine via VPN. The containers come from some public registry anyway. I also use Ansible for all the configuration and container settings.
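That pull is essentially one command (hostname and paths are placeholders; this is the classic pre-2.2 rdiff-backup syntax):

```bash
# Pull the volumes directory from the VPS over the VPN tunnel
rdiff-backup root@vps.vpn::/srv/docker/volumes /backups/vps-volumes

# rdiff-backup keeps reverse increments, so old versions can be pruned
rdiff-backup --remove-older-than 4W /backups/vps-volumes
```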
I use Kopia. The CLI is very easy to use, and I have backups scheduled nightly. I back up all external mounts and my entire Portainer directory. It has helped in a pinch to restore busted databases.
I point the Kopia CLI at a WebDAV location I host on my NAS. For off-site backups I sync that Kopia repository to Google Cloud daily.
I'm not sure Google Cloud is the best off-site backup solution, but I did a price comparison when I first selected it, and it was the best capacity for the price I could find at the time.
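Roughly how that fits together (URL, paths, and bucket name are made up; the GCS sync also needs credentials configured, e.g. via a service-account file):

```bash
# One-time setup: point Kopia at the WebDAV repo on the NAS
kopia repository connect webdav --url=https://nas.lan/dav/kopia

# Nightly (cron): snapshot the mounts and the Portainer directory
kopia snapshot create /mnt/external /opt/portainer

# Daily off-site copy of the whole repository to a GCS bucket
kopia repository sync-to gcs --bucket=my-kopia-offsite
```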
Haven't used DigitalOcean, but I run 2 Proxmox servers and 2 NASes, one of each at a different location.
I back up the containers and VMs that run in Proxmox to the NAS via NFS, and then a nightly script copies the backups from there to my remote NAS. It works; haven't lost any data yet. Still thinking about a third backup in another location as well, but money is a thing 🤷
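The nightly script can be as small as one rsync in cron (hosts and paths are assumptions):

```bash
#!/bin/sh
# Mirror the Proxmox dump directory on the local NAS mount
# to the remote NAS over SSH.
rsync -a --delete /mnt/nas/dump/ remote-nas.example:/volume1/proxmox-dump/
```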
Borg Backup, via borgmatic, to two backup targets: one in my home and a Hetzner Storage Box. Amongst other things, I include /var/lib/docker/volumes, covering the mounts that aren't bound to the host filesystem.
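Roughly what borgmatic drives under the hood for the two targets (both repo locations here are placeholders; the Storage Box uses Hetzner's SSH-on-port-23 form):

```bash
# Create the same archive in the local repo and the Storage Box repo
for repo in /mnt/local-backup/borg \
            ssh://u123456@u123456.your-storagebox.de:23/./borg; do
    borg create --stats "$repo::{hostname}-{now}" \
        /var/lib/docker/volumes /etc
done
```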
What retention do you run?
I'm setting up the same system but don't know how far back I need to go. Currently considering 7 daily backups, so I can restore to any point within the week, plus 2-3 monthly backups in case there's an issue I miss for a really long period.
Entirely up to your feelings, I guess - I run 7 dailies and 2 weeklies.
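For reference, that retention maps to a single Borg prune call (assumes BORG_REPO is set; swap in `--keep-monthly 3` for the monthly variant described above):

```bash
# Run after each backup; --list shows what gets kept vs. removed
borg prune --list --keep-daily 7 --keep-weekly 2
```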
I use resticker to add an additional backup service to each compose file, which lets me customize some pre/post-backup actions. Works like a charm 👍
Borg Backup to Hetzner Storage Box.
A few hard drives that are stored offsite and rotate every few weeks.
I just run a pg_dump through kubectl exec and pipe the stdout to a file on my master node. The same script then runs restic to send encrypted backups over to S3. I use the --host flag on the restic command as kind of a hack to get backups per service name, which eliminates the risk of overwriting files or directories that share a name.
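A sketch of that script with placeholder names (namespace, deployment, and bucket are made up; RESTIC_PASSWORD and the AWS credentials come from the environment):

```bash
#!/bin/sh
SERVICE=myapp

# Dump the database out of the pod onto the master node
kubectl exec -n "$SERVICE" deploy/postgres -- \
    pg_dump -U postgres "$SERVICE" > "/backups/$SERVICE.sql"

# --host normally records the machine name; setting it per service
# puts identically named dump files into separate snapshot groups.
restic -r s3:s3.amazonaws.com/my-backup-bucket \
    backup --host "$SERVICE" "/backups/$SERVICE.sql"
```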
I back up all the mounted Docker volumes once every hour (snapshots). Additionally, I create dumps of all databases with https://github.com/tiredofit/docker-db-backup (once every hour or once a day, depending on the database).
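Roughly how that container gets wired up; the environment variable names here are from memory of that project's README, so verify them against the current docs before relying on this:

```bash
# Sidecar that dumps the "app" Postgres database every 60 minutes
# into a host directory (values are illustrative only)
docker run -d --name db-backup \
  -e DB_TYPE=postgres -e DB_HOST=postgres \
  -e DB_NAME=app -e DB_USER=app -e DB_PASS=secret \
  -e DB_DUMP_FREQ=60 \
  -v /srv/backups/db:/backup \
  tiredofit/db-backup
```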
ZFS snapshots.
Duplicati to take live, crash-consistent backups of all my Windows servers and VMs with the Volume Shadow Copy Service (VSS).
When backing up Docker volumes, shouldn't the Docker container be stopped first? I can't see any support for that in the backup tools mentioned.
Yes, the containers do need to be stopped. I actually built a project that does exactly that.
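In its simplest form it's the usual stop -> back up -> start dance; something like this, with the stack name and volume path as placeholders:

```bash
#!/bin/sh
cd /srv/myapp

# Quiesce the stack so files on disk are consistent
docker compose stop

# Archive the named volume while nothing is writing to it
tar czf "/backups/myapp-$(date +%F).tar.gz" \
    -C /var/lib/docker/volumes myapp_data

docker compose start
```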
Thanks, I will look into this.
On Proxmox I use a Hetzner Storage Box as my backup solution.
I have bind mounts to NFS shares that are backed by ZFS pools, with snapshots and sync jobs to another storage device. All containers are ephemeral.
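What those snapshot and sync jobs boil down to, with assumed dataset and host names (@previous stands for whatever the last replicated snapshot was called):

```bash
# Snapshot the dataset behind the NFS share
zfs snapshot tank/nfs@$(date +%F)

# Incremental replication to the other storage device
zfs send -i tank/nfs@previous tank/nfs@$(date +%F) | \
    ssh backup-host zfs receive -F backup/nfs
```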
For databases and data I use restic-compose-backup, because you can use labels in your docker-compose files.
For config files I use a Git repository.