I tried to use it via Tailscale, but it disconnects very easily. Is that to be expected?
But if you're working with Linux you're going to need to know it.
Nope. I've never needed to know it. I only ever used it because I was either curious about how to use it or because it was more convenient than other solutions. But scp is basically just as convenient.
I would generally argue that rsync is not a backup solution. But it is one of the best transfer/archiving solutions.
Yes, it is INCREDIBLY powerful and is often 90% of what people actually want/need. But to be an actual backup solution you still need infrastructure around that. Bare minimum is a crontab. But if you are actually backing something up (not just copying it to a local directory) then you need some logging/retry logic on top of that.
At which point you are building your own borg, as it were. Which, to be clear, is a great thing to do. But... backups are incredibly important and it is very much important to understand what a backup actually needs to be.
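For illustration, a minimal sketch of the kind of wrapper that comment is describing: rsync driven from cron with retry and logging bolted on. Everything here (paths, host, retry count, log location) is an assumption, not anything from the thread:

```
#!/bin/sh
# Hypothetical cron-driven rsync wrapper with retry and logging.
LOG=/var/log/mirror-backup.log

for attempt in 1 2 3; do
    if rsync -a --delete /srv/data/ backuphost:/srv/mirror/ >>"$LOG" 2>&1; then
        echo "$(date -Is) backup ok on attempt $attempt" >>"$LOG"
        exit 0
    fi
    sleep 60  # wait before retrying a failed transfer
done

echo "$(date -Is) backup FAILED after 3 attempts" >>"$LOG"
exit 1
```

Driven by a crontab entry along the lines of `0 2 * * * /usr/local/bin/mirror-backup.sh`. Once you start adding retention and verification on top of this, you really are rebuilding borg.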
I would generally argue that rsync is not a backup solution.
Yeah, if you want to use rsync specifically for backups, you're probably better off using something like rdiff-backup, which makes use of rsync to generate backups and store them efficiently, and driving it from something like backupninja, which will run the task periodically and notify you if it fails.
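For a rough idea of what driving rdiff-backup by hand looks like, a sketch using its classic CLI syntax (the host and paths are made up):

```
# Back up /home to a remote machine over ssh (host::path syntax)
rdiff-backup /home backuphost::/srv/backups/home

# Restore a file as it existed 3 days ago
rdiff-backup -r 3D backuphost::/srv/backups/home/me/notes.txt notes.txt

# Drop reverse increments older than 8 weeks
rdiff-backup --remove-older-than 8W backuphost::/srv/backups/home
```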
- rsync: one-way synchronization
- unison: bidirectional synchronization
- git: synchronization of text files with good interactive merging
- rdiff-backup: rsync-based backups. I used to use this and moved to restic, as the backupninja target for rdiff-backup has kind of fallen into disrepair.
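To make the one-way vs. bidirectional distinction concrete, a quick sketch (the paths are placeholders):

```
# rsync: one-way; makes dst/ an exact mirror of src/
rsync -av --delete src/ dst/

# unison: bidirectional; changes on either side propagate to the other
unison /home/me/docs ssh://nas//srv/docs -batch
```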
That doesn't mean "don't use rsync". I mean, rsync's a fine tool. It's just... not really a backup program on its own.
Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every "snapshot" you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every "file" in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it's not a backup.
(OTOH, rsync is still wonderful for large transfers.)
Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot. So this can be a problem if you store many snapshots of many files.
I think that you may be thinking of rsnapshot, which has that behavior, rather than rdiff-backup; both use rsync.
But I'm not sure why you'd be concerned about this behavior.
Are you worried about inode exhaustion on the destination filesystem?
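If that is the worry, it's easy to check before committing to a snapshot scheme; `df -i` reports inode capacity and usage (the mount point below is just an example):

```
# Show inode capacity/usage on the backup filesystem
df -i /mnt/backup
```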
Huh, I think you're right.
Before discovering ZFS, my previous backup solution was rdiff-backup. I have memories of it being problematic for me, but I may be misremembering why it caused problems.
Having a synced copy elsewhere is not an adequate backup and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.
I've personally used rsync for backups for about... 15 years or so? It's worked out great. An awesome video going over all the basics and what you can do with it.
And I generally enjoy Veronica's presentation. Knowledgeable and simple.
Her https://tinkerbetter.tube/w/ffhBwuXDg7ZuPPFcqR93Bd taught me a new way of looking at data. There were some tricks I haven't done before. She has such good videos.
Veronica is fantastic. Love her video editing, it reminds me more of the early days of YouTube.
It works fine if all you need is transfer; my issue with it is that it's just not efficient. If you want a "time travel" feature, your only option is to duplicate data. Differential backups, compression, and encryption for off-site copies are where other tools shine.
If you want a “time travel” feature, your only option is to duplicate data.
Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.
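A minimal sketch of the hard-link snapshot pattern that --link-dest enables, assuming local paths and a `latest` symlink for bookkeeping (all the names here are made up):

```
# Each run creates a new snapshot dir; unchanged files are hard-linked
# against the previous snapshot, so only changed files consume new space.
SNAPROOT=/mnt/backup
TODAY="$SNAPROOT/$(date +%F)"

rsync -a --delete --link-dest="$SNAPROOT/latest" /srv/data/ "$TODAY"/
ln -sfn "$TODAY" "$SNAPROOT/latest"  # point "latest" at the new snapshot
```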
Isn't that creating hardlinks between source and dest? Hard links only work on the same drive. And I'm not sure how that gives you "time travel", as in, browsing snapshots or file states at the different times you ran rsync.
Edit: ah the hard link is between dest and the link-dest argument, makes more sense.
I wouldn't put filesystem and backup compression in the same bucket, because they have vastly different requirements. Backup compression doesn't need to be optimized for fast decompression.
Snapper and btrfs. It only records changes to data, so time travel is just pointing at which blocks changed and when, rather than building a duplicate of the entire file or filesystem. A snapshot is instant, and new block changes belong to the current default subvolume.
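For flavor, this is roughly what that looks like in practice; the subvolume layout and the snapper config name are assumptions:

```
# Plain btrfs: instant, read-only, copy-on-write snapshot of a subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/$(date +%F)

# Or via snapper, assuming a configured "home" config
snapper -c home create --description "before upgrade"
snapper -c home list
```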
Yeah, it's slow.
What's slow about rsync? If you have a reasonably fast CPU and are merely syncing differences, it's pretty quick.
rsnapshot is a script for repeatedly creating deduplicated copies (hardlinks) of one or more directories. You can choose how many hourly, daily, weekly, ... copies you'd like to keep, and it removes outdated copies automatically. It wraps rsync and ssh (public key auth), which need to be configured beforehand.
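A hypothetical excerpt of what that configuration looks like; the paths and retention counts are made up, and note that rsnapshot.conf fields must be separated by TABs, not spaces:

```
# /etc/rsnapshot.conf (excerpt) -- fields must be TAB-separated
snapshot_root	/mnt/backup/snapshots/
retain	hourly	6
retain	daily	7
retain	weekly	4
backup	root@server:/etc/	server/
```

Cron then invokes `rsnapshot hourly`, `rsnapshot daily`, and so on, on a schedule matching the retain lines.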
Hardlinks need to be on the same filesystem, don't they? I don't see how that would work with a remote backup...?
I've been using borg because of the backend encryption and because the deduplication and snapshot features are really nice. It could be interesting to have cross-archive deduplication but maybe I can get something like that by reorganizing my backups. I do use rsync for mirroring and organizing downloads, but not really for backups. It's a synchronization program as the name implies, not really intended for backups.
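For anyone curious, the borg workflow being described is roughly the following; the repo path, compression choice, and retention numbers are assumptions:

```
# One-time: create an encrypted repository
borg init --encryption=repokey /mnt/backup/borg

# Each run: deduplicated, compressed, encrypted archive of /home
borg create --stats --compression zstd /mnt/backup/borg::home-{now} /home

# Thin out old archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg
```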
I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we're already talking about rsync, I guess I may as well ask if this is the right way to go?
I couldn't tell you if it's the right way, but I used it on my RPi 4 to sync 4 TB of stuff from my Plex drive to a backup, and set up a script to have it check/mirror daily. It took a day and a half to copy, and now it syncs in minutes tops when there's new data.
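For a bulk NAS-to-NAS move, something along these lines is a common starting point; the flags and paths here are a suggestion, not gospel (-H, -A, and -X only matter if you have hard links, ACLs, or xattrs worth preserving, and --info=progress2 needs rsync 3.1 or newer):

```
# Resumable bulk copy with an overall progress readout
rsync -aHAX --partial --info=progress2 /mnt/oldnas/data/ admin@newnas:/mnt/pool/data/
```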
The thing I hate most about rsync is that I always fumble to get the right syntax and flags.
This is a problem because once it's working I never have to touch it ever again, because it just works and keeps working. There's not enough time to memorize the usage.
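For reference, the handful of flags that cover most uses, with a dry run as the safety net when the syntax has gone fuzzy again (paths are placeholders):

```
# Preview what would change before committing
rsync -avz --delete --dry-run src/ host:dst/
# -a        archive mode: recursive, preserves perms/times/symlinks
# -v / -z   verbose / compress in transit
# --delete  remove files from dst that no longer exist in src
# trailing "/" on src/ means "the contents of src", not the dir itself
```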
Tangentially, I don’t see people talk about rclone a lot, which is like rsync for cloud storage.
It’s awesome for moving things from one provider to another, for example.
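A taste of what that looks like; the remote names are whatever you defined in `rclone config`, so `dropbox:` and `gdrive:` below are just examples:

```
# Define remotes interactively once
rclone config

# Then mirror between providers; --dry-run first is a good habit
rclone sync dropbox:photos gdrive:photos --progress --dry-run
```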
Use borg/borgmatic for your backups. Use rsync to send your differentials to your secondary & offsite backup storage.
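Concretely, that split might look like this; the repo path and the offsite remote are assumptions:

```
# Local borg run (borgmatic reads its own config for sources/retention)
borgmatic --verbosity 1

# Then ship the repo itself offsite; rsync only moves changed repo segments
rsync -a --delete /mnt/backup/borg/ offsite:/srv/backups/borg/
```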