What's your backup strategy?

kat@feddit.nl to Selfhosted@lemmy.world – 84 points –

I see many posts asking about what other lemmings are hosting, but I'm curious about your backups.

I'm using duplicity myself, but I'm considering switching to borgbackup when 2.0 is stable. I've had some problems with duplicity: mainly, the initial sync took incredibly long, and once a few directories got corrupted (they could no longer be decrypted by gpg).

I run a daily incremental backup and send the encrypted diffs to a cloud storage box. I also use SyncThing to share some files between my phone and other devices, so those get picked up by duplicity on those devices.

I use borgbackup + zabbix for monitoring.

At home, I have all my files get backed up to rsync.net since the price is lower for borg repos.

At work, I have a dedicated backup server running borgbackup that pulls backups from my servers and stores them locally as well as uploading them to rsync.net. The local backup means restoring is faster, unless of course that dies.

+1 for Borg! I use Borgmatic to backup files and databases to BorgBase. It costs me $80/yr for 1TB of backups which I think is sensible. I also selfhost an instance of Healthchecks.io for monitoring.

Moved from borg to restic; hourly backups of a 6TB mailserver are just fine.

Restic using resticprofile for scheduling and configuring it. I do frequent backups to my NAS and have a second schedule that pushes to Backblaze B2.

Irreplaceable media: NAS -> Backblaze, and NAS -> JBOD via duplicacy for versioning.

Large ISOs that can be downloaded again: NAS -> JBOD and/or NAS -> offline disks.

Stuff that's critical leaves the house, stuff that would just cost me a hell of a lot of personal time to rebuild just gets a copy or two.

I run a restic backup to a local backup server that syncs most of the data (except the movie collection because it's too big). I also keep compressed config/db backups on the live server.

I eventually want to add a cloud platform to the mix, but for now this setup works fine

Restic is great! I run it in a container using mazzolino/restic image hooked up to Backblaze for all my important stuff!

Can anyone ELI5 or link a decent reference? I'm pretty new to self hosting and now that I've finally got most of my services running the way I want, I live in constant fear of my system crashing

I have an external hard drive that I keep in the car. I bring it in once a month and sync it with the server. The data partition is encrypted so that even if it were to get stolen, the data itself is safe.

I have a similar 3-2-1 strategy without using someone else's server or needing to traverse the internet. I keep my drive in the pool shed, since if my house were to blow up or get robbed, the shed would probably be fine.

I have a shed I built a year or two ago, but it's about 100 feet from the house with no electricity to it. I've considered running power and ethernet to it and connecting those drives to a Raspberry Pi. That way I could rsync my backups over SSH to an "off-site" (i.e., not in the same building) backup on a more regular basis, and also not have to worry about the potential damage from hauling them around in a car all the time.
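
For what it's worth, a minimal sketch of what that nightly push could look like, assuming the Pi is reachable as "shed-pi" and keeps its drive mounted under /mnt/usb (both names made up):

    #!/bin/sh
    # Hypothetical sketch: push local backups to a Raspberry Pi in the shed over SSH.
    # "shed-pi" and both paths are placeholders; run from cron, e.g. once per night.
    rsync -a --delete -e ssh /srv/backups/ pi@shed-pi:/mnt/usb/backups/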

Am I the only one using kopia :)?

I'm quite new to self-hosting and backups. I went with Duplicati and it was perfect, but I heard bad stories, so now I use Kopia for daily backups to another drive and also to B2. Duplicati is still doing daily backups, but only of a few important folders to Google Drive.

I've heard only good stories about Kopia, yet nobody mentioned it.

Large/important volumes on SAN -> B2. Desktop Macs -> Time Machine on SAN & Backblaze (for a few).

Borgbackup is great and what we used for all our servers when they were pets. It's a great tool, very easy to script and use.

I realized at one point that the amount of data that is truly irreplaceable to me amounts to only ~500GB. So I back up this important data to my NAS, then from there back up to Backblaze. I also create M-Discs: two sets, one for home and one I keep at a friend's place. Then, because "why not" and I already had them sitting around, I also back up to two SD cards and keep them on site and off site.

I also back up my other data like TV/movies/music/etc., but the sheer volume of data leaves me one option: a couple of USB hard drives I back up to from my NAS.

It's still a WIP, but that's pretty much where I'm at as well. I was going crazy trying to figure out which multi-terabyte service to use, when in reality the actually irreplaceable stuff falls well under a single TB of data lol. Might go with Backblaze as well.

I have a raspberry pi with an external drive with scripts to rsync each morning. Then I have S3 deep glacier backups for off site.
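
The commenter didn't say which tool handles the Glacier leg, but as a hedged sketch with the AWS CLI (bucket name and path made up), the Deep Archive storage class can be set right on the sync:

    # Hypothetical sketch; bucket name and mount point are placeholders.
    # DEEP_ARCHIVE is the cheapest (and slowest-to-restore) Glacier tier.
    aws s3 sync /mnt/backup s3://example-offsite-backups/ --storage-class DEEP_ARCHIVE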

All devices back up to my NAS either in real time or at short intervals throughout the day. I use recycle bins for easy restores of accidentally deleted files.

My NAS is set up on a RAID for drive redundancy (Synology RAID) and does regular backups to the cloud for active files.

Once a day I do a hyperbackup to an external HDD.

Once a month I backup to an external drive that lives offsite.

Backups to these external HDDs have versioning, so I can restore files from multiple months ago, if needed.

The biggest challenge is that as my NAS grows, it costs significantly more to expand my backups space. Cloud storage and new external drives aren't cheap. If I had an easy way to keep a separate NAS offsite, that would considerably reduce ongoing costs.

Depending on how much storage you need (>30 TB?), it may be cheaper to use a colocation service for a server as an offsite backup instead of cloud storage. It's not as safe, but it can be quite a bit cheaper, especially if for some reason you're forced to rapidly download a lot of your data from the cloud backup (Backblaze B2 charges $0.01/GB downloaded).

Do you have an example or website I could look at for this 'colocation service'?

Currently using idrive as the cloud provider, which is free until the end of the year, but I'm not locked into their service. Cloud backups really only see more active files (<7TB), and the unchanging stuff like my movie or music catalogue seems reasonably safe on offsite HDD backups, so I don't have to pay just to keep those somewhere else.

First I'd like to apologize because I originally wrote less than 30TB instead of more than 30TB, I've changed that in the post.

A colocation facility is a data center where you pay a monthly price and they'll house your server (electricity and internet bandwidth are usually included, though with certain limits; if you need more you can always pay extra).

Here's an example. It's usually around $99/99€ per 1U server. If you live in or near a big city, there's probably at least one data center that offers colocation services.

But as I said, it's only worth it if you need a lot of storage or if you move files around a lot, because bandwidth charges when using object storage tend to be quite high.

For <7 TB it isn't worth it, but maybe in the future.

Thanks for the info. Something to consider as my needs grow 👍

I usually write my own scripts with rsync for backups since I already have my OS installs pretty much automated also with scripts.

Personal files: Syncthing between all devices and a TrueNAS Scale NAS. TrueNAS does snapshots 4 times a day, with a retention policy of 30 days. From there, a nightly sync to Backblaze B2 happens, also with a 30 day retention policy. Occasional manual backups to external drives too.

Homelab/Servers: Proxmox VM and LXC container exports nightly to TrueNAS, with a retention policy of 7 days. A separate weekly export goes to a separate TrueNAS share, which gets synced to B2 weekly with a retention policy of 30 days. Also has occasional external drive backups.

Main NAS backs up to a secondary NAS (onsite, 10G link). The secondary NAS backs up to offsite (a Hetzner server) weekly. Only important data, not Linux ISOs etc.

Do you encrypt your Backups on the Hetzner server?

Yup, both NASes are running TrueNAS Scale, which has a setting for encryption on Cloud Sync Tasks (it encrypts both filenames and file contents).

Is the encryption a built-in thing with TrueNAS Scale?

I use a BackupPC instance hosted on an off-site server with a 1TB drive. It connects through SSH to all my VMs and backs up /home and any other folders I may need. It handles full and incremental backups, deduplication, and compression.

I use...

  • Timeshift -> local backup onto my RAID array
  • borgbackup -> BorgBase online backup
  • GlusterFS -> experimenting with replicating certain apps across 2 Raspberry Pis

Daily offsite backups to a backup server via restic (plus a self-written wrapper for multiple targets). Restic can also work with anything else (sftp, S3 APIs, etc.). Kind of a modern duplicity/borg. Fully encrypted and incremental.

I back up locally to my NAS with Synology's Drive software, and the NAS does a 10-day rolling snapshot of the backup folder. At first I also had Hyper Backup set up to do a versioned backup from the NAS to a cloud provider.

But I got scared of the thought that a corruption would propagate through the whole backup chain. So now I do an additional backup for the most important stuff directly from my PC with restic + resticprofile to a Hetzner storage box. I know they do not give any promises about data reliability, but I think chances of the local and remote backup breaking at the same time are pretty slim.

Restic is sending a fail/done ping to an uptime-kuma instance I host myself to monitor the backup which then notifies me with ntfy if backups fail or are missed for a couple of days.

3-2-1 strategy: 3 copies of everything important, 2 on-site, 1 in the cloud. I have a TrueNAS Scale NAS running RAID5 on ZFS. All the laptops, desktops, etc. back up to the NAS (mostly Macs, so we use Time Machine over the network). So the original laptop/desktop is 1 copy, the NAS is a second copy on-site, and then TrueNAS has lots of cloud options. I use Amazon S3 myself, but there are lots of choices.

Prior to this I had a Synology NAS. It was "small" (6TB), so it had a RAID mirror of 6TB drives and a single 6TB external USB drive holding a backup of the mirrored pair (second copy on-site). Then I also used Synology's software to back up to S3.

For my Internet-facing VMs, they all run in xcp-ng and I use Xen Orchestra to manage them. I run regular snapshots nightly, and then use NFS to copy them to a cloud server. That's sloppy, and sometimes doesn't work. So the in-the-house stuff is backed up well. The VMs are mostly relying on Xen snapshots and RAID 5.

Back up everything locally in Proxmox on separate storage, another copy to a local NAS, and a third one to Backblaze's cloud storage.

I use RSnapshot to make incremental backups to an external hard drive, and (I know it's not a backup) run my two RAIDs (one for media, one for general data) in mirrored mode.

When I eventually upgrade my home server, I will go from 2x 2TB drives in RAID1 to four 8TB drives in either RAID5 or 6 - I am still undecided whether I am willing to sacrifice 4TB of capacity to the redundancy gods in return for an extra drive that can fail without data loss.

Personally I do:

  • Daily snapshots of my data + Daily restic backup on-site on a different machine
  • Daily VM/containers snapshot locally and on a different machine, keeping at least 2 monthly, 2 weekly and 2 daily backups
  • Weekly incremental data backup to an immutable B2 bucket, with a new bucket every month and 6-month immutability (so data can't be changed/erased for 6 months)
  • Weekly incremental data backup on an other off-site machine
  • Monthly (but I should start doing it weekly) backup of important data (mainly documents and photos) to removable media that I keep offline in a fire-proof safe

Maybe it's overkill, maybe it's not enough; I'll know when something fails and I'm screwed, ahah

As a note, everybody should test/check their backups frequently. I once had an issue after changing an IP address and figured out half my backups were not working 6 months later...

How do you approach testing your backups? It seems like you shouldn't just restore it to the various applications because if it fails then you're screwed. But it also seems like a huge pain to create duplicate instances of every application to test the backup.

I do restore my VMs to duplicate VMs for testing from time to time (it's pretty easy with Proxmox), but I use Restic for data backups, which encrypts the data before uploading it, so you should restore a backup to a different folder to verify the data integrity and that you didn't forget your keys, ahah

You don't have to do it every week or month, but it's worth doing it a few times a year or when you change something!
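
As a sketch of a low-effort restic spot check (repo URL and paths are made up here): restore the latest snapshot into a scratch folder, and let restic re-read part of the stored data as well.

    # Restore the latest snapshot somewhere harmless and eyeball a few files.
    restic -r sftp:backup@example-host:/srv/restic-repo restore latest --target /tmp/restore-test

    # Verify repository structure and re-read a subset of the actual pack data.
    restic -r sftp:backup@example-host:/srv/restic-repo check --read-data-subset=1/10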

I back up my home folder to an encrypted drive once a week using rsync, then I create a tarball, encrypt it, and upload it to protondrive just in case.
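
A hedged sketch of the tarball-and-encrypt step, assuming symmetric gpg encryption and made-up paths (the poster didn't say which gpg mode they use):

    # Create an encrypted tarball of the home folder; gpg prompts for a passphrase.
    tar czf - "$HOME" | gpg --symmetric --cipher-algo AES256 \
        -o "/mnt/enc-backup/home-$(date +%F).tar.gz.gpg"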

My critical files and folders are synced from my NAS to my desktop using Syncthing. From there I use Backblaze to do a full desktop backup nightly.

My NAS is in RAID 5, but that's technically not a backup.

All Nextcloud data gets mirrored with rsync to a second drive, so it's in three places: the original source and twice on the server.

Databases are backed up nightly by Webmin to the second drive.

Then installations, databases, etc. are sent to Backblaze storage with Duplicati.

For PCs: daily incremental backups to local storage, daily syncs to my main unRAID server, and weekly off-site copies to a Raspberry Pi with a large external HDD running at a family member's place. The unRAID server itself has its config backed up to the unRAID servers, and all the local Docker stores also go to the off-site Pi. The most important stuff (pictures, recovery phrases, etc.) is further backed up in Google Drive.

I use Borgbackup 1.2.x. It works really well. Significantly faster than Duplicity. Borg uses block-level deduplication instead of doing incremental backups, meaning the backup won't grow indefinitely like with duplicity (this is why you have to periodically do a full backup with Duplicity). The Borg server has an "append-only" mode meaning the client can only add data to the backup and not remove it - this is useful because if an attacker were to gain access to the client, they can't delete all your backups. This is a common issue with other backup systems - the client has full access to the backup, so there's nothing stopping an attacker from erasing the client system plus all its backups.
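
For anyone curious, append-only mode is typically enforced on the server side with an SSH forced command, roughly like this (repo path and key are placeholders):

    # ~/.ssh/authorized_keys on the backup server (all on one line):
    command="borg serve --append-only --restrict-to-path /srv/borg/myhost",restrict ssh-ed25519 AAAA... client-backup-key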

For storing the backups, I have two storage VPSes - One with HostHatch in Los Angeles ($10/month for 10TB space) and one with Servarica in Montreal Canada (3.5GB space for $84/year).

Each system being backed up performs the backup twice - Once to each VPS. Borgbackup recommends this approach over only performing one backup then rsyncing it to a different server. The idea is that if one backup gets corrupted (or deleted by an attacker, etc), the other one should still be OK as it's entirely separate.
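
A sketch of what "back up twice" can look like in practice, with both repo URLs made up:

    #!/bin/sh
    # Hypothetical sketch: run the same borg backup against two independent repos.
    for repo in \
        ssh://borg@host-la.example/./backups/myhost \
        ssh://borg@host-mtl.example/./backups/myhost
    do
        borg create --stats --compression zstd "$repo::{hostname}-{now}" /etc /home /srv
    done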

I'm paying Google for their enterprise gSuite which is still "unlimited", and using rclone's encrypted drive target to back up everything. Have a couple of scripts that make tarballs of each service's files, and do a full backup daily.

It's probably excessive, but nobody was ever mad about the fact they had too many backups if they needed them, so whatever.

Holy crap. Duplicity is what I've been missing my entire life. Thank you for this.

In short: crontab, rsync, a local and a remote raspberry pi and cryptfs on usb-sticks.

For my server I use duplicity, with a daily incremental backup, sending the encrypted diffs away. I researched a few more options some time ago but nothing really fit my use case, and I'm also not super happy with duplicity. Thanks for suggesting borgbackup.

For my personal data I have a Nextcloud on an RPi4 at my parents' place, which also syncs with a laptop that I've left there. For offline and off-site storage, I use the good old strategy of bringing over an external hard drive, rsyncing it, and bringing it back.

No problem! I also see Restic a lot in this thread, so I'll probably try both at some point

I feel the exact same. I've been using Duplicacy for a couple of years; it works, but I don't totally love it.

When I researched Borg, Restic, others, there were issues holding me back for each. Many are CLI-driven, which I don't mind for most tools. But when shit hits the fan and I need to restore, I really want to have a UI to make it simple (and easily browse file directories).

Got a Veeam community instance running on each of my VMware nodes, backing up 9-10 VMs each.

Using Cloudberry for my desktop, laptop and a couple Windows VMs.

Borg for non-VMware Linux servers/VMs, including my WSL instances, game/AI baremetal rig, and some Proxmox VMs I've got hosted with a friend.

Each backup agent dumps its backups into a share on my NAS, which then has a cron task to do weekly uploads to GDrive. I also manually do a monthly copy to an HDD and store it off-site with a friend.

Rsync script that does deltas per day using hardlinks. Found on the Arch wiki. Works like a charm.
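
For anyone who hasn't seen the trick: the idea (as in the Arch wiki examples) is rsync's --link-dest, which hardlinks unchanged files against the previous snapshot so each daily folder looks complete but only changed files take space. A minimal sketch with made-up paths:

    #!/bin/sh
    # Hypothetical sketch of hardlink-based daily snapshots.
    SRC=/home/
    DEST=/mnt/backup
    TODAY=$(date +%F)
    # Unchanged files are hardlinked from the previous snapshot via "latest".
    rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"
    # Point "latest" at the snapshot we just made.
    ln -sfn "$DEST/$TODAY" "$DEST/latest"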

I don't back up my personal files since they are all more or less contained in Proton Drive. I do run a handful of small databases, which I back up to ... Telegram.

Ah, yes, the ole' "backup a database to telegram" trick. Who hasn't used that one?!?

I use syncthing to sync files between phone, pc and server.

The server runs Proxmox, with Proxmox Backup Server in a VM. A Raspberry Pi pulls the backups to a USB SSD and also rclones them to Backblaze.

Syncthing is nice. I don't back up my PC, as that's handled by the server. Reinstalling the PC requires almost no preparation; just set up Syncthing again.

I use rclone to encrypt and send my most valuable data to OneDrive.
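
A sketch of what that looks like in rclone's config, with made-up remote names (the crypt remote wraps the OneDrive remote, and rclone stores the passwords obscured rather than in plain text):

    # Hypothetical ~/.config/rclone/rclone.conf excerpt (normally generated by `rclone config`).
    [onedrive]
    type = onedrive

    [onedrive-crypt]
    type = crypt
    remote = onedrive:backups
    filename_encryption = standard
    password = <obscured by rclone config>

    # Then sync through the crypt remote, e.g.:
    #   rclone sync /srv/important onedrive-crypt: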

In the process of moving stuff over to Backblaze. Home PCs, a few clients' PCs, and client websites are all pointing at it now; happy with the service and price. Two unRAID instances push the most important data to an Azure storage account, but I imagine I'll move that to BB soon as well.

Docker backups are similar to the post above: tarball the whole thing weekly as a get-out-of-jail card. This is not ideal, but it works for now until I can give it some more attention.

*I have no link to BB other than being a customer who wanted to reduce reliance on scripts and move stuff out of Azure for cost reasons.

Would I be correct to assume you are using Backblaze PC backup rather than B2?

Yes, for now. I'll be spinning up some B2 this week however.

restic + rclone crypt + whatever storage server/service is good enough. Currently using Hetzner storage for my backups, because they have automatic snapshots on top of my backups.

I also use this setup for backups on servers, not only at home.

On my home network, devices are backed up using Time Machine over the network. I also use Backblaze to make a second backup of data to their cloud service, using my own private key. Lastly, I throw some backups on a USB drive that I keep in a fire safe.

I usually just use Restic (not just for servers); for big databases I pipe pg_dump directly into it, and for even bigger ones I recently moved to pgBackRest.
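
A sketch of the pg_dump-into-restic pipe, with the repo URL and database name made up:

    # Hypothetical sketch: stream a custom-format dump straight into the restic repo.
    pg_dump -Fc mydb | restic -r sftp:backup@example-host:/srv/restic \
        backup --stdin --stdin-filename mydb.dump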

I ping a selfhosted Healthchecks instance to see if my backups still run (or the other way around).

On my main desktop (which recently became a Mac, I am sorry) I currently use Autorestic for multiple locations. It's nice to have that YAML, but, well, I'm used to bash scripts anyway, so it's not that big of a benefit I guess.

I use backupninja for the scheduling and management of all the processes. The actual backups are done by rsync, rdiff, borg, and the b2 tool from backblaze depending on the type and destination of the data. I back up everything to a second internal drive, an external drive, and a backblaze bucket for the most critical stuff. Backupninja manages multiple snapshots within the borg repository, and rdiff lets me only copy new data for the large directories.

All of my servers have shell scripts that rsync important stuff to a subdirectory. Other scripts run database dumps a couple of times a day.

My primary server at home then rsyncs my servers' backup subdirectories to its own, broken out by FQDN.

Leandra then uses Restic to back everything up (herself as well as the other servers' backups) to Backblaze B2 on a two year cycle.

For smaller backups (<10GB each) I run a 3-phase approach:

  • rsync to a local folder /srv/backup/<service>
  • rsync that to a remote nas
  • rclone that to a b2 bucket

These scripts run from cron, and I log the output to a file using the --log-file option for rsync/rclone so I can do spot checks of the results; a sketch of the three phases is below.
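
All the names here are made up (service path, NAS hostname, bucket), and each line would typically be its own cron entry at a different time of night:

    # Phase 1: rsync to a local folder.
    rsync -a --log-file=/var/log/backup/app-local.log /opt/app/data/ /srv/backup/app/
    # Phase 2: rsync that to the remote NAS.
    rsync -a --log-file=/var/log/backup/app-nas.log /srv/backup/app/ nas:/volume1/backup/app/
    # Phase 3: rclone that to a B2 bucket.
    rclone sync --log-file=/var/log/backup/app-b2.log /srv/backup/app/ b2:example-bucket/app/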

This way I have access to the data locally if the network is down, remotely on a different networked machine for any other device that can browse it, and finally an offsite cloud backup.

Doing this setup manually through rsync/rclone has been important for building the domain knowledge to think about the overall process: scheduling multiple backups at different times overnight so as not to overload the drive and network, ensuring versioning is kept for files that might require it, and ensuring I am not using too many API calls for B2.

For large media backups (>200GB) I only use the rclone script and set it to run for 3 hours every night after all the more important backups are finished. It's not important that it gets done ASAP; a steady drip of any changes up to B2 matters more.

My next step is maybe to figure out a process to email the backup logs every so often, or to look into a full application to take over with better error-catching capabilities.

For any service/process that has a backup this way, I try to document a spot-testing process to confirm it works every 6 months:

  • For my important documents I will add an entry to my keepass db, run the backup, navigate to the cloud service and download the new version of the db and confirm the recently added entry is present.
  • For an application I will run through a restore process and confirm that certain config or data is present in the newly deployed app. This also forces me to have a fast restore script I can follow for any app if I need to do this every 6 months.

My important data is backed up via Synology DSM Hyper backup to:

  • Local external HDD attached via USB.
  • Remote to Backblaze (costs about $1/month for ~100GB of data)

I also have Proxmox Backup Server back up all the VMs/CTs every few hours to the same external HDD used above; however, these backups aren't crucial, they would just be helpful for rebuilding if something went down.

I have a central NAS server that hosts all my personal files and shares them (via SMB, SSH, Syncthing, and Jellyfin). It also pulls backups from all my local servers and cloud services (Google Drive, OneDrive, Dropbox, Evernote, mail, calendar and contacts, etc.). It runs ZFS RAID 1 and snapshots every 15 minutes. Every night it backs up important files to Backblaze in a US region and Azure in an EU region (using restic).

I have a bootstrap procedure in place to do a "clean room recovery" assuming I've lost access to all my devices - I only need to remember a tediously long encryption password for a small package containing everything needed to recover from scratch. It is tested every year during the Christmas holidays, including comparing every single backed-up and restored file with the original via md5/sha256 comparison.
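
A sketch of the comparison step, assuming the originals and a test restore sit in two made-up directories:

    # Hypothetical sketch: checksum every file on both sides and diff the lists.
    ( cd /srv/original && find . -type f -exec sha256sum {} + | sort -k2 ) > /tmp/original.sha256
    ( cd /mnt/restore-test && find . -type f -exec sha256sum {} + | sort -k2 ) > /tmp/restored.sha256
    diff /tmp/original.sha256 /tmp/restored.sha256 && echo "restore matches original"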

I'm moving from rsync+duplicity+borg towards bupstash

I back up everything to my home server... then I run out of money and cross my fingers that it doesn't fail.

Honestly though my important data is backed up on a couple of places, including a cloud service. 90% of my data is replaceable, so the 10% is easy to keep safe.

I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it's disposable (I don't back up anything aside from data to do with the services themselves; the host could be launched into the sun without any backups and it wouldn't matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days' worth of backups.
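
A sketch of that nightly job, assuming compose stacks live under /srv/stacks and with made-up remote/bucket names (the poster didn't say which tool does the uploads):

    #!/bin/sh
    # Hypothetical sketch: stop stacks, archive their data as-is, restart, upload.
    set -e
    cd /srv/stacks
    for stack in */; do
        docker compose -f "$stack/docker-compose.yml" down
    done
    tar czf "/srv/archives/stacks-$(date +%F).tar.gz" -C /srv stacks
    for stack in */; do
        docker compose -f "$stack/docker-compose.yml" up -d
    done
    # Upload to one provider here; repeat for the Wasabi / AWS S3 remotes.
    rclone copy /srv/archives/ b2:example-backups/stacks/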