How do you guys back up your server?

juliette@pawb.social to Selfhosted@lemmy.world – 131 points –

I have a home server that I’m using to host files. I’m worried about it breaking and losing access to the files. So what method do you use to back up everything?

Backblaze on a B2 account. $0.005 per GB. You pay for the storage you use, and you pay when you need to download your backup.

On my TrueNAS server, it's easy as pie to set up and easy as 🥧 to restore a backup when needed.

I also recommend B2, it’s an S3 compatible service so any backup software/scripts/plugins that work with S3 should work with Backblaze.
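
For example, restic (one of many S3-capable tools) can point straight at B2's S3 endpoint. A minimal sketch, with the bucket name and region endpoint as placeholders:

  export AWS_ACCESS_KEY_ID=your-b2-keyID             # B2 application key ID
  export AWS_SECRET_ACCESS_KEY=your-b2-applicationKey
  restic -r s3:https://s3.us-west-004.backblazeb2.com/my-backup-bucket init
  restic -r s3:https://s3.us-west-004.backblazeb2.com/my-backup-bucket backup /srv/data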

B2 is awesome. I have Duplicati set up on OpenMediaVault to back up my OS nightly to B2 (as well as a local copy to the HDD).

On hope

This guy is rawdogging his RPi, just like me

Me too! The actual servers are docker-compose files, which are in git, but the data…yeah that’s on hope hahaha

You guys back up your server?

If your data is replaceable, there’s not much point unless it’s a long wait or high cost to get it back. It’s why I don’t have many backups.

In the 20 years that I've been running a home server I've never had anything more than a failed disk in the array which didn't cause any data loss.

I do have backups since it's a good practice and also because it familiarizes me with the software and processes as they change and update so my skillset is always fresh for work purposes.

ITT: lots of the usual paranoid overkill. If you do rsync with the --backup switch to a remote box or a VPS, that will cover all bases in the real world. The probability of losing anything is close to 0.

The more serious risk is discovering that something broke 3 weeks ago and the backups were not happening. So you need to make sure you are getting some kind of notification when the script completes successfully.
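
A minimal sketch of that kind of nightly job (host, paths, and the ping URL are placeholders; a healthchecks.io-style ping is just one way to get the "it ran" signal, an email from the script works too):

  #!/bin/sh
  # --backup keeps files that were overwritten or deleted in a dated directory
  # on the remote instead of silently discarding them
  rsync -a --delete --backup --backup-dir=/backups/data-changed/$(date +%F) \
      /srv/data/ backup@vps.example.com:/backups/data/ \
    && curl -fsS https://hc-ping.com/your-check-uuid > /dev/null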

While I don't agree that using something like restic is overkill, you are very right that backup process monitoring is very overlooked. So is practicing recovery with the backup system of your choice.

I let my Jenkins run the backup jobs, as I have it running anyway for development tasks. When a job fails it notifies me immediately via email, and I can also manually check in the web UI how the backup went.

3-2-1

Three copies. The data on your server.

  1. Buy a giant external drive and back up to that.

  2. Off site. Backblaze is very nice

How to get your data around? FreeFileSync is nice.

Veeam community version may help you too

I'm not sure how you understand the 3-2-1 rule given how you explained it, even though you're stating the right stuff (I'm confused by your numbered list...), so just for reference for people reading this, it means your backups need to exist as:

  • 3 copies of your data
  • on 2 different storage media
  • with 1 copy offsite

Huh. I always heard 3 copies, 2 locations, 1 of the locations offsite. Yours makes sense though.

cronjobs with rsync to a Synology NAS and then to Synology's cloud backup.

Autorestic, nice wrapper for restic.

Data goes from one server to the second server, and vice versa (different provider, different geolocation). And to Backblaze B2, which as far as I know is the cheapest S3-like storage.

Wasabi might also be worth mentioning, a while back I compared S3-compatible storage providers and found them to be cheaper for volumes >1TB. They now seem to be slightly more expensive ($5.99 vs. $5), but they don't charge for download traffic.

My server runs Plex and has almost 50 TB of video on it. After looking at all the commercial backup options I gave up on backing up that part of the data. :-(

I do backup my personal data, which is less than a terabyte at this point. I worked out an arrangement with a friend who also runs a server. We each have a drive in the other's server that we use for backup. Every night cron runs a simple rsync script to do an incremental backup of everything new to the other machine.

This approach cost nothing beyond getting the drives. And we will still have our data even if one of the servers is physically destroyed and unrecoverable.

I also have a decent amount of video data for Plex (not nearly 50 TB, but more than I want to pay to back up). I figure if worst comes to worst I can rip DVD/BluRays again (though I’d rather not) so I only backup file storage from my NAS that my laptops and desktop backup to. It’s just not worth the cost to backup data that’s fairly easy to replace.

Yeah, that was where I finally came out too. I still own the discs. My only worry is that some of my collection is beginning to age. I've had a few DVDs that were no longer readable.

Oh, that thing with the friend's server is a good idea. Mutual benefit at little extra cost

It's the only "no cost" option I know of that provides an off-site backup. And once it occurred to me, it was really easy to set up.

I am lucky enough to have a second physical location to store a second computer, with effectively free internet access (as long as the data volume is low, under about 1 TB/month).

I use the ZFS file system for my storage pool, so backups are as easy as a few commands in a script triggered every few hours, that takes a ZFS snapshot and tosses it to my second computer via SSH.
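
Roughly along these lines, as a sketch; dataset, host, and state-file names are placeholders:

  #!/bin/sh
  POOL=tank/data
  DEST=backup@second-box
  NOW="auto-$(date +%Y%m%d-%H%M)"
  PREV=$(cat /var/lib/last-sent-snap 2>/dev/null)

  zfs snapshot "$POOL@$NOW"
  if [ -n "$PREV" ]; then
      # incremental stream since the last snapshot that was sent
      zfs send -i "@$PREV" "$POOL@$NOW" | ssh "$DEST" zfs receive -F "$POOL"
  else
      # first run: full stream
      zfs send "$POOL@$NOW" | ssh "$DEST" zfs receive -F "$POOL"
  fi
  echo "$NOW" > /var/lib/last-sent-snap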

I have everything in its own VM, and Proxmox has a pretty awesome built-in backup feature. Three different backups (one night is to my NAS, next night to an on-site external, next night to an external that's swapped out with one at work - weekly). I don't back up the Proxmox host because reinstalling it should it die completely is not a big deal. The VMs are the important part.
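
Under the hood those backup jobs run vzdump, so a one-off by hand is roughly this (VM ID and storage name are placeholders):

  vzdump 101 --storage backup-nas --mode snapshot --compress zstd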

I have a mini PC I use to spot check VM backups once a month (full restore on its own network, check it's working, delete the VM after).

My Plex NAS only backs up the movies I really care about (everything else I can "re-rip from my DVD collection").

So what method do you use to back up everything?

Depends on what OS that server is running. Windows, Unraid, Linux, NAS (like Synology or QNAP), etc.

There are a bazillion different ways to back up your data but it almost always starts with "how is your data being hosted/served?"

Restic to multiple repositories, local and remote.

Various different ways for various different types of files.

Anything important is shared between my desktop PCs, servers and my phone through Syncthing. Those Syncthing folders are all also shared with two separate servers (in two separate locations) with hourly, daily, weekly, monthly volume snapshotting. Think your financial administration, work files, anything you produce, write, your main music collection, etc... It's also a great way to keep your music in sync between your desktop PC and your phone.

Servers have their configuration files, /etc, /var/log, /root, etc... rsynced every 15 minutes to the same two backup servers, also to snapshotted volumes. That way, should any one server burn down, I can rebuild it in a trivial amount of time. This also goes for user profiles, document directories, ProgramData, and anything non-synced on Windows PCs.

Specific data sets, like database backups, repositories and such are also generally rsynced regularly, some to snapshotted volumes, some to regulars, depending on the size and volatility of the data.

Bigger file shares, like movies, tv-shows, etc... I don't backup, but they're stored on a distributed GlusterFS, so if any one server goes down, that doesn't lose me everything just yet.

Hardware will fail, sooner or later. You should see any one device as essentially disposable, and have anything of worth synced and archived automatically.

I run everything in docker. I have an Ansible playbook that backs up all the docker volumes to a MinIO server I'm running on a separate machine. I periodically upload backups to IDrive e2 with the same playbook.
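
For anyone without Ansible, the same idea fits in a few lines of shell (volume names, the mc alias, and the bucket are placeholders, and it assumes the MinIO client is configured on the host):

  #!/bin/sh
  # tar each named docker volume, then push the archive to a MinIO bucket
  for VOL in app_data db_data; do
      docker run --rm -v "$VOL":/data:ro -v /tmp:/backup alpine \
          tar czf "/backup/$VOL-$(date +%F).tar.gz" -C /data .
      mc cp "/tmp/$VOL-$(date +%F).tar.gz" myminio/docker-volumes/
  done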

Proxmox Backup Server. It's life-changing. I back up every night and I can't tell you the number of times I've completely messed something up only to revert it in a matter of minutes to the nightly backup. You need a separate machine running it (something that kept me from doing it for the longest time), but it is 100% worth it.

I back that up to Backblaze B2 (using Duplicati currently, but I'm going to switch to Kopia), but thankfully I haven't had to use that, yet.

PBS backs up the host as well, right? Shame Veeam won't add Proxmox support. I really only back up my VMs and some basic configs

Veeam has been pretty good for my Hyper-V VMs, but I do wish I could find something a bit better. I've been hearing a lot about Proxmox lately. I wonder if it's worth switching to. I'm a MS guy myself so I just used what I know.

PBS only backs up the VMs and containers, not the host. That being said, the Proxmox host is super-easy to install and the VMs and containers all carry over, even if you, for example, botch an upgrade (ask me how I know...)

Then what's the advantage over just setting up the built-in snapshot backup tool, which unlike PBS can natively back up onto an SMB network share?

I'm not super familiar with how snapshots work, but that seems like a good solution. As I remember, what pushed me to PBS was the ability to make incremental backups to keep them from eating up storage space, which I'm not sure is possible with just the snapshots in Proxmox. I could be wrong, though.

You are right about the snapshots yeah. The built in backup doesn't seem to do incremental backups.

The simplicity of containerized setup:

  • docker-compose and Kubernetes YAML files are preserved in a git repo
  • nightly cron to create database dumps
  • nightly cron to run rsync to back up data volumes and database dumps to rsync.net (crontab sketch below)
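
A crontab sketch of those two nightly jobs (container, database, and path names are placeholders):

  # percent signs have to be escaped inside crontab entries
  15 2 * * *  docker exec db pg_dump -U app appdb | gzip > /srv/backups/appdb-$(date +\%F).sql.gz
  45 2 * * *  rsync -az /srv/backups /srv/volumes user@rsync.net:backups/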

Hourly backups with Borg, nightly syncs to B2. I've been playing around with zfs snapshots also, but I don't rely on them yet

I use Duplicati and back up the server to both another PC and the cloud. Unlike a lot of data hoarders I take a pretty minimalist approach, only backing up core (mostly docker) configs and the OS installation.

I have media lists but to me all that content is ephemeral and easily re-acquired so I don't include it.

Duplicati is great in many ways, but it's still considered to be in beta by its developers. I would not trust it if the data you back up is extremely important to you.

For config files, I use tarsnap. Each server has its own private key and an /etc/tarsnap.list file which lists the files/directories to back up on it. Then a cronjob runs every week to run tarsnap on them. It's very simple to back up and restore, as your backups are simply tar archives. The only caveat is that you cannot "browse" them without restoring them somewhere, but for config files it's pretty quick and cheap.
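
The weekly cron entry is a one-liner along these lines (key path and archive naming are placeholders; the \% is because cron treats % specially):

  0 3 * * 0  tarsnap -c --keyfile /root/tarsnap.key -f "backup-$(date +\%F)" -T /etc/tarsnap.list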

For actual data, I use a combination of rclone and dedup (because I was involved in the project at some point, but it's similar to Borg). I sync it to backblaze because that's the cheapest storage I could find. I use dedup to encrypt the backup before sending it to backblaze though. Restoration is very similar to tarsnap:

dup-unpack -k keyfile snapshot-yyyymmdd | tar -C / -x [files..] .

Most importantly, I keep a note on how to backup/restore: Backup 101

I use proxmox server and proxmox backup server (in a VM 🫣) to do encrypted backups.

A Raspberry Pi has SSH access to PBS; it rsyncs all the files and then uploads them to Backblaze using rclone.

https://2.5admins.com/ recommended "pull" backups, so if someone hacks your server they don't have access to your backups. If the Pi is hacked it can mess with everything, but the idea is that it has a smaller attack surface (just SSH).

PS. If you rclone a lot of files to Backblaze, use --fast-list (https://rclone.org/docs/#fast-list), or else it will get expensive.
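
i.e. something like this (remote and bucket names made up):

  rclone sync /srv/backups b2:my-bucket/server-backups --fast-list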

Veeam Agent going to a NAS on-site and the NAS is backed up nightly to IDrive because it's the cheapest cloud backup service I could find with Linux support. It's a bit slow, very CPU-bound, but it's robust and their support is pretty responsive.

Borgbackup, using borgmatic as a frontend, to a storage VPS. I backup dozens of machines this way. I simply add a user account for each machine on the VPS, then each machine backs up over ssh to its own account.
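
So each machine just points at its own account, roughly like this (hostnames, paths, and the chosen encryption mode are placeholders; borgmatic wraps the same thing plus pruning in a config file):

  # one-time repo creation, run from the client machine
  borg init --encryption=repokey ssh://machine1@storage-vps.example.com/./backups/machine1
  # nightly backup run
  borg create --stats ssh://machine1@storage-vps.example.com/./backups/machine1::'{hostname}-{now}' /etc /home /srv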

Rsnapshot on a second server, saving 7 daily backups, 4 weekly backups, and 6 monthly backups
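
The retention counts live in rsnapshot.conf; cron just triggers each level, something like this (times are arbitrary):

  30 3 * * *   rsnapshot daily
  0  3 * * 1   rsnapshot weekly
  30 2 1 * *   rsnapshot monthly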

Same setup!

Additionally, about once a month I will grab a copy of the latest daily backup to a USB drive and store it away from the server

I'm lucky enough that my backup server is at my parents' place in their basement, so it's off-site already

restic backup to Azure and Backblaze

ZFS array using striping and parity. Daily snapshots get backed up to another machine on the network. 2 external hard drives with mirrors of the backup rotate between my home and office weekly-ish.

I can lose 2 hard drives from the array at the same time without suffering data loss. Any accidentally deleted files can be restored from a snapshot. If my house is hit by a meteor, I lose at most 3-4 days of snapshots.

I use duplicacy to backup to my local NAS and to Storj.io. In case of a fire I'm always able to restore my files. Storj.io is cheap, easy to access from any location and your files are stored and duplicated on multiple different locations.

+1 for Duplicacy. Been using it solidly for nearly 6 years - with local storage, sftp, and cloud. Rclone for chonky media. Veeam Agent for local PC backups as a secondary method.

Using ESXi as a hypervisor, so I rely on Veeam. I have copy jobs to take it from local to an external, plus a copy up to the cloud.

A simple script using duplicity to FTP data on my private website with infinite storage. I can't say if it's good or not. It's my first time doing it.

How do you have infinite storage? Gsuite?

I confirm that in the terms and conditions they discourage using it as private cloud backup; it's only meant to host stuff related to the website. So far I've had no complaints, as I've been paying and have kept the traffic to a minimum. I guess I'll have to switch to something more cloud-oriented if I keep expanding. But it's worked for now!

Proxmox backs up the VMs -> backups are uploaded to the cloud.

I run everything in containers, so I rsync my entire docker directory to my NAS, which in turn backs it up to the cloud.

Running a Duplicacy container backing up to Google drive for some stuff and Backblaze for mostly all other data. Been using it for a couple years with no issues. The GUI and scheduling is really nice too.

I use Bacula to an external drive. It was a pain in the ass to configure, but once it's running it's super reliable and easily extended to other drives or folders.

Cronjobs and rclone have been enough for me for the past year or so. Interestingly, I've only needed to restore from a backup once after a broken update. It felt great fixing that problem so easily.

I have 2 servers that backup to each other. I also use B2 for photos and important stuff.

Almost all the services I host run in docker containers (or userland systemd services). What I back up are sqlite databases containing the config or plain data. Every day, my NAS rsyncs the db from my server onto its local storage, and I have Hyper Backup back up the backups into an encrypted S3 bucket. HB keeps the last n versions and manages their lifecycle. It's all pretty handy!

My server is a DiskStation, so I use HyperBackup to do an encrypted backup of the important data to their Synology C2 service every night.

I run Linux for everything; the nice thing is everything is a file, so I use rsync to back up all my configs for physical servers. I can do a clean install, run my setup script, then rsync over the config files, reboot, and everyone's happy.

For the actual data I also rsync from my main server to others. Each server has a schedule for when they get rsynced to so I have a history of about 3 weeks.

For virtual servers I just use the Proxmox built-in backup system, which works great.

Very important files get encrypted and sent to the cloud as well, but out of dozens of TB this only accounts for a few gigs.

I've also never thrown out a disk or USB stick in my life and use them for archiving; even if the drive is half dead, as long as it'll accept data I shove a copy of something on it, label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half these drives end up not working. I keep most of these off-site. At some point I'll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just make no sense to bother with.

Kopia to Backblaze B2 is what I generally use for off-site backups of my devices. Borg's another good option to look at, but not as friction-less in my experience. There are a couple of additional features that are available in Kopia that are nice to have and are not in Borg (i.e. error correction, file de-duplication) from what I recall. edit: borg does do de-duplication

rsync + borg, but looking at bupstash

My home server's a Windows box, so I use Backblaze, which has unlimited storage for a reasonable fixed price. Have around 11TB backed up. Pay the extra few dollars for the extended 12 month retention of deleted files, which has saved me a few times when I needed to restore a file I couldn’t find.

Locally I run StableBit DrivePool and content is mirrored and pooled using that, which covers me for drive failures.

If you are using Kubernetes, you can use Longhorn to provision PVCs. It offers easy S3 backup along with snapshots. It has saved me a few times.

I have an rsync script that pulls a backup every night from my TrueNAS server to my Synology.

I've been thinking about setting up something with rsync.net so I have a cloud copy of my most important files.

I’m backing up my stuff over to Storj DCS (basically S3 but distributed over several regions) and it’s been working like a charm for the better part of a year. Quite cheap as well, similar to Backblaze.

For me the upside was I could prepay with crypto and not use any credit card.

Don't overthink it... servers/workstations rsync to a NAS, then sync that NAS to another NAS offsite.

For my webserver: mysqldump to a secured folder, then restic backs up the whole /svr folder, then rsync copies the restic backup to another server. I also have a system that emails me if these things don't happen daily: the log files are uploaded to a URL, the log file is checked for simple errors, and if no file is uploaded in time, I get an email.

Of course, in my case, the URL the files are uploaded to, and the email server... are the same server I'm backing up... but at least if that becomes a problem, I probably only need the backups I've already made to my second server.

Compressed pg_dump rsync’ed to off-site server.

I backup using a simple rsync script to a Hetzner storage box.

ZFS raidz2 pool. Not a perfect backup, but it covers disk failure (already lost one disk with no data loss) and accidental file deletion. I'm vulnerable to my house burning down, but overall I sleep well enough.

It’s kind of broken at the moment, but I have set up duplicity to create encrypted backups to Backblaze B2 buckets.

Of course the proper way would be to back up to at least 2 more locations. Perhaps a local NAS for starters. Also could be configured in duplicity.

  • kopia backup to 2nd disk
  • kopia backup to B2 cloud
  • Duplicati backup to Google Drive (only the most important folder, <1GB)

Most of the files are actually in Nextcloud, so I get one more copy of the files (not a backup) on my PC by syncing with the Nextcloud app.

All my backups are in /home/Ryan/Documents. Please don't break my Minecraft server.

Not what you mean, but I use BDR ShadowProtect and Datto, depending on the customer's budget.

The Veeam Backup and Recovery not-for-retail license covers up to 10 workloads. I then S3 offsite to Backblaze.

TrueNAS ZFS snapshots, and then a weekly cron rsync to a Servarica VPS with unlimited expanding storage.

If you use a VPS as a backup target, you can also format it with ZFS and use replication. Sending snapshots is faster than using a file-level backup tool, especially with a lot of small files.

Interesting, I have noticed it's very slow with initial backups. So snapshot replication sends one large file? What if you want to recover individual files?
