What backup service do you use?

Landrin201@lemmy.ml to Selfhosted@lemmy.world – 109 points –

I just got my home server up and running and was wondering what you guys recommend for backups. I figure it will probably be worth having backups on cloud servers that are external; are there any good services y'all use for that?

BorgBase with borgmatic (Borg) as the software. As far as I know, the whole BorgBase service is run by a homelab guy (with our needs in mind).

Also: the 3-2-1 rule! (Three copies of your data, on two different media, with one offsite.)
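
For reference, a minimal borgmatic config for a BorgBase repo looks roughly like this (the repo URL and paths are placeholders, and the section layout varies between borgmatic versions):

# /etc/borgmatic/config.yaml (sketch)
location:
    source_directories:
        - /home
        - /etc
    repositories:
        - ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo

storage:
    encryption_passphrase: "use-a-real-passphrase"

retention:
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6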

Regardless of service, if you don't test your backups, you have none.

Ehhh I would say then you have probabilistic backups. There's some percent chance they're okay, and some percent chance they're useless. (And maybe some percent chance they're in between those extremes.) With the odds probably not in your favor. 😄

Not so much about testing, but one time when I really needed to get to my backups, I'd lost the password to the repository (I'm using restic). Luckily a copy of it was stored in Bitwarden, but the moments until I remembered that were perhaps some of the worst.

Needless to say, please test your backups and store secrets in more than one place.
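
For restic specifically, a quick confidence check plus a trial restore can be as simple as this (repo path and include path are placeholders):

# Verify a random 10% of the pack data actually reads back correctly
restic -r /srv/restic-repo check --read-data-subset=10%

# Restore a small subset somewhere disposable and eyeball it
restic -r /srv/restic-repo restore latest --target /tmp/restore-test --include /home/user/Documents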

I have an Unraid server which hosts a Docker image of Duplicacy (the web interface is paid, though). It backs up to Backblaze B2. I have roughly 175 GB backed up, for which I pay $0.87 a month.

This is almost my exact backup workflow, with another location in between. Duplicacy is great, highly recommend.

Paid for the web interface as well. I really like that it's super simple and just does its job. That would be the one I'd also recommend.

Do you have other clients backing up to your Unraid? I'm looking for a complete solution for backing up end-user workstations (Windows, Mac, and Linux) to my Unraid server, then backing up my Unraid server to something like Wasabi, Amazon, Backblaze, etc. Preferably a single solution.

Yes, I have another server automatically rsyncing important config files to an NFS share. And my PC has a Samba share that I manually back up files to.
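
The rsync side of that can be a single cron entry, something like this sketch (host and paths are made up):

# crontab: mirror /etc to the NAS share nightly at 02:00
0 2 * * * rsync -a --delete /etc/ nas:/volume1/backups/server1/etc/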

Look into Veeam. The free version should be enough for this workflow.

rsync.net is great if you need something simple and cheap. Backblaze B2 is also decent, but it does have the typical download and API usage costs.

I had never heard of rsync.net until now. I like the idea but it seems more expensive than B2. $15/TB vs $5/TB. Am I doing the math wrong or reading it wrong?

I've never heard of it either, but I came to the same conclusion as you.

Yeah rsync.net has always been pricey.

I don't see it on their website right now, but they offer a discount if you're using something like restic/borg and only need scp/sftp access. Their support is also super friendly. I've had an account forever and got moved to the 100+ TB pricing even though I have < 50TB stored. YMMV but it doesn't hurt to ask if they have any additional discounts.

Also keep in mind that B2 charges for bandwidth too. It's $5/TB for storage, but $10/TB to download that same data.

Sure but backup is mostly data in (free on B2). Data out is rare, if ever.

If I wasn't backing up 12 TB+, I would actually go with rsync.net for the features, though.

Borgbase looks interesting, too.

I use rsync and backblaze b2.

I use it for version control and cost: about £2 for 750 GB.

Backblaze B2, borgbase.com. There are also programs like Déjà Dup that will let you back up to popular cloud drives. The alternatives are limitless.

I use Restic + Resticprofile to back up everything and store it on my local HDD.

Then, I use Rclone to sync the local repository to Backblaze B2.

Here's my general setup:

/.config/restic/
โ”œโ”€โ”€ logs
โ”‚   โ”œโ”€โ”€ statuses
โ”‚   โ”‚   โ”œโ”€โ”€ restic-status-20230202T020202.json
โ”‚   โ”‚   โ””โ”€โ”€ restic-status-20230101T010101.json
โ”‚   โ”œโ”€โ”€ restic-check-20230202T020202.log
โ”‚   โ””โ”€โ”€ restic-backup-20230101T010101.log
โ”œโ”€โ”€ config
โ”‚   โ”œโ”€โ”€ profiles.yaml
โ”‚   โ”œโ”€โ”€ excludes.txt
โ”‚   โ”œโ”€โ”€ rclone.conf
โ”‚   โ””โ”€โ”€ password.txt
โ”œโ”€โ”€ bin
โ”‚   โ”œโ”€โ”€ restic_0.15.2_linux_arm64
โ”‚   โ”œโ”€โ”€ rclone_1.63.1_linux_arm64
โ”‚   โ””โ”€โ”€ resticprofile_0.22.0_linux_arm64
And the profiles.yaml:

version: "1"

# Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events)
{{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }}       # Daily at 10PM
{{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }}    # Weekly at 4AM on Saturday
{{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }}     # Weekly at 9.30PM on Sunday
{{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday

# Directories
{{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }}
{{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }}
{{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }}
{{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }}
{{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }}
{{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }}
{{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }}
{{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }}
{{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }}
{{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }}
{{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }}

# Configs
{{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }}
{{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }}
{{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/config/excludes.txt" }}

global:
  default-command: snapshots                      # Run 'snapshots' when no command is specified
  initialize: false                               # Do not initialize a repository if none exists
  priority: low                                   # Use priority class on Windows and "nice" on Unixes
  min-memory: 100                                 # Minimum required RAM for Resticprofile to start
  restic-lock-retry-after: 5m                     # Retry acquiring the restic lock every 5 minutes
  restic-stale-lock-age: 10h                      # Unlock stale lock if age exceeds 10 hours
  restic-binary: '{{ $LOCATION_RESTIC_BINARY }}'  # Location of the Restic binary

default:
  lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}'      # Local lockfile to prevent concurrent profile runs
  force-inactive-lock: true                       # Detect and remove stale locks
  initialize: true                                # Initialize repository if it doesn't exist
  repository: '{{ $LOCATION_RESTIC_REPO }}'       # Path to Restic repository
  password-file: '{{ $CONFIG_RESTIC_PASSWORD }}'  # File containing repository password
  status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json'  # Output status file
  compression: 'max'                              # Maximum compression level
  run-after-fail:                                 # Block syncing if there was a failure. TODO: Add an email
    - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." > {{ $LOCATION_RESTIC_BLOCKED_FILE }}'

  backup:
    run-before:                                   # Bring down Docker before backup
      - 'systemctl stop docker.socket'
      - 'systemctl stop docker'
    run-finally:
      - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log'  # Copy log file, stripping out any unchanged files
      - 'systemctl start docker'                  # Bring Docker back online after backup
    one-file-system: false                        # Don't stop at filesystem boundaries; back up across mounts
    no-error-on-warning: true                     # Don't consider warnings as backup failures
    source:                                       # Directories to back up
      - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}'
    exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns
    exclude-caches: true                          # Exclude cache files
    schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}'     # Backup schedule
    schedule-permission: system                   # Schedule permission
    schedule-lock-wait: 10m                       # Wait time for the lock during schedule
    schedule-log: '{{ tempFile "backup.log" }}'   # Log file to /tmp. This contains all information, including unchanged files which we do not care about
    verbose: 2                                    # Log details about processed files

  check:
    schedule: '{{ $SCHEDULE_RESTIC_CHECK }}'      # Verification schedule
    schedule-permission: system                   # Schedule permission
    schedule-lock-wait: 10m                       # Wait time for the lock during schedule
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log'  # Log file
    read-data: true                               # Verify data during check

  prune:
    dry-run: true                                 # Only prune if safe to do so, change manually
    repack-uncompressed: true                     # Repack all uncompressed data

  forget:
    dry-run: true                                 # Only forget if safe to do so, change manually

  rewrite:
    dry-run: true                                 # Only rewrite if safe to do so, change manually
    forget: true                                  # Remove original snapshots after creating new ones
    exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns

  mount:
    allow-other: true                             # Allow other users to access the mount point

  rebuild-index:
    read-all-packs: true                          # Read all pack files to generate new index from scratch

# The following shell profiles are simply to run other shell scripts at a scheduled time
# We do not actually run the primary Restic commands listed, as we exit the process early

shell-postgres:                                   # Profile to run shell scripts only. We exit the current process before Restic can run.
  backup:
    schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}'   # Postgres backup schedule
    schedule-permission: system                   # Schedule permission
    schedule-lock-mode: ignore                    # Ignore locks, if any
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log'  # Log file
    dry-run: true                                 # Don't write data
    run-before:                                   # Dump postgres databases
      - 'chmod 777 /var/run/docker.sock'
      - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
      - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
      - 'kill $$'

shell-sync:
  backup:
    schedule: '{{ $SCHEDULE_SYNC_BACKUP }}'       # Sync backup schedule
    schedule-permission: system                   # Schedule permission
    schedule-lock-mode: ignore                    # Ignore locks, if any
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log'  # Log file
    dry-run: true                                 # Don't write data
    run-before:                                   # Sync the Restic repo, after checking if the repository is in good health
      - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi'
      - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete'
      - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RESTIC_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}'
      - 'kill $$'

Resticprofile doesn't let me run arbitrary shell commands on a schedule, and because I wanted everything in a single configuration, I just created two extra profiles that call the backup command. The shell commands run before Restic does, and the final kill $$ stops the process before Restic actually runs, which effectively gives me scheduled shell scripts.

This is the first time I've heard about resticprofile, and it looks nice. So far I've been using crestic for configuration files. Do you know how they compare?

It seems like they have the same objective: allow for easier configuration of Restic. I'd never heard of crestic until now. I'd say stick with what you're comfortable with.

I use restic to back up my Raspberry Pis to my Synology NAS, and back up my NAS to Backblaze.

Restic or Kopia, both to Backblaze.

I second restic. I've been using it for a year now and have been generally very happy. I've actually had to use it on a couple of occasions to restore directory contents and even recover a complete workstation drive. I had relatively easy success in both scenarios.

I've always found them pretty similar. How did you choose one over the other?

I knew Restic before Kopia and made a set of systemd units to run Restic backups on my home server and office workstation (both online 24/7).
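
A sketch of that kind of unit pair (paths and the environment file are placeholders):

# /etc/systemd/system/restic-backup.service
[Unit]
Description=Restic backup

[Service]
Type=oneshot
# The env file defines RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE
EnvironmentFile=/etc/restic/env
ExecStart=/usr/bin/restic backup /home /etc --exclude-caches

# /etc/systemd/system/restic-backup.timer
[Unit]
Description=Daily restic backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable with systemctl enable --now restic-backup.timer.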

Kopia seems much nicer for a regular user, so I use it on my and family laptops. I used to use Duplicati there, but that project seems dead.

Restic and then rclone to backblaze? Or is there a way to restic directly to backblaze?

I do prefer having a local copy of my backups (and therefore I use rclone), but AFAIK restic does support B2 directly...
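
Roughly, direct B2 use looks like this (bucket and key values are placeholders):

export B2_ACCOUNT_ID="000xxxxxxxxxxxx"
export B2_ACCOUNT_KEY="K000xxxxxxxxxxxxxxxx"
restic -r b2:my-bucket:restic init      # one-time repository setup
restic -r b2:my-bucket:restic backup /home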

+1 for Backblaze. I use Docker for everything, with volumes mounted directly in the folder alongside a docker-compose file. So I just tar my services directory with everything in it and pipe it to rclone, which connects to Backblaze and has a "cat" feature so you can pipe data directly to the destination.
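
That "cat" feature is rclone's rcat subcommand, which streams stdin straight to a remote object. The pipeline is roughly (remote and bucket names are placeholders):

tar -czf - /srv/services | rclone rcat b2:my-bucket/services-$(date +%F).tar.gz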

rsync.net, and learn to use Borg; they're stupid cheap if you're technically proficient enough to handle the Borg setup yourself. Like, they charge by the gigabyte, but it's 1.5¢/GB at the most expensive, and cheaper in bulk.
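
Assuming an rsync.net account, the Borg side is roughly this (username and host are placeholders):

borg init --encryption=repokey ssh://user1234@user1234.rsync.net/./backups
borg create --stats ssh://user1234@user1234.rsync.net/./backups::'{hostname}-{now}' /home
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://user1234@user1234.rsync.net/./backups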

borg with an external hard drive and BorgBase as a remote. I use the 2-2-1 rule (🙈), as I struggle to find a good way to do another backup, and RAID does not count 😬

Backblaze B2 for automatic syncing of all the little files

Glacier for long-term archiving of old, big files that never change.

I use Syncthing to back up our cell phones to my on-prem server, and then use Backblaze Personal Backup for a cloud copy.

Veeam Backup & Replication at home and at work. At home, one copy goes to a NAS and another currently goes to Backblaze B2.

  • restic > backblaze b2, nightly & automatic
  • restic > normally unplugged drive, every couple weeks (manual, recurring reminder)

I use Duplicati connected to Storj with data volumes that incrementally get backed up once per month. My files don't change very often, so monthly is a good balance. Not counting my Jellyfin library, those backups are around 1 TB. With the Jellyfin library, almost 15 TB.

Earlier this year, I recovered from a 100% data loss scenario, as I didn't (and still don't) have space for physical backups. I have a 25 TB allowance, so my actual cost was €0. If I had to pay, it would have been under €1.

Do you mean 25 TB? The Storj site says 25 GB. Did some promotion give you that much for free?

Definitely 25 TB. I've used the service for a long time, since before they accepted credit cards. I attached my credit card one day and got a bump to 25 TB. Since that happened, I pay basically nothing and my account is still 100% storj token funded.

Edit: I dug up screenshots I sent someone recently

my account is still 100% storj token funded

That seems to be the key bit, since everyone can use up to 25 TB (if they can pay for it). Are you also hosting a node to earn tokens?

That looks like a cool setup, but I would never trust important data to some crypto shit (Storj) no matter what kind of track record they have.

That's fair. I'm 100% on board the decentralisation train, and try my hardest to practice what I preach. In the event that the service does go bust, I can make a backup on a different S3-compatible service immediately, as long as my working copy is intact. The likelihood of the backup service AND the working copy dying at the exact same time would be my cue to take up knitting.

I used to have everything backed up to a 2 TB USB drive, which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.

I now have a Synology NAS with 12 TB in a RAID 5 array (for a bit of disk redundancy). All my home devices, Proxmox servers, etc. back up to it. The NAS also holds a few TB of media. Attached to it I have a USB hard drive (also 12 TB), and the NAS gets fully backed up to the USB drive nightly.

I also have a remote Raspberry Pi with a smaller USB drive (4 TB) attached, at my brother's house (in another country), where I back up most of the contents of my home NAS. I don't back up the media, just the important stuff. I might have to upgrade to a larger drive...

I used to have everything backed up to a 2 TB USB drive, which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.

If it's the only copy, it's not a backup. It's the master.

I use Wasabi S3; I back up to it using restic.

Backups and archived files go to my home server, which then backs up to Backblaze B2.

My setup exactly, with the addition of M-Discs to back up my core important stuff.

Duplicati to Backblaze B2 for the important stuff. As far as the media library goes, no backup, just a local RAID setup...

Once a day I rsync my data to another drive, so I can restore a file if I accidentally delete it. Important stuff additionally goes, encrypted via rclone, to a Hetzner Storage Box.
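
The rclone side of that can be a crypt remote wrapped around the Storage Box's SFTP access; a sketch (host, user, and remote names are placeholders):

# rclone.conf
[storagebox]
type = sftp
host = u123456.your-storagebox.de
user = u123456
port = 23

[storagebox-crypt]
type = crypt
remote = storagebox:backup
password = <output of 'rclone obscure ...'>

# then sync the important stuff, encrypted:
rclone sync /data/important storagebox-crypt: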

Duplicati, to a friend's home server who lives in another town.

I hate to ask the scary question, but have you tried restoring your backups? I used Duplicati and discovered that none of my backups were usable, and ended up switching to Duplicacy.

An important question though.

I have, when I first set it up, and again once when I needed to.

It works just fine for me, but I've heard scary stories, so now I'm using:

  1. Kopia to Backblaze B2 (all data)
  2. Kopia to a local disk (all data)
  3. Duplicati to Google Drive (only one folder)

To back up my Synology: My first level is an old Synology, the second is Amazon Glacier.

I use OneDrive. Buy the Costco subscription and get like 15 months for around 110 CAD; it gives 6 TB. I create some fake accounts and link the sharing to my main account. I have an encrypted rclone share for some things, and for others I GPG-encrypt the tar before sending it up. It's been working fine for a couple of years, and I have multiple TB backed up.
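
The GPG half of that is roughly this (recipient and remote names are placeholders):

# Tar, compress, and encrypt to your own key, then upload
tar -czf - /srv/data | gpg --encrypt --recipient you@example.com --output data-backup.tar.gz.gpg
rclone copy data-backup.tar.gz.gpg onedrive:backups/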

AWS Glacier. I use the Synology plugin that does it automatically on a schedule.

https://aws.amazon.com/es/s3/storage-classes/glacier/

Their prices are ridiculous if you add the cost of outbound traffic.

But if not (for disaster recovery only), it is pretty cheap. Like $1/TB/month.

I hope to never have to restore from there. It's not something you're going to do frequently.

I use nightly borg backup to a separate box and then that box uses rclone to back up the borg repo offsite. Before running the borg backup I export all databases and docker volumes so they get picked up.
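
As a sketch, that nightly job might look like this (container, repo, and remote names are placeholders):

#!/bin/sh
# 1. Dump databases so the backup contains a consistent copy
docker exec postgres pg_dumpall -U postgres | gzip > /srv/dumps/postgres.sql.gz
# 2. Snapshot everything into the local borg repo
borg create --stats /srv/borg-repo::'{hostname}-{now}' /srv/dumps /srv/volumes
# 3. Push the repo offsite
rclone sync /srv/borg-repo remote:borg-repo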

Restic using resticprofile to configure and schedule backup runs.

I have been with IDrive since 2009. At the time, they were the only ones that allowed backups of network-attached storage on their cheaper personal plans. Everyone else saw that as an "enterprise" feature that required a business plan. Which was bullsh*t, because lots of home NAS devices were being sold.

Anyway, I haven't done a recent comparison of services, but I remain happy with idrive.

These days I no longer back up via a computer with a mapped drive, but directly from my NAS, which runs the IDrive software.

I had a catastrophic dual-drive failure a few years ago: one drive failed, and another failed during the RAID rebuild! I was able to restore about 1 TB of data and didn't lose anything important.

They also offer backup and restore by shipping a drive to you, if you want to avoid the huge initial upload or a total restore over the network, but I haven't used that feature.

They do also have a mobile app, but last time I tried it, it wasn't great.