I’ve been wanting to update my backup solution. Right now, things are just synced over to a rented OVH (Kimsufi) server and Google Drive, since I don’t have much irreplaceable data to back up yet, but these backups aren’t encrypted and I’d like them to be. Along the way I ran into Wasabi (https://wasabi.com/): they offer cheap object storage and the speeds were incredible, so I decided to set up backups with them.
Note: Wasabi enforces a minimum 90-day retention policy, so from the moment you delete a file, you still pay for it for 90 days. It’s an annoying system, but at a price point of $5.99 for 1 TB with no data transfer fees, it’s hard to complain.
Restic is a lightweight tool that can interface with S3-compatible object storage APIs and encrypts files before transmitting them. This makes it ideal when I want an offsite backup that’s fully encrypted both in transit and at rest, and its configuration is very simple.
For this tutorial, I’m running Ubuntu 18.04, so I’ll use snap to install Restic. You can do this by running:
snap install restic --classic
Note: The --classic flag is required as of the time of writing this tutorial.
Once that’s installed, sign in to your object storage provider (in this case, Wasabi) and create a bucket. My approach is to create one bucket for the hypervisor disk images and one bucket for important files (e.g., all of my Nextcloud files).
Creating our Restic Keys
Restic reads its configuration from environment variables, which is the easiest method to use. Let’s create a .restic-env file in the desired user’s home directory and put the following content inside it:
export AWS_ACCESS_KEY_ID="OBJECT_STORAGE_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="OBJECT_STORAGE_SECRET_KEY"
export RESTIC_PASSWORD="ENCRYPTION PASSWORD - GENERATE LONG PASSWORD AND WRITE IT DOWN"
Note: I recommend using a password manager like KeePassX or Bitwarden to store these credentials securely. If you lose the encryption password, you will not be able to recover your files.
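If you’d rather not invent a password by hand, the env file can be generated with a random one. This is a minimal sketch (the access and secret key values are placeholders you must replace with the real keys from your provider’s console):

```shell
# Generate a 48-character random password for restic's encryption.
PASSWORD="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 48)"

# Write the environment file; the two key values are placeholders --
# substitute the real keys from your object storage provider.
cat > ~/.restic-env <<EOF
export AWS_ACCESS_KEY_ID="OBJECT_STORAGE_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="OBJECT_STORAGE_SECRET_KEY"
export RESTIC_PASSWORD="${PASSWORD}"
EOF

# Restrict the file so only this user can read the credentials.
chmod 600 ~/.restic-env

echo "Store this password in your password manager: ${PASSWORD}"
```

Record the printed password before closing the terminal. Also note that before the first backup runs, the repository has to be initialized once with restic init (with RESTIC_REPOSITORY pointed at your bucket and the env file sourced).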
Automating our Backups
Now we’ll create a script called “backup-files.sh”, which will run automatically via cron and back up our files routinely. Be mindful that uploading files consumes bandwidth, so if yours is metered, I’d suggest backing up only what you can’t live without.
#!/bin/bash
set -e

source ~/.restic-env
export RESTIC_REPOSITORY="s3:s3.apihere.com/BUCKETHOSTNAME"

echo "Starting backup"
restic backup /root --exclude .cache --exclude .local
restic backup /var/lib/vz/templates/
restic backup /your/folder/goes/here/

echo "Instructing S3 to delete files after certain period"
restic forget --prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12

echo "Finished backup"
This script backs up the directories we care about, then prunes old snapshots from our object storage according to the retention policy (keeping 7 daily, 4 weekly, and 12 monthly snapshots). Next, I’ve dropped a crontab file in the user directory called “crontab”; its contents are:
0 2 * * * ionice -c2 -n7 nice -n19 bash /home/backupuser/backup-files.sh > /var/log/backup-status.log 2>&1
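The ionice and nice wrappers in that cron line are what keep the backup from competing with interactive workloads: ionice -c2 -n7 puts the job in the best-effort I/O scheduling class at its lowest priority, and nice -n19 gives it the lowest CPU priority. A quick way to see the wrapping in action (the echoed message here is just a stand-in for the backup script):

```shell
# ionice -c2 -n7 : best-effort I/O scheduling class, lowest priority (7)
# nice -n19      : lowest CPU scheduling priority
# The wrapped command runs normally, just deprioritised under load.
ionice -c2 -n7 nice -n19 bash -c 'echo "backup running at low priority"'
```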
We’ve now got our backups set up: the job runs nightly at 02:00, goes easy on I/O and CPU, and writes its output to a log file at /var/log/backup-status.log.
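To check on the job, it’s enough to look for the final “Finished backup” line in the log. The snippet below demonstrates the check against a simulated log file (in real use you’d point it at /var/log/backup-status.log, which cron writes):

```shell
# Simulate a log from a completed run; cron writes the real one to
# /var/log/backup-status.log as configured above.
LOG=/tmp/backup-status.log
printf 'Starting backup\nFinished backup\n' > "$LOG"

# The script echoes "Finished backup" only after every restic command
# exits cleanly (set -e aborts the run on any failure), so its presence
# means the run succeeded.
if grep -q "Finished backup" "$LOG"; then
  echo "last backup completed"
else
  echo "last backup did not finish -- inspect $LOG"
fi
```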
That’s all we needed to do: restic encrypts the files and sends them to the S3 API automatically. Our work here is done!
I hope you enjoyed the tutorial, feel free to leave a comment below with any questions or feedback, and I’d be happy to help!