Most server disasters do not look like disasters until it is too late. A misconfigured deployment overwrites your data. A failed update breaks your database. A compromised account deletes your files. In every one of these scenarios, the only thing standing between you and starting over is your last working backup.
The problem is that manual backups do not happen consistently. Life gets busy, deployments happen fast, and "I'll set up backups properly later" becomes a permanent state. Automated backups remove the human variable entirely.
This guide walks you through a production-grade backup setup using rsync and cron — two tools already available on virtually every Linux VPS, requiring zero additional software to install.
## Why rsync is the right tool for VPS backups
Rsync (Remote Sync) is a file synchronization utility that only transfers changed data. On the first run it copies everything. On subsequent runs it copies only what has changed since the last backup. This makes it fast for daily runs and efficient on storage.
Other advantages of rsync for backups:
- Preserves file permissions, timestamps, and symlinks
- Works over SSH for remote backups
- Resumable if a transfer is interrupted
- Widely supported and well-documented
For most Linux VPS workloads — web files, configuration, and databases — rsync covers your backup needs completely.
## Step 1: Create the backup directory structure
```bash
sudo mkdir -p /opt/backups/{daily,weekly,logs}
sudo chown -R $USER:$USER /opt/backups
```
This creates three directories:
- `daily/` — timestamped daily snapshots
- `weekly/` — weekly retained copies
- `logs/` — backup run logs for review and troubleshooting
Keeping logs is important. When a backup fails silently, you want a record to audit.
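A one-liner like this makes that audit quick; the demo below uses hypothetical files under `/tmp` so you can see the behavior before pointing it at real logs.

```shell
# Hypothetical demo: flag backup logs that recorded an error
mkdir -p /tmp/backup-logs-demo
printf '[2026-04-09] Starting backup...\n[2026-04-09] Backup complete.\n' > /tmp/backup-logs-demo/good.log
printf '[2026-04-10] rsync error: some files could not be transferred\n' > /tmp/backup-logs-demo/bad.log

# -i: case-insensitive match, -l: print only the names of matching files
grep -il "error" /tmp/backup-logs-demo/*.log
```

On a real server, point the same `grep` at `/opt/backups/logs/*.log`, or wire it into a cron job that notifies you when it finds a match.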
## Step 2: Write the backup script
```bash
nano ~/backup-site.sh
```
Paste this content:
```bash
#!/usr/bin/env bash
set -euo pipefail

STAMP="$(date +%F-%H%M)"
SRC="/var/www"
DEST="/opt/backups/daily/$STAMP"
LOG="/opt/backups/logs/backup-$STAMP.log"

mkdir -p "$DEST"

echo "[$STAMP] Starting backup..." >> "$LOG"
rsync -a --delete "$SRC/" "$DEST/" >> "$LOG" 2>&1
echo "[$STAMP] Backup complete." >> "$LOG"

# Retain last 7 daily backups, remove older ones
ls -1dt /opt/backups/daily/* | tail -n +8 | xargs -r rm -rf
echo "[$STAMP] Retention cleanup done." >> "$LOG"
```
Make it executable:
```bash
chmod +x ~/backup-site.sh
```
What each part does:
- `set -euo pipefail` — stops the script immediately if any command fails, preventing partial backups that look successful
- `STAMP` — timestamp used for directory naming and log entries
- `rsync -a --delete` — archive mode (preserves metadata) plus removal of files from the backup that were deleted from the source
- The retention block — keeps 7 daily backups and removes older ones automatically
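Because `--delete` mirrors deletions, a file removed from the source disappears from that backup on the next run, which is exactly why the script keeps timestamped snapshots rather than one rolling copy. A throwaway sketch of the behavior, using hypothetical `/tmp` paths:

```shell
# Throwaway demo: --delete mirrors deletions from source into the backup
mkdir -p /tmp/del-demo/src /tmp/del-demo/dst
touch /tmp/del-demo/src/keep.txt /tmp/del-demo/src/gone.txt

rsync -a /tmp/del-demo/src/ /tmp/del-demo/dst/          # both files backed up
rm /tmp/del-demo/src/gone.txt
rsync -a --delete /tmp/del-demo/src/ /tmp/del-demo/dst/

ls /tmp/del-demo/dst/   # only keep.txt remains in the backup
```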
## Step 3: Run it manually first and verify
Never schedule a script you have not manually verified:
```bash
~/backup-site.sh
ls -lah /opt/backups/daily/
cat /opt/backups/logs/*.log
```
Confirm the backup directory exists, file counts look right, and the log shows no errors. Only proceed to scheduling after a clean manual run.
## Step 4: Schedule with cron
```bash
crontab -e
```
Add this line:
```cron
15 2 * * * /home/ubuntu/backup-site.sh
```
This runs the backup every day at 2:15 AM server time — typically the lowest-traffic window for most applications. Adjust the time based on your traffic patterns and server timezone.
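Cron fires on the server's local time, not yours, so confirm what the clock actually says before committing to a slot. A quick check (the `timedatectl` line applies to systemd-based distros and is guarded in case the command is not available):

```shell
# What time does cron think it is?
date   # current local time on the server

# On systemd-based distros, show the configured timezone
command -v timedatectl >/dev/null && timedatectl | grep "Time zone" || true
```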
To verify cron picked up the schedule:
```bash
crontab -l
```
## Step 5: Add database backups
File backups are not enough if your application uses a database. Add a database dump to your script or run it as a separate cron entry.
MySQL / MariaDB:
```bash
mysqldump -u backupuser -p'your_password' --all-databases | gzip > /opt/backups/daily/db-$(date +%F).sql.gz
```
PostgreSQL:
```bash
pg_dumpall -U postgres | gzip > /opt/backups/daily/db-$(date +%F-%H%M).sql.gz
```
Security note: Do not hardcode credentials in production scripts. Use MySQL's .my.cnf config file or environment variables to pass credentials securely:
```ini
# ~/.my.cnf
[client]
user=backupuser
password=your_password
```

Restrict access with `chmod 600 ~/.my.cnf` so only your user can read the stored password.
Then the dump command becomes:
```bash
mysqldump --all-databases | gzip > /opt/backups/daily/db-$(date +%F).sql.gz
```
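If you run the dump as its own cron entry rather than inside the backup script, remember that cron treats a bare `%` in a command as a newline, so the `date` format string must be escaped. A hypothetical entry:

```cron
# Nightly dump at 2:45 AM; note the escaped \% in the date format
45 2 * * * mysqldump --all-databases | gzip > /opt/backups/daily/db-$(date +\%F).sql.gz
```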
## Step 6: Add a weekly retention layer
Daily backups cover recent history. For longer recovery windows, add a weekly copy:
```bash
crontab -e
```
Add:
```cron
30 3 * * 0 cp -al /opt/backups/daily/$(ls -1t /opt/backups/daily/ | head -1) /opt/backups/weekly/weekly-$(date +\%F)
```

Note the escaped `\%`: cron treats an unescaped `%` in a command as a newline, so `date +%F` must be written as `date +\%F` inside a crontab entry.
This runs every Sunday at 3:30 AM and hard-links the most recent daily backup into a weekly folder. Hard links save disk space — they reference the same data blocks rather than duplicating files.
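You can verify the space-saving claim by checking inode numbers; hard-linked files share a single inode, so the data exists on disk only once. A throwaway sketch using hypothetical `/tmp` paths:

```shell
# Throwaway demo: cp -al creates hard links, not copies
mkdir -p /tmp/hl-demo/daily
echo "content" > /tmp/hl-demo/daily/file.txt
cp -al /tmp/hl-demo/daily /tmp/hl-demo/weekly

# Same inode number for both paths: one set of data blocks on disk
stat -c %i /tmp/hl-demo/daily/file.txt /tmp/hl-demo/weekly/file.txt
```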
Keep 4 weekly backups (one month):
```cron
# Add to crontab
35 3 * * 0 ls -1dt /opt/backups/weekly/* | tail -n +5 | xargs -r rm -rf
```
## The restore test — do not skip this
A backup that has never been tested is not a backup — it is an assumption. Once your automation is running, perform a restore test:
File restore:
```bash
cp /opt/backups/daily/2026-04-09-0215/html/index.php /tmp/restore-test.php
diff /tmp/restore-test.php /var/www/html/index.php
```

(The backup script copies the contents of `/var/www` into each snapshot, so the restored path starts at `html/`, not `var/www/html/`.)
Database restore (on staging, never production directly):
```bash
gunzip < /opt/backups/daily/db-2026-04-09.sql.gz | mysql -u root -p
```
Confirm your application boots correctly after a database restore. Run this test quarterly at minimum — every month if your application is business-critical.
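For file backups, a checksum comparison can supplement the manual spot-check. The sketch below uses throwaway directories; on a real server you would point it at a daily snapshot and the live tree. `-n` makes it a dry run, `-c` forces checksum comparison, and `-i` itemizes any differences.

```shell
# Throwaway demo: checksum-compare a backup against its source
mkdir -p /tmp/verify/src /tmp/verify/backup
echo "same" > /tmp/verify/src/f.txt
cp -p /tmp/verify/src/f.txt /tmp/verify/backup/f.txt   # -p preserves timestamps

# Dry run, checksum-based, itemized: matching files produce no itemized line
rsync -anci /tmp/verify/backup/ /tmp/verify/src/
```

On a real server the equivalent is `rsync -anci /opt/backups/daily/<stamp>/ /var/www/`; no itemized file lines means the snapshot still matches the live tree, which you should only expect immediately after a backup run.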
## Off-server backups matter
Backups stored only on the same VPS protect you from application errors and accidental deletions. They do not protect you from hardware failure, datacenter incidents, or account-level compromises.
For full protection, replicate backups to an off-server location:
```bash
# Sync to a remote backup server via SSH
rsync -az --delete /opt/backups/ [email protected]:/backups/yourserver/
```
Cloud object storage (like S3-compatible services) is another option — cheaper for storage, accessible from anywhere, and independent of your VPS provider.
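As a hypothetical example, assuming the AWS CLI is installed and configured with credentials (the bucket name below is a placeholder), a nightly push to an S3-compatible bucket could be scheduled as:

```cron
# Hypothetical: push backups to object storage nightly at 4:00 AM
0 4 * * * aws s3 sync /opt/backups s3://your-backup-bucket/yourserver/ --delete
```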
## Final recommendation
Automate first, then refine. A simple daily rsync + cron setup running and tested is worth far more than a complex backup architecture that exists only as a plan. Get the basics running today, verify the restore works, then add off-server replication when you are ready.
The day you need a backup is always the worst day to discover it did not run.