Backup Strategy Implementation - Complete Guide
Published: September 25, 2024 | Reading time: 20 minutes
Backup Strategy Overview
A comprehensive backup strategy ensures data protection and business continuity:
Backup Benefits
- Data protection
- Disaster recovery
- Business continuity
- Compliance requirements
- Version control
- Point-in-time recovery
- Risk mitigation
Backup Types
Backup Categories
- Full backup
- Incremental backup
- Differential backup
- Mirror backup
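The practical difference between these types is easiest to see with a single tool. Below is a minimal illustration using GNU tar's --listed-incremental option (the paths are placeholders; the database and rsync sections later in this guide use their own mechanisms):
# First run: the snapshot file does not exist yet, so this produces a full (level-0) backup
tar --listed-incremental=/var/backups/www.snar -czf /var/backups/www_full.tar.gz /var/www
# Subsequent runs with the same snapshot file archive only what changed since the last run (incremental)
tar --listed-incremental=/var/backups/www.snar -czf /var/backups/www_incr_$(date +%Y%m%d).tar.gz /var/www
# For a differential backup, restore a saved copy of the level-0 snapshot file before each run,
# so every archive contains all changes since the full backup
cp /var/backups/www.snar.level0 /var/backups/www.snar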
Storage Locations
- Local storage
- Network storage
- Cloud storage
- Offsite storage
Database Backups
MySQL Backup Strategy
MySQL Backup Implementation
# MySQL Backup Script
#!/bin/bash
# mysql-backup.sh
# Configuration
DB_HOST="localhost"
DB_USER="backup_user"
DB_PASS="secure_password"
DB_NAME="myapp"
BACKUP_DIR="/var/backups/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
# Create backup directory
mkdir -p $BACKUP_DIR
# Full backup (note: -p<password> on the command line is visible in the process list; prefer ~/.my.cnf or mysql_config_editor in production)
mysqldump -h $DB_HOST -u $DB_USER -p$DB_PASS \
--single-transaction \
--routines \
--triggers \
--events \
--hex-blob \
--add-drop-database \
--add-drop-table \
$DB_NAME > $BACKUP_DIR/full_backup_$DATE.sql
# Compress backup
gzip $BACKUP_DIR/full_backup_$DATE.sql
# Incremental backup (binary log); match only numbered binlog files, not the .index file
mysqlbinlog --start-datetime="$(date -d '1 day ago' '+%Y-%m-%d %H:%M:%S')" \
/var/log/mysql/mysql-bin.[0-9]* > $BACKUP_DIR/incremental_backup_$DATE.sql
# Compress incremental backup
gzip $BACKUP_DIR/incremental_backup_$DATE.sql
# Upload to cloud storage
aws s3 cp $BACKUP_DIR/full_backup_$DATE.sql.gz s3://my-backup-bucket/mysql/
aws s3 cp $BACKUP_DIR/incremental_backup_$DATE.sql.gz s3://my-backup-bucket/mysql/
# Clean old backups
find $BACKUP_DIR -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
# Verify backup
if [ -f "$BACKUP_DIR/full_backup_$DATE.sql.gz" ]; then
echo "Backup completed successfully: full_backup_$DATE.sql.gz"
else
echo "Backup failed!"
exit 1
fi
# Set up cron job
# 0 2 * * * /path/to/mysql-backup.sh
# MySQL Configuration for Backups
# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
# Enable binary logging
log-bin = /var/log/mysql/mysql-bin.log
binlog-format = ROW
expire_logs_days = 7
max_binlog_size = 100M
# Backup user creation (run in the mysql client)
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'secure_password';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backup_user'@'localhost';
-- On MySQL 8.0.21+ mysqldump also needs the PROCESS privilege (or add --no-tablespaces to the dump command)
FLUSH PRIVILEGES;
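The full dump and the binary logs only pay off if they can be replayed. Here is a point-in-time recovery sketch; the file names, database name, and cut-off time are placeholders, and it should be rehearsed against a non-production server:
# 1. Restore the most recent full dump
gunzip < /var/backups/mysql/full_backup_20240101_020000.sql.gz | mysql -u root -p myapp
# 2. Replay binary log events recorded after the dump, stopping just before the incident
mysqlbinlog --start-datetime="2024-01-01 02:00:00" \
--stop-datetime="2024-01-01 14:59:00" \
/var/log/mysql/mysql-bin.[0-9]* | mysql -u root -p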
PostgreSQL Backup Strategy
PostgreSQL Backup Implementation
# PostgreSQL Backup Script
#!/bin/bash
# postgres-backup.sh
# Configuration
DB_HOST="localhost"
DB_USER="postgres"
DB_NAME="myapp"
BACKUP_DIR="/var/backups/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
# Create backup directory
mkdir -p $BACKUP_DIR
# Full backup using pg_dump
pg_dump -h $DB_HOST -U $DB_USER \
--verbose \
--clean \
--no-owner \
--no-privileges \
--format=custom \
--file=$BACKUP_DIR/full_backup_$DATE.dump \
$DB_NAME
# Custom-format dumps (--format=custom) are already compressed; no separate gzip step is needed
# WAL archiving requires the following settings in postgresql.conf (not part of this script):
#   wal_level = replica
#   archive_mode = on
#   archive_command = 'cp %p /var/backups/postgresql/wal/%f'
# Create WAL directory
mkdir -p $BACKUP_DIR/wal
# Base backup
pg_basebackup -h $DB_HOST -U $DB_USER \
-D $BACKUP_DIR/base_backup_$DATE \
-Ft \
-z \
-P
# Upload to cloud storage (note: --delete mirrors local deletions to the bucket, so cloud retention follows local retention)
aws s3 sync $BACKUP_DIR/ s3://my-backup-bucket/postgresql/ --delete
# Clean old backups
find $BACKUP_DIR -name "*.dump" -mtime +$RETENTION_DAYS -delete
find $BACKUP_DIR -maxdepth 1 -type d -name "base_backup_*" -mtime +$RETENTION_DAYS -exec rm -rf {} +
# Verify backup
if [ -f "$BACKUP_DIR/full_backup_$DATE.dump" ]; then
echo "PostgreSQL backup completed successfully: full_backup_$DATE.dump"
else
echo "PostgreSQL backup failed!"
exit 1
fi
# Recovery testing
# pg_restore -h localhost -U postgres -d test_db /var/backups/postgresql/full_backup_20240101_020000.dump
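Restoring the custom-format dump covers logical recovery; recovering to a specific moment uses the base backup plus the archived WAL instead. A rough sketch for PostgreSQL 12 or later follows; the data directory path, backup name, and target time are assumptions, so rehearse it on a test host first:
# Point-in-time recovery from pg_basebackup output plus archived WAL
systemctl stop postgresql
DATA_DIR=/var/lib/postgresql/15/main   # adjust to the installed version
mv $DATA_DIR ${DATA_DIR}.old && mkdir -p $DATA_DIR
tar -xzf /var/backups/postgresql/base_backup_20240101_020000/base.tar.gz -C $DATA_DIR
tar -xzf /var/backups/postgresql/base_backup_20240101_020000/pg_wal.tar.gz -C $DATA_DIR/pg_wal
# Tell PostgreSQL where to fetch archived WAL and when to stop replaying
cat >> $DATA_DIR/postgresql.auto.conf <<'EOF'
restore_command = 'cp /var/backups/postgresql/wal/%f %p'
recovery_target_time = '2024-01-01 14:59:00'
EOF
touch $DATA_DIR/recovery.signal
chown -R postgres:postgres $DATA_DIR && chmod 700 $DATA_DIR
systemctl start postgresql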
File System Backups
rsync Backup Strategy
rsync Backup Implementation
# rsync Backup Script
#!/bin/bash
# rsync-backup.sh
# Configuration
SOURCE_DIR="/var/www"
BACKUP_DIR="/var/backups/filesystem"
REMOTE_HOST="backup-server.example.com"
REMOTE_USER="backup"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
# Create backup directory
mkdir -p $BACKUP_DIR
# Local backup
rsync -avz \
--delete \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude="node_modules/" \
--exclude=".git/" \
$SOURCE_DIR/ $BACKUP_DIR/current/
# Create snapshot
cp -al $BACKUP_DIR/current $BACKUP_DIR/snapshot_$DATE
# Remote backup
rsync -avz \
--delete \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude="node_modules/" \
--exclude=".git/" \
-e "ssh -i /home/backup/.ssh/backup_key" \
$SOURCE_DIR/ $REMOTE_USER@$REMOTE_HOST:/backups/filesystem/
# Cloud backup using rclone
rclone sync $SOURCE_DIR/ remote:backups/filesystem/ \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude="node_modules/" \
--exclude=".git/" \
--progress \
--log-level INFO
# Clean old snapshots
find $BACKUP_DIR -maxdepth 1 -type d -name "snapshot_*" -mtime +$RETENTION_DAYS -exec rm -rf {} +
# Verify backup
if [ -d "$BACKUP_DIR/snapshot_$DATE" ]; then
echo "File system backup completed successfully: snapshot_$DATE"
else
echo "File system backup failed!"
exit 1
fi
# Alternative: rsync with hard links (--link-dest) for space efficiency; in this variant "current" is a symlink to the newest backup rather than a separate copy
rsync -avz \
--link-dest=$BACKUP_DIR/current \
--exclude="*.log" \
--exclude="*.tmp" \
$SOURCE_DIR/ $BACKUP_DIR/backup_$DATE/
# Update current symlink
ln -sfn $BACKUP_DIR/backup_$DATE $BACKUP_DIR/current
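Snapshots are only useful if you can compare and restore them. A short verify-then-restore sketch (the snapshot name is a placeholder):
# Dry run: report differences between a snapshot and the live tree without changing anything
rsync -avzn --checksum --delete /var/backups/filesystem/snapshot_20240101_020000/ /var/www/
# Restore the snapshot back into place once the differences look right
rsync -avz /var/backups/filesystem/snapshot_20240101_020000/ /var/www/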
Cloud Backup Solutions
AWS S3 Backup
AWS S3 Backup Implementation
# AWS S3 Backup Script
#!/bin/bash
# aws-s3-backup.sh
# Configuration
BUCKET_NAME="my-backup-bucket"
BACKUP_DIR="/var/backups"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
# Create backup archive
tar -czf $BACKUP_DIR/backup_$DATE.tar.gz \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude="node_modules/" \
/var/www /etc /home
# Upload to S3
aws s3 cp $BACKUP_DIR/backup_$DATE.tar.gz s3://$BUCKET_NAME/backups/
# Set lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
--bucket $BUCKET_NAME \
--lifecycle-configuration '{
"Rules": [
{
"ID": "BackupLifecycle",
"Status": "Enabled",
"Filter": {
"Prefix": "backups/"
},
"Transitions": [
{
"Days": 30,
"StorageClass": "STANDARD_IA"
},
{
"Days": 90,
"StorageClass": "GLACIER"
},
{
"Days": 365,
"StorageClass": "DEEP_ARCHIVE"
}
],
"Expiration": {
"Days": 2555
}
}
]
}'
# Enable versioning
aws s3api put-bucket-versioning \
--bucket $BUCKET_NAME \
--versioning-configuration Status=Enabled
# Enable server-side encryption
aws s3api put-bucket-encryption \
--bucket $BUCKET_NAME \
--server-side-encryption-configuration '{
"Rules": [
{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}
]
}'
# Cross-region replication
aws s3api put-bucket-replication \
--bucket $BUCKET_NAME \
--replication-configuration '{
"Role": "arn:aws:iam::account:role/replication-role",
"Rules": [
{
"ID": "ReplicateToSecondaryRegion",
"Status": "Enabled",
"Prefix": "backups/",
"Destination": {
"Bucket": "arn:aws:s3:::my-backup-bucket-secondary",
"StorageClass": "STANDARD"
}
}
]
}'
# Clean local backup
rm $BACKUP_DIR/backup_$DATE.tar.gz
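Restores run in the opposite direction, and objects that have already transitioned to Glacier or Deep Archive must be temporarily restored before they can be downloaded. A hedged sketch using the bucket and file names from above:
# Download and unpack a specific archive
aws s3 cp s3://my-backup-bucket/backups/backup_20240101_020000.tar.gz /var/recovery/
tar -xzf /var/recovery/backup_20240101_020000.tar.gz -C /var/recovery/
# For objects already in GLACIER or DEEP_ARCHIVE, request a temporary restore first and poll until it completes
aws s3api restore-object --bucket my-backup-bucket \
--key backups/backup_20240101_020000.tar.gz \
--restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'
aws s3api head-object --bucket my-backup-bucket --key backups/backup_20240101_020000.tar.gz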
Google Cloud Storage Backup
GCP Storage Backup
# Google Cloud Storage Backup
#!/bin/bash
# gcp-backup.sh
# Configuration
BUCKET_NAME="my-backup-bucket"
BACKUP_DIR="/var/backups"
DATE=$(date +%Y%m%d_%H%M%S)
# Create backup archive
tar -czf $BACKUP_DIR/backup_$DATE.tar.gz \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude="node_modules/" \
/var/www /etc /home
# Upload to GCS
gsutil cp $BACKUP_DIR/backup_$DATE.tar.gz gs://$BUCKET_NAME/backups/
# Set lifecycle policy: write lifecycle.json, then apply it
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 90}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 365}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 2555}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://$BUCKET_NAME
# Enable versioning
gsutil versioning set on gs://$BUCKET_NAME
# Set default encryption
gsutil kms encryption \
-k projects/my-project/locations/global/keyRings/my-key-ring/cryptoKeys/my-key \
gs://$BUCKET_NAME
# Note: gsutil cors configures CORS headers, not replication. For cross-region redundancy,
# create the bucket as dual-region or multi-region, or copy objects with Storage Transfer Service.
# Clean local backup
rm $BACKUP_DIR/backup_$DATE.tar.gz
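Before the local archive is removed (the final rm above), it is worth confirming that the uploaded object matches it. A minimal integrity-check sketch using gsutil's MD5 output; the file names are the placeholders used above, and the check assumes the object was not uploaded as a parallel composite upload (those carry no MD5):
# Compare local and remote MD5 before deleting the local copy (insert just before the final rm)
LOCAL_MD5=$(gsutil hash -m $BACKUP_DIR/backup_$DATE.tar.gz | awk '/Hash \(md5\)/ {print $3}')
REMOTE_MD5=$(gsutil stat gs://$BUCKET_NAME/backups/backup_$DATE.tar.gz | awk '/Hash \(md5\)/ {print $3}')
if [ "$LOCAL_MD5" = "$REMOTE_MD5" ]; then
echo "Upload verified"
else
echo "MD5 mismatch - keep the local copy and investigate"
exit 1
fi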
Backup Automation
Automated Backup System
Backup Automation Setup
# Comprehensive Backup System
#!/bin/bash
# backup-system.sh
# Configuration
BACKUP_DIR="/var/backups"
LOG_FILE="/var/log/backup.log"
EMAIL="admin@example.com"
DB_PASS="secure_password"   # used by the MySQL dump below; prefer ~/.my.cnf in production
DATE=$(date +%Y%m%d_%H%M%S)
# Logging function
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG_FILE
}
# Error handling
error_exit() {
log "ERROR: $1"
echo "Backup failed: $1" | mail -s "Backup Failed" $EMAIL
exit 1
}
# Success notification
success_notification() {
log "SUCCESS: $1"
echo "Backup completed successfully: $1" | mail -s "Backup Success" $EMAIL
}
# Database backup
backup_database() {
log "Starting database backup..."
# MySQL backup
if systemctl is-active --quiet mysql; then
mysqldump -u backup_user -p$DB_PASS --single-transaction --all-databases > $BACKUP_DIR/mysql_backup_$DATE.sql
gzip $BACKUP_DIR/mysql_backup_$DATE.sql
log "MySQL backup completed"
fi
# PostgreSQL backup
if systemctl is-active --quiet postgresql; then
pg_dumpall -U postgres > $BACKUP_DIR/postgresql_backup_$DATE.sql
gzip $BACKUP_DIR/postgresql_backup_$DATE.sql
log "PostgreSQL backup completed"
fi
}
# File system backup
backup_filesystem() {
log "Starting file system backup..."
# Create tar archive
tar -czf $BACKUP_DIR/filesystem_backup_$DATE.tar.gz \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude="node_modules/" \
--exclude=".git/" \
/var/www /etc /home /opt
log "File system backup completed"
}
# Cloud upload
upload_to_cloud() {
log "Uploading backups to cloud..."
# AWS S3 upload
if command -v aws &> /dev/null; then
aws s3 sync $BACKUP_DIR/ s3://my-backup-bucket/backups/ --delete
log "AWS S3 upload completed"
fi
# Google Cloud Storage upload
if command -v gsutil &> /dev/null; then
gsutil -m rsync -r $BACKUP_DIR/ gs://my-backup-bucket/backups/
log "GCS upload completed"
fi
}
# Cleanup old backups
cleanup_backups() {
log "Cleaning up old backups..."
# Keep backups for 30 days
find $BACKUP_DIR -name "*.sql.gz" -mtime +30 -delete
find $BACKUP_DIR -name "*.tar.gz" -mtime +30 -delete
log "Cleanup completed"
}
# Verify backup integrity
verify_backup() {
log "Verifying backup integrity..."
# Check if backup files exist and are not empty
for file in $BACKUP_DIR/*_$DATE.*; do
if [ ! -s "$file" ]; then
error_exit "Backup file $file is empty or missing"
fi
done
log "Backup verification completed"
}
# Main backup process
main() {
log "Starting backup process..."
# Create backup directory
mkdir -p $BACKUP_DIR
# Run backup tasks
backup_database || error_exit "Database backup failed"
backup_filesystem || error_exit "File system backup failed"
verify_backup || error_exit "Backup verification failed"
upload_to_cloud || error_exit "Cloud upload failed"
cleanup_backups || error_exit "Cleanup failed"
success_notification "All backups completed successfully"
log "Backup process completed"
}
# Run main function
main "$@"
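The script still has to be scheduled. A single cron entry (for example 0 2 * * * /usr/local/bin/backup-system.sh) is enough; the sketch below shows an equivalent systemd service/timer pair, where the unit names and the script path are assumptions:
# /etc/systemd/system/backup-system.service
[Unit]
Description=Nightly backup run
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-system.sh
# /etc/systemd/system/backup-system.timer
[Unit]
Description=Run backup-system.sh daily at 02:00
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
WantedBy=timers.target
# Enable the timer
# systemctl daemon-reload && systemctl enable --now backup-system.timer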
Disaster Recovery
Recovery Procedures
Disaster Recovery Plan
# Disaster Recovery Script
#!/bin/bash
# disaster-recovery.sh
# Configuration
BACKUP_DIR="/var/backups"
RECOVERY_DIR="/var/recovery"
LOG_FILE="/var/log/recovery.log"
# Logging function
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG_FILE
}
# Database recovery
recover_database() {
log "Starting database recovery..."
# MySQL recovery
if [ -f "$BACKUP_DIR/mysql_backup_$1.sql.gz" ]; then
gunzip -c $BACKUP_DIR/mysql_backup_$1.sql.gz | mysql -u root -p
log "MySQL recovery completed"
fi
# PostgreSQL recovery
if [ -f "$BACKUP_DIR/postgresql_backup_$1.sql.gz" ]; then
gunzip -c $BACKUP_DIR/postgresql_backup_$1.sql.gz | psql -U postgres
log "PostgreSQL recovery completed"
fi
}
# File system recovery
recover_filesystem() {
log "Starting file system recovery..."
if [ -f "$BACKUP_DIR/filesystem_backup_$1.tar.gz" ]; then
# Create recovery directory
mkdir -p $RECOVERY_DIR
# Extract backup
tar -xzf $BACKUP_DIR/filesystem_backup_$1.tar.gz -C $RECOVERY_DIR
# Restore files
cp -r $RECOVERY_DIR/var/www/* /var/www/
cp -r $RECOVERY_DIR/etc/* /etc/
cp -r $RECOVERY_DIR/home/* /home/
log "File system recovery completed"
fi
}
# Cloud recovery
recover_from_cloud() {
log "Starting cloud recovery..."
# Download from AWS S3
if command -v aws &> /dev/null; then
aws s3 sync s3://my-backup-bucket/backups/ $BACKUP_DIR/
log "AWS S3 recovery completed"
fi
# Download from Google Cloud Storage
if command -v gsutil &> /dev/null; then
gsutil -m rsync -r gs://my-backup-bucket/backups/ $BACKUP_DIR/
log "GCS recovery completed"
fi
}
# Service restart
restart_services() {
log "Restarting services..."
# Restart web server
systemctl restart nginx
systemctl restart apache2
# Restart database services
systemctl restart mysql
systemctl restart postgresql
# Restart application services
systemctl restart myapp
log "Services restarted"
}
# Main recovery process
main() {
local backup_date=$1
if [ -z "$backup_date" ]; then
echo "Usage: $0 "
echo "Example: $0 20240101_020000"
exit 1
fi
log "Starting disaster recovery process for backup: $backup_date"
# Create recovery directory
mkdir -p $RECOVERY_DIR
# Run recovery tasks
recover_from_cloud || log "Cloud recovery failed"
recover_database $backup_date || log "Database recovery failed"
recover_filesystem $backup_date || log "File system recovery failed"
restart_services || log "Service restart failed"
log "Disaster recovery process completed"
}
# Run main function
main "$@"
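A recovery script is only trustworthy if it is rehearsed, and rehearsals should not touch production. One option is to load the latest dump into a throwaway container; the sketch below assumes Docker is installed and uses placeholder file names:
# Restore test in a disposable MySQL container
docker run -d --name restore-test -e MYSQL_ROOT_PASSWORD=test mysql:8.0
sleep 30   # give the server time to initialise
gunzip -c /var/backups/mysql_backup_20240101_020000.sql.gz | \
docker exec -i restore-test mysql -u root -ptest
# Sanity check, then discard the container
docker exec restore-test mysql -u root -ptest -e "SHOW DATABASES;"
docker rm -f restore-test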
Backup Monitoring
Backup Monitoring System
Backup Monitoring Setup
# Backup Monitoring Script
#!/bin/bash
# backup-monitor.sh
# Configuration
BACKUP_DIR="/var/backups"
LOG_FILE="/var/log/backup-monitor.log"
EMAIL="admin@example.com"
ALERT_THRESHOLD=24 # hours
# Logging function
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG_FILE
}
# Check backup age
check_backup_age() {
local backup_file=$1
local max_age_hours=$2
if [ ! -f "$backup_file" ]; then
return 1
fi
local file_age=$(($(date +%s) - $(stat -c %Y "$backup_file")))
local max_age_seconds=$((max_age_hours * 3600))
if [ $file_age -gt $max_age_seconds ]; then
return 1
fi
return 0
}
# Check backup size
check_backup_size() {
local backup_file=$1
local min_size_mb=$2
if [ ! -f "$backup_file" ]; then
return 1
fi
local file_size=$(stat -c %s "$backup_file")
local min_size_bytes=$((min_size_mb * 1024 * 1024))
if [ $file_size -lt $min_size_bytes ]; then
return 1
fi
return 0
}
# Send alert
send_alert() {
local subject=$1
local message=$2
echo "$message" | mail -s "$subject" $EMAIL
log "Alert sent: $subject"
}
# Monitor database backups
monitor_database_backups() {
log "Monitoring database backups..."
# Check MySQL backup
local mysql_backup=$(ls -t $BACKUP_DIR/mysql_backup_*.sql.gz 2>/dev/null | head -1)
if [ -n "$mysql_backup" ]; then
if ! check_backup_age "$mysql_backup" $ALERT_THRESHOLD; then
send_alert "MySQL Backup Alert" "MySQL backup is older than $ALERT_THRESHOLD hours: $mysql_backup"
fi
if ! check_backup_size "$mysql_backup" 1; then
send_alert "MySQL Backup Alert" "MySQL backup is too small: $mysql_backup"
fi
else
send_alert "MySQL Backup Alert" "No MySQL backup found"
fi
# Check PostgreSQL backup
local postgresql_backup=$(ls -t $BACKUP_DIR/postgresql_backup_*.sql.gz 2>/dev/null | head -1)
if [ -n "$postgresql_backup" ]; then
if ! check_backup_age "$postgresql_backup" $ALERT_THRESHOLD; then
send_alert "PostgreSQL Backup Alert" "PostgreSQL backup is older than $ALERT_THRESHOLD hours: $postgresql_backup"
fi
if ! check_backup_size "$postgresql_backup" 1; then
send_alert "PostgreSQL Backup Alert" "PostgreSQL backup is too small: $postgresql_backup"
fi
else
send_alert "PostgreSQL Backup Alert" "No PostgreSQL backup found"
fi
}
# Monitor file system backups
monitor_filesystem_backups() {
log "Monitoring file system backups..."
local filesystem_backup=$(ls -t $BACKUP_DIR/filesystem_backup_*.tar.gz 2>/dev/null | head -1)
if [ -n "$filesystem_backup" ]; then
if ! check_backup_age "$filesystem_backup" $ALERT_THRESHOLD; then
send_alert "File System Backup Alert" "File system backup is older than $ALERT_THRESHOLD hours: $filesystem_backup"
fi
if ! check_backup_size "$filesystem_backup" 100; then
send_alert "File System Backup Alert" "File system backup is too small: $filesystem_backup"
fi
else
send_alert "File System Backup Alert" "No file system backup found"
fi
}
# Monitor cloud backups
monitor_cloud_backups() {
log "Monitoring cloud backups..."
# Check AWS S3
if command -v aws &> /dev/null; then
local s3_count=$(aws s3 ls s3://my-backup-bucket/backups/ --recursive | wc -l)
if [ $s3_count -eq 0 ]; then
send_alert "Cloud Backup Alert" "No backups found in AWS S3"
fi
fi
# Check Google Cloud Storage
if command -v gsutil &> /dev/null; then
local gcs_count=$(gsutil ls gs://my-backup-bucket/backups/ | wc -l)
if [ $gcs_count -eq 0 ]; then
send_alert "Cloud Backup Alert" "No backups found in Google Cloud Storage"
fi
fi
}
# Main monitoring process
main() {
log "Starting backup monitoring..."
monitor_database_backups
monitor_filesystem_backups
monitor_cloud_backups
log "Backup monitoring completed"
}
# Run main function
main "$@"
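Email alerts tell you when something is already wrong; exporting a metric lets dashboards track backup age over time. A small addition that publishes the newest file system backup's age through the node_exporter textfile collector (the collector directory is an assumption and must match node_exporter's --collector.textfile.directory flag):
# Publish backup age as a Prometheus metric (append to backup-monitor.sh)
TEXTFILE_DIR="/var/lib/node_exporter/textfile_collector"
latest=$(ls -t $BACKUP_DIR/filesystem_backup_*.tar.gz 2>/dev/null | head -1)
if [ -n "$latest" ]; then
age_seconds=$(($(date +%s) - $(stat -c %Y "$latest")))
cat > $TEXTFILE_DIR/backup.prom <<EOF
# HELP backup_filesystem_age_seconds Age of the newest file system backup
# TYPE backup_filesystem_age_seconds gauge
backup_filesystem_age_seconds $age_seconds
EOF
fi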
Best Practices
Backup Strategy
Backup Best Practices
- Follow the 3-2-1 rule (three copies of data, on two different media, one copy offsite)
- Test recovery procedures
- Monitor backup success
- Use encryption
- Implement retention policies
- Document procedures
- Regular testing
Common Mistakes
- No backup testing
- Single backup location
- No monitoring
- Poor retention policies
- No encryption
- Inadequate documentation
- No disaster recovery plan
Summary
Backup strategy implementation involves several key components:
- Backup Types: Full, incremental, differential backups
- Database Backups: MySQL, PostgreSQL backup strategies
- File System: rsync, tar-based backups
- Cloud Storage: AWS S3, Google Cloud Storage
- Automation: Automated backup systems
- Disaster Recovery: Recovery procedures, testing
- Monitoring: Backup monitoring, alerting
- Best Practices: 3-2-1 rule, testing, documentation
Need More Help?
Struggling with backup strategy implementation or need help setting up disaster recovery? Our infrastructure experts can help you implement robust backup solutions.
Get Backup Help