“Scheduled Database Backups with n8n” — self-hosted automation for WordPress/Laravel sites

Picture this: It’s 2 AM and your WordPress site’s database gets corrupted. Or maybe a Laravel application update goes wrong and overwrites critical data. Without recent backups, you’re looking at hours or days of lost work and angry customers. The worst part? This disaster could have been prevented with automated database backups that run quietly in the background.

This guide shows you how to set up reliable, automated database backups using n8n—a powerful, self-hosted automation platform. You’ll learn multiple approaches to backing up WordPress and Laravel databases, from simple daily snapshots to sophisticated multi-location backup strategies. Each method includes clear implementation steps and practical tips from real-world deployments.

Who This Guide Is For: Developers and site administrators managing WordPress or Laravel applications who want to implement reliable backup automation without relying on expensive third-party services.

Method 1: Basic MySQL Dump Automation

The MySQL dump approach creates complete database snapshots at scheduled intervals. This method works well for smaller databases (under 1GB) and provides straightforward restore options when needed.

How to implement it:

  1. Install n8n on your server using Docker: docker run -d --restart unless-stopped --name n8n -p 5678:5678 -v ~/.n8n:/home/node/.n8n n8nio/n8n (run detached with -d so the scheduler keeps running after you close your terminal; -it --rm is fine for a first interactive test)
  2. Create a new workflow and add a Cron node set to your desired schedule (e.g., “0 2 * * *” for 2 AM daily)
  3. Add an Execute Command node with this mysqldump command:
    mysqldump -u [username] -p[password] [database_name] > /backups/db_$(date +%Y%m%d_%H%M%S).sql
  4. Connect a Compression node to create a .gz file from the SQL dump
  5. Add your preferred storage node (FTP, S3, Google Drive) to save the compressed backup
  6. Include an Email node to send success/failure notifications
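
Steps 3 and 4 can be collapsed into a single Execute Command script. A minimal sketch, assuming DB_USER, DB_PASS, and DB_NAME are injected as environment variables from n8n's credential manager (the names are placeholders):

```shell
#!/bin/sh
# Dump, compress, and timestamp in one step. DB_USER/DB_PASS/DB_NAME are
# placeholders -- inject them from n8n's credential manager, don't hardcode.
set -eu
STAMP=$(date +%Y%m%d_%H%M%S)
OUT="${BACKUP_DIR:-/backups}/db_${STAMP}.sql.gz"
if [ -n "${DB_USER:-}" ] && command -v mysqldump >/dev/null 2>&1; then
  # --single-transaction keeps InnoDB reads consistent without locking tables
  mysqldump --single-transaction -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" \
    | gzip > "$OUT"
  echo "backup written to $OUT"
else
  echo "mysqldump or credentials unavailable; sketch only (would write $OUT)" >&2
fi
```

Piping straight into gzip avoids keeping an uncompressed copy on disk, which also removes the need for a separate Compression node if you prefer a single command.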

Tips:

  • Store database credentials in n8n’s credential manager rather than hardcoding them
  • Add --single-transaction flag for InnoDB tables to ensure data consistency
  • Test restore procedures monthly to verify backup integrity
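
The restore-testing tip can itself be scripted. A hedged sketch that checks archive integrity and reloads the newest dump into a throwaway database named restore_test (an assumed name):

```shell
#!/bin/sh
# Verify the newest backup archive, then restore it into a scratch database.
# RESTORE target name and credentials are illustrative placeholders.
set -eu
LATEST=$(ls -1t "${BACKUP_DIR:-/backups}"/db_*.sql.gz 2>/dev/null | head -n1 || true)
if [ -n "$LATEST" ]; then
  gunzip -t "$LATEST"   # integrity check: exits nonzero on a corrupt archive
  if [ -n "${DB_USER:-}" ] && command -v mysql >/dev/null 2>&1; then
    mysql -u "$DB_USER" -p"$DB_PASS" -e "CREATE DATABASE IF NOT EXISTS restore_test"
    gunzip -c "$LATEST" | mysql -u "$DB_USER" -p"$DB_PASS" restore_test
    echo "restore test passed for $LATEST"
  fi
else
  echo "no backups found to test" >&2
fi
```

Wired into a monthly n8n workflow, a nonzero exit from either check fails the Execute Command node and triggers your failure notification.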

What to expect: This setup typically completes backups in 2-5 minutes for databases under 500MB. You’ll have daily snapshots stored securely off-site, with email alerts confirming each successful backup or warning of any failures.

Method 2: WordPress-Specific Backup with WP-CLI

WP-CLI provides WordPress-aware backup capabilities that handle database exports alongside plugin and theme configurations. This method ensures you capture WordPress-specific data structures correctly.

How to implement it:

  1. Ensure WP-CLI is installed on your server (curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar, then chmod +x wp-cli.phar and sudo mv wp-cli.phar /usr/local/bin/wp)
  2. In n8n, create a workflow starting with a Schedule Trigger node
  3. Add an Execute Command node with:
    cd /var/www/wordpress && wp db export --add-drop-table /backups/wp_backup_$(date +%Y%m%d).sql
  4. Add a second Execute Command node to backup wp-content:
    tar -czf /backups/wp_content_$(date +%Y%m%d).tar.gz /var/www/wordpress/wp-content/
  5. Use an S3 or FTP node to upload both files to remote storage
  6. Configure a Webhook node to send status updates to Slack or Discord
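
For step 6, the Slack status update is a plain JSON POST to an incoming-webhook URL. A sketch, where SLACK_WEBHOOK_URL is an assumption you supply yourself:

```shell
#!/bin/sh
# Post a backup status message to Slack. SLACK_WEBHOOK_URL is a placeholder
# for your own incoming-webhook URL; without it, the script does a dry run.
set -eu
STATUS="${1:-success}"
PAYLOAD=$(printf '{"text": "WordPress backup %s at %s"}' \
  "$STATUS" "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
  curl -fsS -X POST -H 'Content-Type: application/json' \
    -d "$PAYLOAD" "$SLACK_WEBHOOK_URL"
else
  echo "$PAYLOAD"   # dry run: show what would be sent
fi
```

The same payload shape works from n8n's HTTP Request node if you prefer configuring it graphically instead of shelling out.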

Tips:

  • Exclude cache directories from wp-content backups to reduce file size
  • Run wp db optimize before export to clean up overhead
  • Consider using --exclude-tables for temporary data tables

Code example for excluding cache directories:

tar -czf backup.tar.gz --exclude='*/cache/*' --exclude='*/w3tc-config/*' /var/www/wordpress/wp-content/

What to expect: WordPress backups including both database and file content typically range from 100MB to several GB. The process runs unattended and includes both your content and configuration, making full site restoration straightforward.


Method 3: Laravel Backup Package Integration

Laravel applications benefit from using the spatie/laravel-backup package combined with n8n orchestration. This approach leverages Laravel’s built-in features while adding powerful automation capabilities.

How to implement it:

  1. Install the backup package in your Laravel project: composer require spatie/laravel-backup
  2. Publish and configure the backup config: php artisan vendor:publish --provider="Spatie\Backup\BackupServiceProvider"
  3. In n8n, create an HTTP Request node that triggers your Laravel backup endpoint
  4. Set up the endpoint in routes/api.php:
    Route::post('/trigger-backup', function () {
        Artisan::call('backup:run');
        return response()->json(['status' => 'backup started']);
    });
  5. Add authentication to the HTTP Request node using API tokens
  6. Create conditional nodes in n8n to handle success/failure scenarios
  7. Connect notification nodes for monitoring backup status
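
Step 3's HTTP Request node is equivalent to the following curl call. APP_URL and API_TOKEN are placeholders for your own values, and the bearer-token scheme assumes something like Laravel Sanctum guarding the route:

```shell
#!/bin/sh
# Trigger the Laravel backup endpoint. APP_URL and API_TOKEN are placeholders;
# the Authorization header assumes a token-guarded route (e.g. Sanctum).
set -eu
if [ -n "${APP_URL:-}" ] && [ -n "${API_TOKEN:-}" ]; then
  curl -fsS -X POST \
    -H "Authorization: Bearer $API_TOKEN" \
    -H "Accept: application/json" \
    "$APP_URL/api/trigger-backup"
else
  echo "set APP_URL and API_TOKEN to run this sketch" >&2
fi
```

In n8n, the same header goes into the HTTP Request node's authentication settings, with the token stored in the credential manager.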

Tips:

  • Configure multiple disk destinations in config/backup.php for redundancy
  • Set up cleanup policies to automatically remove old backups
  • Monitor backup sizes to catch unexpected growth

What to expect: Laravel’s backup package creates comprehensive snapshots including database, files, and configuration. Backups complete in 5-15 minutes depending on application size, with automatic cleanup preventing storage overflow.

Method 4: Incremental Backup Strategy

For larger databases or sites with frequent changes, incremental backups reduce storage requirements and backup windows. This method captures only changes since the last full backup.

How to implement it:

  1. Set up binary logging in MySQL by adding to my.cnf: log_bin = /var/log/mysql/mysql-bin.log
  2. Create two n8n workflows: one for weekly full backups, another for daily incrementals
  3. In the incremental workflow, use mysqlbinlog to capture changes:
    mysqlbinlog --start-datetime="$(date -d 'yesterday' +%Y-%m-%d)" /var/log/mysql/mysql-bin.* > /backups/incremental_$(date +%Y%m%d).sql
  4. Add logic nodes to track the last full backup timestamp
  5. Store metadata in a JSON file for restoration sequencing
  6. Implement automated testing of incremental restore procedures
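
Steps 3 and 4 can be sketched as one daily script. It assumes GNU date and the binary-log path from step 1; FLUSH BINARY LOGS rotates the log so the next incremental starts cleanly:

```shell
#!/bin/sh
# Capture yesterday's binlog changes, then rotate the logs.
# Paths and credentials are illustrative placeholders.
set -eu
SINCE=$(date -d 'yesterday' +%Y-%m-%d)
OUT="/backups/incremental_$(date +%Y%m%d).sql"
if [ -n "${DB_USER:-}" ] && command -v mysqlbinlog >/dev/null 2>&1; then
  mysqlbinlog --start-datetime="$SINCE" /var/log/mysql/mysql-bin.* > "$OUT"
  # rotate so the next incremental starts from a fresh log file
  mysql -u "$DB_USER" -p"$DB_PASS" -e "FLUSH BINARY LOGS"
  echo "incremental written to $OUT"
else
  echo "mysqlbinlog or credentials unavailable; would write $OUT" >&2
fi
```

Rotating immediately after a successful capture (the first tip below) keeps each incremental file's start point unambiguous.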

Tips:

  • Rotate binary logs after successful incremental backups
  • Document the restoration sequence clearly for emergency use
  • Test full restoration quarterly using backup data
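
The restoration sequence from the tips above, sketched end to end: replay the full dump first, then each incremental in date order. The date-stamped filenames mean a plain shell glob already sorts them correctly:

```shell
#!/bin/sh
# Replay a full backup followed by incrementals, oldest first.
# Database name, credentials, and paths are placeholders.
set -eu
BACKUP_DIR="${BACKUP_DIR:-/backups}"
if [ -n "${DB_USER:-}" ] && command -v mysql >/dev/null 2>&1 \
   && [ -f "$BACKUP_DIR/full.sql" ]; then
  mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$BACKUP_DIR/full.sql"
  for f in "$BACKUP_DIR"/incremental_*.sql; do
    [ -f "$f" ] || continue          # skip if the glob matched nothing
    echo "applying $f"
    mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$f"
  done
else
  echo "restore sketch: needs mysql, credentials, and $BACKUP_DIR/full.sql" >&2
fi
```

Keep a copy of this script alongside the backups themselves so the documented sequence is available even when the primary server is down.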

Code example for tracking backup metadata:

{
  "last_full_backup": "2024-01-15T02:00:00Z",
  "incremental_backups": [
    {"date": "2024-01-16", "file": "incremental_20240116.sql"},
    {"date": "2024-01-17", "file": "incremental_20240117.sql"}
  ]
}
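
A small sketch for writing and reading this metadata file from an Execute Command node using only POSIX tools (the path and layout mirror the example above):

```shell
#!/bin/sh
# Record and read backup metadata without extra tooling. Path is illustrative.
set -eu
META="${META:-./backup_meta.json}"
cat > "$META" <<'EOF'
{
  "last_full_backup": "2024-01-15T02:00:00Z",
  "incremental_backups": [
    {"date": "2024-01-16", "file": "incremental_20240116.sql"},
    {"date": "2024-01-17", "file": "incremental_20240117.sql"}
  ]
}
EOF
# pull out the full-backup timestamp to decide which incrementals to replay
last_full=$(sed -n 's/.*"last_full_backup": *"\([^"]*\)".*/\1/p' "$META")
echo "restore sequence starts from full backup taken at $last_full"
```

If jq is available on the server, it is a more robust choice than sed for this; the sed form is shown only because it needs nothing beyond a base system.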

What to expect: Incremental backups typically use 10-20% of the storage required for full backups. Daily incrementals finish in minutes, compared with the much longer window a full dump of a large database needs, while still supporting complete restoration.


Method 5: Multi-Destination Backup with Verification

Critical applications need backups stored in multiple locations with automated verification. This method implements the 3-2-1 backup rule using n8n’s parallel processing capabilities.

How to implement it:

  1. Design your n8n workflow with parallel branches after the backup creation
  2. Configure three storage destinations: local NAS, cloud storage (S3), and remote FTP
  3. After the MySQL dump node, split the workflow by connecting that node's output to each storage branch (n8n executes every connection from a single output, giving you parallel branches)
  4. Connect each branch to different storage nodes (AWS S3, Google Drive, FTP)
  5. Add verification nodes that check file sizes and checksums:
    md5sum /backups/latest.sql > /backups/latest.sql.md5
  6. Merge branches with a Merge node configured for “Wait for all inputs”
  7. Create a summary report comparing all backup locations
  8. Set up restoration tests using a staging environment
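
Step 5's checksum can be verified symmetrically: write the checksum beside the backup when it is created, then re-check it after each transfer. A minimal sketch with illustrative paths:

```shell
#!/bin/sh
# Create and verify a checksum for a backup file. Paths are illustrative;
# the fallback branch fabricates a demo file so the sketch is self-contained.
set -eu
FILE="${1:-./latest.sql}"
if [ -f "$FILE" ]; then
  md5sum "$FILE" > "$FILE.md5"
  # after downloading the copy back from each destination, re-verify it:
  md5sum -c "$FILE.md5" && echo "checksum OK for $FILE"
else
  printf 'demo backup contents\n' > ./latest.sql
  md5sum ./latest.sql > ./latest.sql.md5
  md5sum -c ./latest.sql.md5
fi
```

Running md5sum -c against a copy pulled back from each destination catches silent corruption during transfer, which a size check alone can miss.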

Tips:

  • Use different retention policies for each storage location
  • Implement bandwidth throttling for large transfers
  • Monitor storage costs and optimize compression settings

What to expect: This comprehensive approach ensures maximum data protection with backups verified across multiple locations. While the process takes longer due to multiple transfers, you’ll have bulletproof disaster recovery capabilities with automated verification confirming backup integrity.

Frequently Asked Questions

Q: How much storage space do I need for database backups?

Plan for 3-5 times your current database size to accommodate multiple backup versions. A 1GB database typically needs 3-5GB when keeping weekly fulls and daily incrementals for a month. Compression reduces this by 60-80%, so actual storage might be 1-2GB.

Q: Can n8n handle backup encryption for sensitive data?

Yes, n8n integrates with encryption tools through Execute Command nodes. Add GPG encryption after creating backups: gpg --encrypt --recipient backup@example.com backup.sql. Store encryption keys securely and test decryption regularly.

Q: What’s the best schedule for WordPress/Laravel backups?

For most sites, daily backups at 2-4 AM work well. E-commerce sites benefit from twice-daily backups (2 AM and 2 PM). Development sites might only need weekly backups. Adjust based on how much data you can afford to lose.

Q: How do I test if my backups actually work?

Create a staging environment and automate monthly restore tests. Use n8n to restore the latest backup to staging, run basic functionality checks, and report results. This catches issues before you need backups in an emergency.

Q: Should I backup n8n workflows themselves?

Absolutely. Export n8n workflows regularly using the CLI: n8n export:workflow --all --output=n8n-backup.json. Include this in your backup routine to preserve your automation infrastructure.

Q: Can I use n8n’s cloud version instead of self-hosting?

While n8n cloud works for many automations, database backups often require direct server access. Self-hosted n8n on the same network as your databases provides better security and performance for backup operations.

Conclusion

Automated database backups transform a critical but often-neglected task into a reliable, hands-off process. Whether you choose simple MySQL dumps or sophisticated incremental strategies, n8n provides the flexibility to match your specific needs. Start with Method 1 for basic protection, then gradually implement additional methods as your requirements grow. For comprehensive backup strategies, check out our automation services for more complex implementations.

Remember that the best backup system is one that runs reliably without intervention. Take time to set up monitoring and verification—your future self will thank you when disaster strikes and restoration just works. Regular testing and documentation ensure your backups serve their purpose when needed most.

Need help implementing these workflows? WorkflowDone.com specializes in automation solutions. Contact us at hi@workflowdone.com for a free consultation.

Temo Berishvili

Founder of Workflowdone.com