Your self-hosted n8n instance probably stores more critical business data than you realize. Beyond the workflows themselves, it holds encrypted API keys for Stripe, HubSpot, databases, and email providers. It stores OAuth tokens for Google, Slack, and GitHub. It contains execution history with customer data that flowed through your automations. And it all depends on a single encryption key that, if lost, renders every stored credential permanently inaccessible.
Most self-hosting guides mention backups as an afterthought — "run pg_dump once a day and you are fine." That advice is incomplete. A proper n8n backup strategy protects five distinct components, automates the process so it actually runs, stores backups off the server so a disk failure does not take your backups with it, and gets tested regularly so you know the restore process works before you need it.
This guide gives you a complete, production-grade backup system for self-hosted n8n. Every script is copy-paste ready. If you have not set up n8n yet, start with our Docker deployment guide first, then come back here to protect what you have built.
What You Need to Back Up
n8n's data is spread across five components. Missing any one of them during a restore leaves you with an incomplete — or entirely broken — recovery.
| Component | Location | Contains | Risk If Lost |
|---|---|---|---|
| PostgreSQL database | Docker volume postgres_data | Workflow definitions, credentials (encrypted), execution history, user accounts, settings | Total loss of all workflows and configuration |
| N8N_ENCRYPTION_KEY | .env file | AES-256 key for credential encryption | All stored credentials become permanently unreadable |
| n8n data volume | Docker volume n8n_data | Custom nodes, binary execution data, local file storage | Custom nodes and file-based data lost |
| Environment files | .env, docker-compose.yml, Caddyfile | Service configuration, database passwords, domain settings | Must manually reconstruct configuration |
| Workflow JSON exports | n8n UI or API | Portable workflow definitions (without credentials) | Secondary backup — useful for migration or version control |
The PostgreSQL database is the most critical. It contains everything n8n needs to function — workflow definitions, node configurations, credential data (encrypted with the encryption key), execution logs, and user accounts. Without a valid database backup and the matching encryption key, you are starting from scratch.
PostgreSQL Backup with pg_dump
PostgreSQL's pg_dump utility creates a consistent, point-in-time snapshot of the entire database while n8n continues running. Unlike SQLite (which requires stopping the application or risking corruption), PostgreSQL handles concurrent reads and backup operations cleanly. This is one of the key reasons we recommend PostgreSQL over SQLite for production n8n deployments.
Manual Backup
Run this command to create a compressed database dump:
# Create backup directory
mkdir -p ~/n8n-backups
# Dump the database (compressed)
docker exec n8n-docker-postgres-1 \
pg_dump -U n8n -d n8n --format=custom \
> ~/n8n-backups/n8n-db-$(date +%Y%m%d-%H%M%S).dump
# Verify the backup file
ls -lh ~/n8n-backups/
The --format=custom flag produces a compressed binary format that supports selective restore (individual tables) and is significantly smaller than plain SQL dumps. A typical n8n database with 50 workflows and 7 days of execution history compresses to 10–50 MB.
Automated Daily Backups with Cron
Create a backup script at ~/n8n-docker/backup-db.sh:
#!/bin/bash
# n8n PostgreSQL Backup Script
# Runs daily, retains 14 days of backups
BACKUP_DIR="$HOME/n8n-backups/db"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=14
mkdir -p "$BACKUP_DIR"
# Create the backup
docker exec n8n-docker-postgres-1 \
pg_dump -U n8n -d n8n --format=custom \
> "$BACKUP_DIR/n8n-db-$TIMESTAMP.dump" 2>/dev/null
# Verify the backup is not empty
if [ ! -s "$BACKUP_DIR/n8n-db-$TIMESTAMP.dump" ]; then
echo "ERROR: Backup file is empty or failed" >&2
rm -f "$BACKUP_DIR/n8n-db-$TIMESTAMP.dump"
exit 1
fi
# Delete backups older than retention period
find "$BACKUP_DIR" -name "n8n-db-*.dump" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: n8n-db-$TIMESTAMP.dump ($(du -h "$BACKUP_DIR/n8n-db-$TIMESTAMP.dump" | cut -f1))"
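The retention line relies on `find -mtime +N`, which matches files whose age exceeds N 24-hour periods. If you want to convince yourself of the behavior before trusting it with real backups, here is a small sketch that exercises the same expression against a scratch directory with fake timestamps (the directory and file names are illustrative only):

```shell
#!/bin/bash
# Demonstration of the find-based retention logic on a scratch directory.
# The file-name pattern and 14-day window mirror the backup script; the
# scratch directory and back-dated timestamps are illustrative only.
set -euo pipefail

DEMO_DIR=$(mktemp -d)
RETENTION_DAYS=14

# One "fresh" dump and one "old" dump (mtime pushed 20 days into the past)
touch "$DEMO_DIR/n8n-db-fresh.dump"
touch -d "20 days ago" "$DEMO_DIR/n8n-db-old.dump"

# The same expression the backup script uses
find "$DEMO_DIR" -name "n8n-db-*.dump" -mtime +$RETENTION_DAYS -delete

ls "$DEMO_DIR"   # only n8n-db-fresh.dump should remain
rm -rf "$DEMO_DIR"
```

Note that `-mtime +14` matches files strictly older than 14 full days, so a dump created exactly 14 days ago survives one more cycle — the effective retention is 15 daily dumps, which errs on the safe side.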
Make it executable and add it to cron:
# Make executable
chmod +x ~/n8n-docker/backup-db.sh
# Add to crontab (runs daily at 3 AM)
crontab -e
# Add this line:
0 3 * * * /home/deploy/n8n-docker/backup-db.sh >> /home/deploy/n8n-backups/backup.log 2>&1
The backup script checks that the output file is not empty before considering the backup successful. This catches scenarios where the PostgreSQL container is down or the Docker exec command fails silently — common issues that leave you with a false sense of backup security.
Backup Frequency Guidelines
| Workload | Recommended Frequency | Retention |
|---|---|---|
| Light (<20 workflows, personal) | Daily | 7 days |
| Medium (20–100 workflows, team) | Every 6 hours | 14 days |
| Heavy (100+ workflows, business-critical) | Every 2 hours | 30 days |
For medium and heavy workloads, adjust the cron schedule accordingly: 0 */6 * * * for every 6 hours, or 0 */2 * * * for every 2 hours. More frequent backups reduce the maximum data loss window (your RPO — Recovery Point Objective).
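Put together, the medium and heavy schedules translate to crontab entries like the following (paths assume the same `/home/deploy` layout as the earlier examples; pick one entry, not both):

```shell
# Medium workload: every 6 hours, 14-day retention (script default)
0 */6 * * * /home/deploy/n8n-docker/backup-db.sh >> /home/deploy/n8n-backups/backup.log 2>&1

# Heavy workload: every 2 hours, 30-day retention (set RETENTION_DAYS=30 in the script)
0 */2 * * * /home/deploy/n8n-docker/backup-db.sh >> /home/deploy/n8n-backups/backup.log 2>&1
```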
Docker Volume Backup
The n8n data volume (n8n_data) contains files that are not stored in the PostgreSQL database: custom community nodes you have installed, binary data from file-handling workflows, and certain configuration files. While the database backup captures the majority of your n8n state, a complete recovery requires the data volume as well.
#!/bin/bash
# n8n Docker Volume Backup Script
BACKUP_DIR="$HOME/n8n-backups/volumes"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=14
mkdir -p "$BACKUP_DIR"
# Backup n8n data volume
docker run --rm \
-v n8n-docker_n8n_data:/source:ro \
-v "$BACKUP_DIR":/backup \
alpine tar czf "/backup/n8n-data-$TIMESTAMP.tar.gz" -C /source .
# Backup environment files
tar czf "$BACKUP_DIR/n8n-config-$TIMESTAMP.tar.gz" \
-C "$HOME/n8n-docker" \
.env docker-compose.yml Caddyfile 2>/dev/null
# Clean old backups
find "$BACKUP_DIR" -name "n8n-data-*.tar.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "n8n-config-*.tar.gz" -mtime +$RETENTION_DAYS -delete
echo "Volume backup completed: n8n-data-$TIMESTAMP.tar.gz"
This script uses a temporary Alpine container to mount the Docker volume read-only and create a compressed archive. It also backs up your environment files (.env, docker-compose.yml, Caddyfile), which contain the configuration needed to recreate your deployment.
Combine both backup scripts into a single cron entry: create a master backup-all.sh that calls both scripts sequentially. This ensures database and volume backups are created at the same point in time, making restores consistent.
The Encryption Key — Your Most Important Backup
This deserves its own section because no other backup component has this property: if you lose the N8N_ENCRYPTION_KEY, every credential stored in n8n becomes permanently unrecoverable. There is no reset mechanism, no master key, no support ticket that can fix this.
When you first set up n8n (following our Docker guide, for example), you generate the encryption key with openssl rand -hex 32. This produces a 64-character hexadecimal string that n8n uses as the AES-256 encryption key for all credential data in PostgreSQL. Every API key, OAuth token, database password, and SMTP credential is encrypted with this key before being written to the database.
If you restore a database backup onto a new server but use a different encryption key, n8n will start normally — your workflows will load, the UI will work — but every credential will fail to decrypt. You will see errors like "Credentials could not be decrypted" when any workflow tries to authenticate with an external service. The only fix is the original key.
Where to Store the Encryption Key
- Password manager: 1Password, Bitwarden, or KeePass. Store it as a secure note with a descriptive name like "n8n Production Encryption Key — [server name]." This is the minimum.
- Separate encrypted backup: An encrypted USB drive or a file encrypted with GPG stored in a different physical location. For teams, a shared secrets vault (HashiCorp Vault, AWS Secrets Manager) is appropriate.
- Printed copy in a safe: For business-critical deployments, a printed copy in a physical safe is not paranoid — it is prudent. Digital storage can fail in correlated ways (cloud provider outage, compromised master password).
Do not store the encryption key only on the same server as n8n. If the server's disk fails and your backups are also on that disk (or the same storage system), you lose both the database and the key simultaneously. The entire point of backing up the key is to store it in a location with independent failure modes.
Verifying Your Key Is Correct
After storing the key, verify it by checking the value in your .env file matches what you saved:
# Display the current encryption key
grep N8N_ENCRYPTION_KEY ~/n8n-docker/.env
# Compare with your stored backup
# The output should be a 64-character hex string like:
# N8N_ENCRYPTION_KEY=a1b2c3d4e5f6...your64charkey
Do this verification now, before you need it. Discovering a mismatched key during a disaster recovery is a scenario you want to avoid entirely.
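If you would rather not paste the key itself around while verifying, one option is to compare SHA-256 fingerprints: the fingerprint confirms a match without revealing the key, so it can safely be noted anywhere. This is a sketch, not part of the guide's scripts, and the paths are illustrative:

```shell
#!/bin/bash
# Compare the live encryption key against a stored copy by fingerprint,
# so the key itself never appears on screen. Paths are illustrative.
ENV_FILE="$HOME/n8n-docker/.env"

# Fingerprint of the key currently in use
live_key=$(grep '^N8N_ENCRYPTION_KEY=' "$ENV_FILE" | cut -d= -f2)
live_fp=$(printf '%s' "$live_key" | sha256sum | awk '{print $1}')

# Fingerprint of the copy you saved (entered interactively)
read -r -p "Paste the stored key: " stored_key
stored_fp=$(printf '%s' "$stored_key" | sha256sum | awk '{print $1}')

if [ "$live_fp" = "$stored_fp" ]; then
    echo "MATCH: stored key is identical to the live key"
else
    echo "MISMATCH: stored key does NOT match the live key" >&2
    exit 1
fi
```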
Off-Site Storage with rclone
Backups stored on the same server as n8n protect against accidental deletion and bad updates. They do not protect against disk failure, datacenter outages, or ransomware that encrypts the entire filesystem. Off-site storage is not optional for production deployments.
rclone is a command-line tool that syncs files to virtually any cloud storage provider. It supports S3-compatible storage (Backblaze B2, Wasabi, AWS S3, MinIO), Google Cloud Storage, Azure Blob, and over 40 other backends. Install it on your VPS:
# Install rclone
curl https://rclone.org/install.sh | sudo bash
# Configure a remote (interactive wizard)
rclone config
Example: Backblaze B2 Setup
Backblaze B2 is one of the most cost-effective options for backup storage: $0.006/GB/month for storage and $0.01/GB for downloads. A typical n8n backup set (14 days of daily dumps) occupies 200 MB–1 GB, costing well under $0.10/month.
# After running rclone config and creating a "b2" remote:
# Sync backups to B2
rclone sync ~/n8n-backups b2:your-n8n-backups-bucket \
--transfers 4 \
--min-age 1m
# Verify the upload
rclone ls b2:your-n8n-backups-bucket
Automated Off-Site Sync
Add the rclone sync to your backup script or as a separate cron entry that runs after backups complete:
# Add to crontab (runs 30 minutes after the backup)
30 3 * * * rclone sync /home/deploy/n8n-backups b2:your-n8n-backups-bucket --transfers 4 --min-age 1m >> /home/deploy/n8n-backups/offsite-sync.log 2>&1
Alternatively, Wasabi offers S3-compatible storage with no egress fees at $0.0069/GB/month — useful if you anticipate frequent restore testing (which you should).
Your .env file contains the database password and encryption key. When syncing to off-site storage, ensure the backup archive that contains .env is encrypted. Use rclone crypt to create an encrypted remote, or encrypt the archive with GPG before upload: gpg --symmetric --cipher-algo AES256 n8n-config-*.tar.gz
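Before trusting GPG-encrypted archives off-site, confirm you can actually decrypt with your stored passphrase. A round-trip sketch (file names and the inline passphrase are illustrative; note that GnuPG 2.1+ needs `--pinentry-mode loopback` to accept a passphrase non-interactively in batch mode):

```shell
#!/bin/bash
# Round-trip test for GPG symmetric encryption: encrypt a scratch file,
# decrypt it back, and confirm the contents survive unchanged.
set -euo pipefail

WORK=$(mktemp -d)
echo "test-secret-contents" > "$WORK/sample.txt"

# Encrypt with the same cipher the backup uses
gpg --batch --yes --pinentry-mode loopback \
    --symmetric --cipher-algo AES256 \
    --passphrase "correct horse" \
    --output "$WORK/sample.txt.gpg" "$WORK/sample.txt"

# Decrypt and compare byte for byte
gpg --batch --yes --pinentry-mode loopback \
    --decrypt --passphrase "correct horse" \
    --output "$WORK/restored.txt" "$WORK/sample.txt.gpg"

cmp "$WORK/sample.txt" "$WORK/restored.txt" && echo "Round-trip OK"
rm -rf "$WORK"
```

Run this once after creating your passphrase file, substituting `--passphrase-file` for the inline passphrase; knowing decryption works is the whole point of the exercise.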
Restore Testing
A backup you have never restored is a backup you hope works. Hope is not a recovery strategy. Quarterly restore testing is the minimum for production n8n deployments — monthly is better if your workflows process financial or customer data.
Restore Procedure
To test a restore, spin up a separate VPS (a $9.58/mo instance for a few hours costs pennies) and follow these steps:
# 1. Set up Docker on the test server
# (Follow the initial setup from the Docker guide)
# 2. Copy your backup files to the test server
scp ~/n8n-backups/db/n8n-db-LATEST.dump test-server:~/
scp ~/n8n-backups/volumes/n8n-config-LATEST.tar.gz test-server:~/
# 3. On the test server: extract configuration
mkdir -p ~/n8n-docker && cd ~/n8n-docker
tar xzf ~/n8n-config-LATEST.tar.gz
# 4. Start only PostgreSQL
docker compose up -d postgres
sleep 10
# 5. Restore the database
cat ~/n8n-db-LATEST.dump | docker exec -i n8n-docker-postgres-1 \
pg_restore -U n8n -d n8n --clean --if-exists
# 6. Start the rest of the stack
docker compose up -d
# 7. Verify:
# - Can you log in to the n8n UI?
# - Do all workflows appear?
# - Can you open a credential and see it (not "decryption failed")?
# - Run a test workflow that uses an external API credential
Restore Checklist
- Login works — Your user account and password are stored in the database. If login works, the database restore is valid.
- Workflows are present — Check the workflow count matches your production instance.
- Credentials decrypt — Open any credential in the UI. If you see the stored values (e.g., an API key), the encryption key is correct. If you see a decryption error, the key in your .env does not match the key that encrypted the database contents.
- Test execution succeeds — Run a simple workflow that authenticates with an external service. This validates the entire chain: database restore, credential decryption, and network connectivity.
Document the results and the time it took. Your Recovery Time Objective (RTO) is the total time from "we lost the server" to "n8n is processing workflows again." For most teams following this guide, that is 30–60 minutes, including provisioning a new VPS.
Infrastructure Replication vs Application Backups
If you run n8n on MassiveGRID's high-availability infrastructure, your data is already protected by Ceph distributed storage with 3x replication. Every block of your PostgreSQL data, n8n configuration, and Docker volumes exists on three independent storage nodes. A disk failure — or even an entire storage node failure — is transparent to your running containers. Your data is intact and accessible without any intervention.
This naturally raises the question: if my data is already replicated three times, why do I need application-level backups?
Because they protect against different failure modes:
| Threat | Ceph 3x Replication | Application Backups |
|---|---|---|
| Disk failure | Protected | Not needed |
| Storage node failure | Protected | Not needed |
| Accidental DROP TABLE | Not protected (replicates the deletion) | Protected |
| Bad n8n upgrade corrupts data | Not protected (replicates corruption) | Protected |
| Workflow overwrites credentials | Not protected | Protected |
| Ransomware encrypts filesystem | Not protected (replicates encryption) | Protected (off-site copies) |
| Lost encryption key | Not applicable | Protected (if stored separately) |
Ceph replication is a storage-level protection. It ensures that hardware failures do not result in data loss. Application backups are a logic-level protection. They ensure that human errors, software bugs, and security incidents do not result in data loss. These are complementary layers, not redundant ones. You need both.
If you are on infrastructure without storage replication (most standard VPS providers use single local disks), application backups become even more critical because they are your only protection against hardware failure as well.
Complete Backup Script
Here is a unified backup script that combines all components. Save it as ~/n8n-docker/backup-n8n.sh:
#!/bin/bash
# =============================================================
# n8n Complete Backup Script
# Backs up: PostgreSQL, Docker volumes, config files
# Syncs to off-site storage via rclone
# =============================================================
set -euo pipefail
BACKUP_DIR="$HOME/n8n-backups"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=14
RCLONE_REMOTE="b2:your-n8n-backups-bucket" # Change this
LOG_FILE="$BACKUP_DIR/backup.log"
mkdir -p "$BACKUP_DIR/db" "$BACKUP_DIR/volumes"
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"; }
log "=== Backup started ==="
# 1. PostgreSQL database
log "Backing up PostgreSQL..."
docker exec n8n-docker-postgres-1 \
pg_dump -U n8n -d n8n --format=custom \
> "$BACKUP_DIR/db/n8n-db-$TIMESTAMP.dump" 2>/dev/null
if [ ! -s "$BACKUP_DIR/db/n8n-db-$TIMESTAMP.dump" ]; then
log "ERROR: Database backup failed or is empty"
rm -f "$BACKUP_DIR/db/n8n-db-$TIMESTAMP.dump"
exit 1
fi
log "Database backup: $(du -h "$BACKUP_DIR/db/n8n-db-$TIMESTAMP.dump" | cut -f1)"
# 2. n8n data volume
log "Backing up n8n data volume..."
docker run --rm \
-v n8n-docker_n8n_data:/source:ro \
-v "$BACKUP_DIR/volumes":/backup \
alpine tar czf "/backup/n8n-data-$TIMESTAMP.tar.gz" -C /source .
log "Volume backup complete"
# 3. Configuration files (encrypted)
log "Backing up config files..."
tar czf - -C "$HOME/n8n-docker" .env docker-compose.yml Caddyfile 2>/dev/null \
| gpg --batch --yes --pinentry-mode loopback \
--symmetric --cipher-algo AES256 \
--passphrase-file "$HOME/.backup-passphrase" \
> "$BACKUP_DIR/volumes/n8n-config-$TIMESTAMP.tar.gz.gpg"
log "Config backup complete (encrypted)"
# 4. Clean old local backups
find "$BACKUP_DIR/db" -name "n8n-db-*.dump" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR/volumes" -name "n8n-data-*.tar.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR/volumes" -name "n8n-config-*.tar.gz.gpg" -mtime +$RETENTION_DAYS -delete
log "Old backups cleaned (retention: $RETENTION_DAYS days)"
# 5. Sync to off-site storage
if command -v rclone &> /dev/null; then
log "Syncing to off-site storage..."
rclone sync "$BACKUP_DIR" "$RCLONE_REMOTE" \
--transfers 4 --min-age 1m --exclude "backup.log" 2>> "$LOG_FILE"
log "Off-site sync complete"
else
log "WARNING: rclone not installed, skipping off-site sync"
fi
log "=== Backup completed successfully ==="
Set it up:
# Create a passphrase file for config encryption
openssl rand -base64 32 > ~/.backup-passphrase
chmod 600 ~/.backup-passphrase
# IMPORTANT: Store this passphrase in your password manager too
# Make the script executable
chmod +x ~/n8n-docker/backup-n8n.sh
# Add to crontab
crontab -e
# Add: 0 3 * * * /home/deploy/n8n-docker/backup-n8n.sh
After the first backup run, check ~/n8n-backups/backup.log to confirm everything completed without errors. Then run rclone ls b2:your-n8n-backups-bucket (or your equivalent) to verify files reached off-site storage. Do not assume it works — verify it works.
Workflow JSON Exports — A Secondary Safety Net
In addition to database backups, n8n allows you to export individual workflows as JSON files. These exports contain the workflow structure and node configurations but not the credentials — making them safe to store in version control (Git) and useful for migrating workflows between instances.
# Export all workflows via n8n CLI
# (--separate writes one JSON file per workflow; create the target dir first)
docker exec n8n-docker-n8n-1 mkdir -p /home/node/.n8n/workflow-exports
docker exec n8n-docker-n8n-1 n8n export:workflow --all --separate \
--output=/home/node/.n8n/workflow-exports/
# Copy exports from the container volume
docker cp n8n-docker-n8n-1:/home/node/.n8n/workflow-exports/ \
~/n8n-backups/workflow-json/
This is a useful secondary backup because workflow JSON files are human-readable, version-controllable, and portable between n8n instances (even with different encryption keys). They do not replace database backups — you still need pg_dump for credentials, execution history, and user accounts — but they provide an additional recovery path if you need to rebuild on a completely fresh instance.
Next Steps
You now have a complete backup strategy: automated PostgreSQL dumps, Docker volume archives, encrypted off-site storage, and a tested restore procedure. Here is where to go from here.
Security hardening — Backups protect your data at rest. Our security hardening checklist covers protecting it in transit and at the application layer: firewall rules, access controls, credential management, and infrastructure-level protections.
Scaling your deployment — As your workflow count grows, backup sizes and frequencies may need adjustment. If you are scaling to queue mode with multiple workers, ensure your backup script captures all relevant Docker volumes. See our VPS sizing guide for resource planning.
Getting started — If you followed this guide before setting up n8n, our complete Docker deployment guide walks you through the initial installation. Come back here after your instance is running to set up backups from day one.
When MassiveGRID is not needed: If your n8n instance runs personal automations and downtime is merely inconvenient, daily pg_dump to the same server is a reasonable starting point. Off-site storage and HA infrastructure become important when your workflows process business data, customer information, or financial transactions — when the cost of data loss exceeds the cost of prevention.