n8n has become the workflow automation platform of choice for teams that refuse to hand their data to third-party SaaS providers. Over 60,000 companies run n8n in production today, and the majority of serious deployments are self-hosted. The reasons are straightforward: you own your data, you control your costs, and you eliminate the per-execution pricing that makes platforms like Zapier and Make increasingly expensive as your automation scales.
But most self-hosting guides leave you with a half-finished setup — n8n running on SQLite behind a manually configured Nginx reverse proxy, with no backup strategy and no thought given to what happens when your workflow count exceeds a dozen. This guide gives you a production-ready n8n deployment in about 20 minutes: PostgreSQL for the database, Caddy for automatic SSL, environment hardening for security, and performance tuning that will carry you from your first workflow to your thousandth.
If you are still evaluating whether self-hosting makes financial sense for your use case, read our cost comparison of self-hosted n8n vs. n8n Cloud vs. Zapier first. The short version: self-hosting on a $9.58/mo VPS replaces $200+/mo in SaaS fees for most teams running 20+ active workflows.
Prerequisites
Before you start, you need three things:
- A VPS with at least 2 vCPU, 4 GB RAM, and 64 GB SSD storage. This is the minimum for a production n8n instance running PostgreSQL alongside the main application. n8n itself consumes 300–500 MB of RAM at idle and spikes to 1–2 GB during complex workflow executions. PostgreSQL needs another 512 MB–1 GB depending on connection pool size. The remaining headroom is for the OS, Docker overhead, and Caddy.
- A domain name (or subdomain) pointed at your server's IP address via an A record, for example `n8n.yourdomain.com`. Caddy will handle SSL certificate provisioning automatically, but the DNS record must resolve before you launch the stack.
- Basic terminal familiarity. You should be comfortable with SSH, editing files, and running commands as root or via `sudo`.
Recommended VPS Configuration
On MassiveGRID's Cloud VPS, you can configure resources independently. Here is the math for the recommended starter configuration:
| Resource | Amount | Unit Price | Monthly Cost |
|---|---|---|---|
| vCPU | 2 cores | $2.87/core | $5.74 |
| RAM | 4 GB | $0.80/GB | $3.20 |
| SSD Storage | 64 GB | $0.01/GB | $0.64 |
| Total | $9.58/mo | ||
This is enough to run 50–100 active workflows with moderate execution frequency. For guidance on choosing the right specs for larger workloads, see our dedicated best VPS for n8n guide.
Choose a data center location close to the services your workflows interact with. If your workflows primarily call APIs hosted in the US or EU, pick New York, London, or Frankfurt. MassiveGRID operates in all four locations with the same pricing.
Step 1 — Initial Server Setup
SSH into your fresh VPS. All commands below assume Ubuntu 22.04 or 24.04 LTS, which is what we recommend for n8n deployments. Other Debian-based distributions will work with minor adjustments.
Update Packages and Create a Deploy User
# Update system packages
sudo apt update && sudo apt upgrade -y
# Create a non-root user for running services
sudo adduser deploy
sudo usermod -aG sudo deploy
# Switch to the new user
su - deploy

Running n8n as root is unnecessary and increases your attack surface. The deploy user has sudo access when needed, but Docker containers will run under their own namespaced user contexts.
Install Docker Engine and Docker Compose v2
Install Docker using the official repository. Do not use the docker.io package from Ubuntu's default repositories — it is often several versions behind and lacks Compose v2 integration.
# Install prerequisites
sudo apt install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine + Compose plugin
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add deploy user to the docker group
sudo usermod -aG docker deploy
# Apply group changes (or log out and back in)
newgrp docker
# Verify installation
docker --version
docker compose version

You should see Docker Engine 27.x+ and Docker Compose v2.x+. If `docker compose version` returns an error, your installation used the legacy standalone binary. Reinstall using the steps above.
Step 2 — Docker Compose Configuration
Create a project directory and the main Compose file. This configuration runs three services: n8n (the application), PostgreSQL 16 (the database), and Caddy (the reverse proxy and SSL terminator).
# Create project directory
mkdir -p ~/n8n-docker && cd ~/n8n-docker
# Create the Docker Compose file
nano docker-compose.yml

Paste the following complete configuration:
services:
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
networks:
- n8n-network
depends_on:
- n8n
n8n:
image: n8nio/n8n:latest
restart: unless-stopped
environment:
- N8N_HOST=${N8N_HOST}
- N8N_PORT=5678
- N8N_PROTOCOL=https
- WEBHOOK_URL=https://${N8N_HOST}/
- GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
- TZ=${GENERIC_TIMEZONE}
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=postgres
- DB_POSTGRESDB_PORT=5432
- DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
- DB_POSTGRESDB_USER=${POSTGRES_USER}
- DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
- N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
- N8N_USER_MANAGEMENT_JWT_SECRET=${N8N_JWT_SECRET}
- EXECUTIONS_DATA_PRUNE=true
- EXECUTIONS_DATA_MAX_AGE=168
volumes:
- n8n_data:/home/node/.n8n
networks:
- n8n-network
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:16-alpine
restart: unless-stopped
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- n8n-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
volumes:
caddy_data:
caddy_config:
n8n_data:
postgres_data:
networks:
n8n-network:

What Each Service Does
- Caddy listens on ports 80 and 443, handles TLS certificate provisioning via Let's Encrypt automatically, and proxies all traffic to n8n on its internal port 5678. No manual certificate renewal. No Nginx configuration. No certbot cron jobs.
- n8n runs the workflow engine. It connects to PostgreSQL instead of the default SQLite backend. This is critical for production — SQLite locks the entire database on writes, which causes execution failures when multiple workflows trigger simultaneously.
- PostgreSQL 16 stores all workflow definitions, credentials (encrypted), execution history, and user data. The `healthcheck` block ensures n8n does not attempt to connect before PostgreSQL is ready to accept connections.
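The Compose file above pulls every `${...}` value from a `.env` file in the same directory. A minimal sketch — all values here are placeholders; generate real secrets with `openssl rand -hex 32`:

```
# ~/n8n-docker/.env — placeholder values, replace before launching
POSTGRES_USER=n8n
POSTGRES_PASSWORD=change-me-strong-password
POSTGRES_DB=n8n
N8N_HOST=n8n.yourdomain.com
GENERIC_TIMEZONE=UTC
# Generate with: openssl rand -hex 32
N8N_ENCRYPTION_KEY=your-64-char-hex-key
N8N_JWT_SECRET=another-random-secret
```

Keep this file out of version control — it contains every secret the stack depends on.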
Now create the Caddyfile in the same directory:
# Create the Caddyfile
nano Caddyfile

Paste this configuration:
{
email your-email@example.com
}
n8n.yourdomain.com {
reverse_proxy n8n:5678 {
flush_interval -1
}
}

Replace `your-email@example.com` with your actual email (used for Let's Encrypt notifications) and `n8n.yourdomain.com` with your actual domain. The `flush_interval -1` directive disables response buffering, which is required for n8n's server-sent events (SSE) used in the workflow editor's real-time updates.
Do not expose port 5678 directly. All traffic must flow through Caddy. Exposing n8n's port bypasses SSL and authentication, leaving your instance — and every stored credential — accessible to anyone who finds your server's IP address.
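With the Compose file and Caddyfile in place — and the `${...}` variables the Compose file references defined in a `.env` file alongside them — launch the stack:

```shell
cd ~/n8n-docker
docker compose up -d     # start caddy, n8n, and postgres in the background
docker compose logs -f   # follow startup logs; Ctrl+C stops following, not the stack
```

PostgreSQL should report it is ready before n8n starts, thanks to the healthcheck-gated `depends_on`.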
Ready to deploy n8n?
Skip the research — configure your VPS in 60 seconds.
Recommended: 2 vCPU / 4 GB RAM / 64 GB SSD — $9.58/mo
Configure Your VPS →

How to Self-Host n8n on a High-Availability VPS (Complete Docker Guide)
Why Self-Host n8n
n8n Cloud's Starter plan costs €24/month and caps you at 2,500 workflow executions. That sounds reasonable until you connect a Stripe webhook, a Shopify order flow, and a CRM sync — and burn through 2,500 executions in a week. The Pro plan jumps to €60/month for 10,000 executions, and the Enterprise tier goes higher still. Meanwhile, a self-hosted n8n instance running on a VPS costs under $10/month with zero execution limits. You own the infrastructure, you own the data, and the only ceiling is your server's hardware.
But self-hosting n8n is not the same as self-hosting a blog. n8n runs 24/7. It listens for webhooks from external services — Stripe payment events, Shopify orders, GitHub commits, form submissions. If your server goes down at 3 AM, those webhooks silently fail. Stripe doesn't retry indefinitely. Shopify moves on. Your CRM sync breaks without anyone noticing until a customer complains. The cost of downtime with n8n is not "my website is offline" — it's "business-critical automations are silently dropping data."
This guide walks you through a production-ready n8n Docker setup in about 30 minutes: n8n with PostgreSQL for reliable data storage, Caddy for automatic SSL, and a configuration that is ready for high-availability infrastructure from day one. Self-hosting requires comfort with Docker, Linux, and the command line — if you have that, the savings and control are significant.
Prerequisites & VPS Sizing
n8n's official documentation suggests 1 vCPU and 2 GB RAM as minimum requirements. That is enough to run n8n alone with SQLite in test mode. It is not enough for a production stack with PostgreSQL, a reverse proxy, and workflows that process real data. Here is what you actually need.
Minimum Production Specs
- CPU: 2 vCPU (handles n8n execution workers + PostgreSQL queries concurrently)
- RAM: 4 GB (the real bottleneck — see breakdown below)
- Storage: 64 GB NVMe SSD (PostgreSQL write performance matters)
- OS: Ubuntu 24.04 LTS
Where the Memory Goes
| Component | RAM Usage |
|---|---|
| PostgreSQL 16 | 500 MB – 1 GB |
| n8n (main process + workers) | ~500 MB |
| Docker engine overhead | ~200 MB |
| Caddy reverse proxy | ~50 MB |
| OS + system services + buffer | 500 MB – 1 GB |
Total: 1.75 – 2.75 GB at steady state. That leaves headroom on a 4 GB server for workflow execution spikes — complex workflows that process large JSON payloads or run multiple HTTP requests in parallel can temporarily consume 500 MB–1 GB of additional memory. With only 2 GB of total RAM, you would be swapping to disk during these spikes, which kills PostgreSQL performance.
NVMe storage matters specifically because of PostgreSQL. Every workflow execution writes to the database — execution data, status updates, error logs. On a traditional SATA SSD, write latency under load can spike to 5–10ms. NVMe keeps write latency under 1ms even during heavy WAL (write-ahead log) flushes. Over thousands of daily executions, this adds up.
Configure a production-ready n8n VPS
2 vCPU / 4 GB RAM / 64 GB NVMe — (2×$2.87) + (4×$0.80) + (64×$0.01) = $9.58/mo
Configure Your VPS →

Initial Server Setup
Start with a fresh Ubuntu 24.04 LTS server. SSH in as root and run through the baseline security and tooling setup. These steps take about 10 minutes.
Create a Non-Root User
Running Docker as root is convenient but risky. Create a dedicated user with sudo access:
# Create user and add to sudo group
adduser deploy
usermod -aG sudo deploy
# Copy SSH keys to the new user
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
# Test login in a new terminal before continuing
ssh deploy@your-server-ip
Configure the Firewall
UFW (Uncomplicated Firewall) ships with Ubuntu. Enable it with only the ports n8n needs:
# Allow SSH, HTTP, and HTTPS
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Enable firewall
sudo ufw enable
# Verify
sudo ufw status
Port 80 is required even though you will serve everything over HTTPS. Caddy uses port 80 for the ACME HTTP-01 challenge to issue and renew Let's Encrypt certificates automatically.
System Updates
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl git wget apt-transport-https ca-certificates gnupg lsb-release
Install Docker & Compose v2
Install Docker Engine using Docker's official repository — not the docker.io package from Ubuntu's repos, which often lags several versions behind:
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine + Compose plugin
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add your user to the docker group (avoids needing sudo for docker commands)
sudo usermod -aG docker deploy
newgrp docker
Verify the installation:
docker --version
# Docker version 27.x.x, build xxxxx
docker compose version
# Docker Compose version v2.x.x
If docker compose version returns an error, you may have the older standalone docker-compose (v1). The steps above install the v2 plugin, which is invoked as docker compose (without the hyphen). All commands in this guide use the v2 syntax.
Production Docker Compose Configuration
This is the core of the setup. The Docker Compose file defines three services: n8n itself, PostgreSQL 16 as the database backend, and Caddy as a reverse proxy handling SSL termination. Create the project directory and files:
mkdir -p ~/n8n-docker && cd ~/n8n-docker
Why PostgreSQL Over SQLite
n8n defaults to SQLite for simplicity. For a production self-hosted instance, PostgreSQL is the correct choice for three reasons:
- Concurrency: SQLite uses file-level locking. When a workflow executes and writes its result while another workflow starts simultaneously, one blocks the other. PostgreSQL handles concurrent reads and writes without contention.
- Reliability: PostgreSQL has WAL (write-ahead logging), point-in-time recovery, and crash recovery built in. SQLite can corrupt on unexpected shutdowns — a real risk on infrastructure without UPS/HA.
- Backup tooling: `pg_dump` produces consistent, point-in-time backups while n8n is running. SQLite backups require either stopping n8n or using `.backup`, which can miss in-flight transactions.
The Complete docker-compose.yml
Create the file at ~/n8n-docker/docker-compose.yml:
services:
postgres:
image: postgres:16-alpine
restart: unless-stopped
environment:
POSTGRES_USER: n8n
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: n8n
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U n8n"]
interval: 10s
timeout: 5s
retries: 5
networks:
- n8n-network
n8n:
image: n8nio/n8n:latest
restart: unless-stopped
depends_on:
postgres:
condition: service_healthy
environment:
# Database
DB_TYPE: postgresdb
DB_POSTGRESDB_HOST: postgres
DB_POSTGRESDB_PORT: 5432
DB_POSTGRESDB_DATABASE: n8n
DB_POSTGRESDB_USER: n8n
DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
# n8n configuration
N8N_HOST: ${N8N_HOST}
N8N_PORT: 5678
N8N_PROTOCOL: https
WEBHOOK_URL: https://${N8N_HOST}/
# Security
N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
# Timezone and execution settings
GENERIC_TIMEZONE: ${GENERIC_TIMEZONE:-UTC}
TZ: ${GENERIC_TIMEZONE:-UTC}
EXECUTIONS_DATA_PRUNE: "true"
EXECUTIONS_DATA_MAX_AGE: 168
# Optional: set execution mode
EXECUTIONS_MODE: regular
volumes:
- n8n_data:/home/node/.n8n
networks:
- n8n-network
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
depends_on:
- n8n
networks:
- n8n-network
volumes:
postgres_data:
n8n_data:
caddy_data:
caddy_config:
networks:
n8n-network:
driver: bridge
Environment Variables
Create the .env file in the same directory. This keeps secrets out of the Compose file:
# ~/n8n-docker/.env
# PostgreSQL password - generate a strong random password
POSTGRES_PASSWORD=your-strong-database-password-here
# Your domain - the FQDN pointing to this server
N8N_HOST=n8n.yourdomain.com
# Encryption key for credentials stored in the database
N8N_ENCRYPTION_KEY=your-64-char-hex-key-here
# Timezone (IANA format)
GENERIC_TIMEZONE=UTC
Generate the encryption key with openssl rand -hex 32. This key encrypts all credentials (API keys, OAuth tokens, database passwords) stored in n8n's database. Save this key somewhere safe outside the server. If you lose it, every credential in n8n becomes permanently inaccessible — you cannot decrypt them, and you cannot recover them. There is no reset mechanism. Back it up in your password manager immediately.
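Both secrets can be generated in one step. A quick sketch — the variable names are just for illustration:

```shell
# 32 random bytes as hex = the 64-character encryption key n8n expects
N8N_KEY=$(openssl rand -hex 32)

# A strong random database password (24 bytes, base64-encoded)
DB_PASS=$(openssl rand -base64 24)

echo "N8N_ENCRYPTION_KEY=$N8N_KEY"
echo "POSTGRES_PASSWORD=$DB_PASS"
```

Paste the printed lines into `.env`, then copy the encryption key into your password manager before continuing.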
Key Configuration Decisions Explained
- `restart: unless-stopped` — Containers restart automatically after crashes or server reboots, but respect manual `docker compose stop` commands. Use this over `always` to retain manual control during debugging.
- `EXECUTIONS_DATA_PRUNE: true` + `EXECUTIONS_DATA_MAX_AGE: 168` — Automatically deletes execution data older than 7 days (168 hours). Without this, the `execution_entity` table grows indefinitely and degrades PostgreSQL performance within weeks.
- `WEBHOOK_URL` — Explicitly sets the external URL for webhook triggers. n8n uses this to generate the callback URLs it displays in the UI. If this is wrong, every webhook URL your workflows generate will be wrong.
- PostgreSQL `healthcheck` — The `depends_on: condition: service_healthy` pattern ensures n8n doesn't start until PostgreSQL is actually accepting connections, not just when the container is running. Without this, n8n may crash on first boot with "connection refused" errors.
- Named volumes — `postgres_data`, `n8n_data`, `caddy_data`, and `caddy_config` persist data across container restarts and upgrades. Never use bind mounts for PostgreSQL data in production — named volumes let Docker manage filesystem permissions correctly.
DNS & SSL Configuration
Before starting the stack, configure DNS. Add an A record in your domain's DNS settings:
Type: A
Name: n8n (or your chosen subdomain)
Value: YOUR_VPS_IP_ADDRESS
TTL: 300
Wait for DNS propagation (usually 1–5 minutes with a low TTL, up to 48 hours with high TTL values). Verify with dig n8n.yourdomain.com before proceeding.
Caddyfile
Create ~/n8n-docker/Caddyfile:
n8n.yourdomain.com {
reverse_proxy n8n:5678 {
flush_interval -1
}
}
That is the entire Caddy configuration. Caddy automatically provisions a Let's Encrypt TLS certificate on first request, handles renewal (certificates renew 30 days before expiry), and redirects HTTP to HTTPS. No certbot, no cron jobs, no certificate management of any kind. The flush_interval -1 directive disables response buffering, which is important for n8n's server-sent events (SSE) used by the workflow editor's real-time updates.
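Before starting the stack, you can optionally syntax-check the Caddyfile with Caddy's built-in validator — a sketch using the same image the stack runs:

```shell
# Validate the Caddyfile without starting the proxy
docker run --rm -v "$PWD/Caddyfile:/etc/caddy/Caddyfile:ro" \
  caddy:2-alpine caddy validate --config /etc/caddy/Caddyfile
```

A typo caught here is cheaper than a failed certificate issuance after launch.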
Launch the Stack
cd ~/n8n-docker
docker compose up -d
Watch the logs to confirm everything starts cleanly:
docker compose logs -f
You should see PostgreSQL report "database system is ready to accept connections," followed by n8n logging "n8n ready on 0.0.0.0, port 5678," followed by Caddy obtaining the TLS certificate. The first certificate issuance takes 10–30 seconds. If Caddy logs a TLS error, DNS has not propagated yet — wait and retry.
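Once the logs look clean, a quick status check confirms all three containers are up (exact output formatting varies by Docker version):

```shell
cd ~/n8n-docker
docker compose ps
# The STATUS column should show "Up" for all three services,
# with "(healthy)" next to postgres once its healthcheck passes.
```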
First Login & Post-Install Configuration
Navigate to https://n8n.yourdomain.com in your browser. You will see n8n's setup wizard prompting you to create the initial owner account. Use a strong, unique password — this account has full access to all workflows, credentials, and settings.
Configure SMTP for Email Notifications
Configure an email sender so n8n can deliver workflow error notifications and user-management invitations. On self-hosted instances, SMTP is configured through environment variables. Any transactional email provider works — Mailgun, Postmark, Amazon SES, or even Gmail with an app password. Use a no-reply address on your domain as the sender.
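n8n reads its SMTP settings from environment variables. A sketch of the relevant entries, added under the n8n service's `environment:` block in `docker-compose.yml` — the host and credentials here are placeholders for your provider's values:

```yaml
      # SMTP for error notifications and invitations (placeholder values)
      N8N_EMAIL_MODE: smtp
      N8N_SMTP_HOST: smtp.example.com
      N8N_SMTP_PORT: 587
      N8N_SMTP_USER: ${SMTP_USER}
      N8N_SMTP_PASS: ${SMTP_PASS}
      N8N_SMTP_SENDER: "no-reply@yourdomain.com"
      N8N_SMTP_SSL: "false"   # STARTTLS on 587; set "true" for implicit TLS on 465
```

Add `SMTP_USER` and `SMTP_PASS` to `.env`, then `docker compose up -d` to apply.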
Test Your First Webhook
Create a simple test workflow to verify the entire stack end-to-end:
- Create a new workflow. Add a Webhook trigger node.
- Click "Listen for test event" in the webhook node.
- Copy the test webhook URL (it should start with `https://n8n.yourdomain.com/webhook-test/`).
- From your local machine, send a test request: `curl -X POST https://n8n.yourdomain.com/webhook-test/your-id -H "Content-Type: application/json" -d '{"test": true}'` (the header ensures n8n parses the body as JSON rather than form data).
- Confirm the data appears in n8n's UI. If it does, SSL, DNS, reverse proxy, and n8n are all working correctly.
Activate the workflow and test again using the production webhook URL (without -test in the path). This confirms that n8n processes webhooks even when you are not actively viewing the workflow in the editor.
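The production request is identical except for the path — `/webhook/` instead of `/webhook-test/` (the `your-id` segment is whatever your Webhook node displays):

```shell
# Production webhook test: same payload, -test removed from the path
curl -X POST "https://n8n.yourdomain.com/webhook/your-id" \
  -H "Content-Type: application/json" \
  -d '{"test": true}'
```

Check the workflow's execution list afterwards — production executions appear there rather than live in the editor.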
Why High Availability Matters for n8n
Most self-hosting guides stop here. You have a working n8n instance, SSL is green, webhooks are flowing. The setup above is production-quality software configuration. The question is whether the infrastructure underneath it matches.
The Silent Failure Problem
n8n does not have a built-in external health monitor. If the server goes down, n8n does not send you a notification — because n8n itself is down. External services that send webhooks to your instance will receive connection timeouts. Some services (like Stripe) retry with exponential backoff for a few hours. Others (like many SaaS webhook integrations) try once and move on. Scheduled workflows simply do not fire. There is no queue of missed executions waiting for you when the server comes back.
You can add external uptime monitoring (UptimeRobot, Healthchecks.io) to detect downtime quickly. But detection and recovery are different things. On a standard VPS, "recovery" means:
- You get an alert (if your monitoring is set up correctly).
- You SSH in to diagnose the issue — unless the host machine is the problem, in which case SSH is also down.
- If it is a hardware failure, the hosting provider replaces or migrates the hardware. This takes 30 minutes to several hours depending on the provider and time of day.
- Your n8n instance comes back online. Webhooks received during the outage are gone.
At 3 AM on a Saturday, that timeline extends considerably.
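Wiring up that external check is cheap. A minimal sketch, assuming your domain from the Caddyfile and n8n's `/healthz` endpoint (available on recent versions) — run it from any machine that is *not* the VPS itself:

```shell
# Returns success only when the n8n instance answers HTTP 200
check_n8n() {
  url="$1"
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url" 2>/dev/null)
  [ "$status" = "200" ]
}

# Example cron entry on a separate machine (crontab -e):
# */5 * * * * /usr/local/bin/n8n-check.sh || mail -s "n8n down" you@example.com
check_n8n "https://n8n.yourdomain.com/healthz" && echo "n8n is up"
```

Hosted monitors (UptimeRobot, Healthchecks.io) do the same thing with alerting built in; the point is that the probe must live outside the server it watches.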
How HA Infrastructure Changes the Equation
MassiveGRID's high-availability infrastructure is built on Proxmox VE with automatic failover. Your VPS runs on a cluster of physical nodes, not a single machine. If the underlying hardware fails, the Proxmox HA manager detects the failure and restarts your VM on a healthy node automatically — typically within 30–60 seconds, not hours.
Storage is equally important. Standard VPS storage uses local disks — if the physical drive fails, your data is at risk. MassiveGRID's infrastructure uses Ceph distributed storage with 3x replication: every block of your PostgreSQL data, your n8n configuration, and your Docker volumes exists on three independent storage nodes. A single disk failure, or even an entire storage node failure, is invisible to your running containers.
The practical difference: with standard VPS hosting, a hardware failure is an incident that requires human intervention and causes data loss risk. With HA infrastructure, a hardware failure is a log entry you review the next business day. Your n8n webhooks keep flowing, your workflows keep executing, and your Stripe events keep processing — even while the infrastructure team replaces the failed component.
This is not about "premium hosting." It is about matching infrastructure reliability to workload requirements. If your n8n instance processes test webhooks for a side project, a standard VPS is perfectly fine. If it processes payment events, order fulfillment triggers, or customer-facing automations, the cost of an HA VPS is trivially small compared to the cost of missed events during an outage.
Basic Backup Strategy
High-availability infrastructure protects you from hardware failure. Backups protect you from a different class of problem: human error, accidental deletion, workflow corruption, and bad updates. You need both.
Database Backups with pg_dump
Create a cron job that dumps the PostgreSQL database daily:
# Create backup directory
mkdir -p ~/n8n-backups
# Add to crontab (crontab -e)
0 3 * * * docker exec n8n-docker-postgres-1 pg_dump -U n8n n8n | gzip > ~/n8n-backups/n8n-db-$(date +\%Y\%m\%d).sql.gz
# Retain last 14 days, delete older backups
5 3 * * * find ~/n8n-backups -name "*.sql.gz" -mtime +14 -delete
This runs at 3 AM daily, producing a compressed SQL dump. The pg_dump command creates a consistent snapshot even while n8n is actively running — one of the key advantages of PostgreSQL over SQLite for production use.
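Restoring is the reverse pipeline. A sketch, assuming the container name `n8n-docker-postgres-1` from the cron entry above and an example backup date — restore into an empty database, or expect conflicts with existing rows:

```shell
cd ~/n8n-docker
docker compose stop n8n   # stop writes while the restore runs

# Stream the compressed dump into psql inside the postgres container
gunzip -c ~/n8n-backups/n8n-db-20250101.sql.gz | \
  docker exec -i n8n-docker-postgres-1 psql -U n8n -d n8n

docker compose start n8n
```

Test this procedure once on a throwaway database before you need it for real — an unrehearsed restore is the riskiest part of any backup plan.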
Docker Volume Backup
The n8n data volume contains configuration that is not in the database, including custom node files and local binary data. Back it up alongside the database:
# Backup n8n data volume
docker run --rm -v n8n-docker_n8n_data:/data -v ~/n8n-backups:/backup \
alpine tar czf /backup/n8n-data-$(date +%Y%m%d).tar.gz -C /data .
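The matching restore unpacks the archive into the volume — a sketch with an example date in the filename; stop n8n first so nothing writes while files are replaced:

```shell
cd ~/n8n-docker
docker compose stop n8n
docker run --rm -v n8n-docker_n8n_data:/data -v ~/n8n-backups:/backup \
  alpine sh -c "cd /data && tar xzf /backup/n8n-data-20250101.tar.gz"
docker compose start n8n
```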
The Encryption Key
Your N8N_ENCRYPTION_KEY is the single most critical piece of data to back up. Without it, database backups are only partially useful — you can restore workflows and execution history, but every stored credential (API keys, OAuth tokens, database passwords) becomes unreadable. Store the encryption key in your password manager, a separate encrypted backup location, or both. Do not store it only on the server.
For a comprehensive backup strategy including automated off-site replication, encryption at rest, and disaster recovery procedures, see our detailed guide: n8n Backup & Disaster Recovery.
Ceph 3x replication on MassiveGRID's HA infrastructure protects against disk and node failure at the hardware level. Your backups protect against everything else — accidental DROP TABLE, a bad n8n upgrade, or a workflow that overwrites its own credentials. These are complementary protections, not redundant ones.
Next Steps
You now have a production-grade n8n instance: PostgreSQL for reliable storage, Caddy for automatic SSL, proper environment configuration, and a basic backup routine. Here is where to go from here depending on your needs.
Right-sizing your VPS — If you are unsure whether 2 vCPU / 4 GB is the right starting point for your workload, our Best VPS for n8n guide breaks down resource requirements by workflow complexity and execution volume, with specific sizing tiers from light personal use to heavy production workloads.
Scaling with queue mode — As your workflow count grows, the single-process execution model becomes a bottleneck. n8n's queue mode with Redis and separate worker processes distributes execution across multiple containers or even multiple servers. Our n8n Queue Mode with Redis Workers guide covers the architecture, Docker Compose configuration, and when it actually becomes necessary (hint: later than most people think).
Security hardening — The setup in this guide is secure by default (HTTPS, non-root user, firewall), but there are additional layers worth implementing: rate limiting, IP allowlisting for the admin UI, webhook authentication, and content security headers. The n8n Security Hardening Checklist covers each one with specific configuration.
Comprehensive backups — The basic cron-based backup in this guide is a starting point. For automated off-site backups, encrypted storage, retention policies, and tested restore procedures, see n8n Backup & Disaster Recovery.
Self-hosting n8n is a meaningful commitment to infrastructure ownership. The tradeoff is clear: you save substantially on execution costs and gain complete data control, in exchange for managing a Linux server and Docker stack. With the setup in this guide and the right infrastructure underneath it, that tradeoff works decisively in your favor.
Deploy n8n on infrastructure that doesn't go down
High-availability VPS with Proxmox failover, Ceph storage, and 24/7 human support.
Recommended: 2 vCPU / 4 GB RAM / 64 GB SSD — $9.58/mo
Configure Your VPS →