When You Need Queue Mode

By default, n8n runs as a single Node.js process. That one process handles the web UI, listens for webhook triggers, polls cron schedules, and executes every active workflow — all in the same event loop. For a dozen lightweight automations that fire a few times per hour, this works. The process sits at 200–300 MB of RAM and barely registers on CPU monitors between execution cycles.

The cracks appear when your workload grows. Typical signs that you have outgrown single-process mode: webhook calls start timing out because a long-running execution is monopolizing the event loop, the editor UI turns sluggish while workflows run, scheduled triggers fire late, and the process's memory footprint keeps climbing until a restart.

If you are seeing any of these symptoms, queue mode is the architectural fix. It separates the work of triggering workflows from the work of executing them, so no single long-running job can block the rest of your automation stack.

Architecture Overview

Queue mode splits n8n into distinct components, each with a focused responsibility. Instead of one process doing everything, you get a pipeline:

  +-------------------+
  |   n8n Main        |
  |  (UI + Triggers   |
  |   + Webhooks)     |
  +---------+---------+
            |
    Push execution jobs
            |
  +---------v---------+
  |   Redis           |
  |  (Job Queue /     |
  |   Message Broker) |
  +---------+---------+
            |
    Workers pull jobs
            |
  +---------v---------+----------+-----------+
  | Worker 1          | Worker 2 | Worker N  |
  | (Execute workflow)| (Execute)| (Execute) |
  +-------------------+----------+-----------+
            |               |          |
            +---------------+----------+
                            |
                  +---------v---------+
                  |   PostgreSQL      |
                  |  (Shared State)   |
                  +-------------------+

Main process — runs with command: start. It serves the n8n web editor and REST API, registers and listens for all trigger nodes (cron schedules, webhooks, polling triggers), and pushes execution jobs into the Redis queue. In queue mode the main process does not execute production workflows itself — it strictly hands off work to workers. Setting N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true goes a step further and stops the main process from answering production webhook calls as well, which is only appropriate if you also run dedicated webhook processor instances (started with n8n webhook).

Redis — acts as the message broker between the main process and workers. n8n uses the Bull queue library internally. Redis stores jobs until a worker picks them up, handles job priority and ordering, manages retries for failed executions, and keeps failed jobs in a dedicated failed list so they are not silently lost.

Worker processes — run with command: worker. Each worker is an independent Node.js process that connects to the same Redis instance and the same PostgreSQL database. Workers pull jobs from the queue on a first-come, first-served basis, execute the workflow logic, and write results (execution data, status, error logs) back to the database. You can run as many workers as your server resources allow.

PostgreSQL — the shared database that all processes read from and write to. Workflow definitions, credentials (encrypted), execution history, and settings all live in PostgreSQL. This is a hard requirement for queue mode — the default SQLite database does not support concurrent writes from multiple processes and will corrupt data if you try. If you followed our initial self-hosting guide and used SQLite, you will need to migrate to PostgreSQL before enabling queue mode.

Info

Queue mode requires PostgreSQL and Redis. SQLite does not support concurrent writes from multiple processes. If your current n8n deployment uses SQLite, migrate your data to PostgreSQL first. The n8n CLI provides an export:workflow command to help with this.
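A hedged sketch of that migration using the standard n8n CLI commands (the /backup paths are examples; run these inside your n8n container or wherever the CLI is installed):

```shell
# Export from the old SQLite-backed instance.
# --decrypted writes credentials in plaintext, so delete these files after import.
n8n export:workflow --all --output=/backup/workflows.json
n8n export:credentials --all --decrypted --output=/backup/credentials.json

# After pointing n8n at PostgreSQL (DB_TYPE=postgresdb), import everything back.
n8n import:workflow --input=/backup/workflows.json
n8n import:credentials --input=/backup/credentials.json
```

Import into a fresh PostgreSQL-backed instance before enabling queue mode, and verify a few workflows open correctly in the editor before deleting the SQLite database.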

Docker Compose for Queue Mode

Below is a complete, production-ready Docker Compose configuration. Every environment variable is included — no placeholders to guess at, no missing dependencies. Copy these files, fill in your secrets, and deploy.

Environment File

Create the .env file first. This keeps secrets out of your Compose file and makes it safe to version-control docker-compose.yml separately.

# .env
# Generate with: openssl rand -hex 32
N8N_ENCRYPTION_KEY=your_encryption_key_here

# Database credentials
POSTGRES_PASSWORD=your_strong_database_password_here

# Your domain (used for webhook URLs)
N8N_DOMAIN=n8n.yourdomain.com

# Timezone
GENERIC_TIMEZONE=America/New_York

Critical Warning

N8N_ENCRYPTION_KEY MUST be identical across the main process and all workers. This key encrypts every credential stored in the database — API keys, OAuth tokens, database passwords. If a worker has a different encryption key than the main process, it cannot decrypt credentials, and every workflow execution that requires authentication will fail. The symptoms are confusing: nodes returning 401 errors, database connection failures, and API authentication errors with no obvious cause. Generate one key with openssl rand -hex 32 and use it everywhere.
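To make sure you only ever have one key, generate it and append it to .env in a single step (a minimal sketch; assumes you run it from your Compose directory):

```shell
# Generate a 32-byte key and record it once; every container reads it from .env.
KEY=$(openssl rand -hex 32)
echo "N8N_ENCRYPTION_KEY=${KEY}" >> .env
echo "key length: ${#KEY} hex chars"   # 64 hex characters = 32 random bytes
```

You can later confirm the containers agree by comparing the output of docker compose exec n8n-main printenv N8N_ENCRYPTION_KEY with the same command run against a worker.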

Docker Compose Configuration

# docker-compose.yml
services:
  n8n-main:
    image: n8nio/n8n:latest
    command: start
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - WEBHOOK_URL=https://${N8N_DOMAIN}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_LOG_LEVEL=info
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_LOG_LEVEL=info
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      replicas: 2

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy noeviction
    volumes:
      - redis_data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n -d n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  redis_data:
  postgres_data:

Deploy the entire stack with a single command:

docker compose up -d

This starts the main process, two worker replicas, Redis (with a 128 MB memory cap), and PostgreSQL. The depends_on conditions ensure PostgreSQL and Redis are healthy before n8n processes attempt to connect.

Key Configuration Details

Scaling Workers

One of queue mode's primary advantages is that you can adjust the number of workers without changing any configuration files. Docker Compose handles the scaling natively:

docker compose up -d --scale n8n-worker=3

This command brings the total worker count to three. Each worker independently pulls jobs from the Redis queue, so adding a third worker increases your throughput by roughly 50% (assuming the bottleneck was execution capacity, not PostgreSQL or Redis).

The practical constraint is server memory. Each n8n worker process consumes between 200 MB and 500 MB of RAM depending on the complexity of your active workflows. Workflows that process large JSON payloads, handle file attachments, or call AI APIs sit at the higher end. Simple HTTP-to-Slack workflows sit at the lower end.

Here is a realistic breakdown of how many workers each VPS tier can sustain:

  VPS Config            Workers   Reserved For (main + PostgreSQL + Redis)   MassiveGRID Price
  2 vCPU / 4 GB RAM     1–2       ~2.5 GB                                    $9.58/mo
  4 vCPU / 8 GB RAM     2–4       ~3 GB                                      $19.16/mo
  8 vCPU / 16 GB RAM    4–8       ~4 GB                                      $38.32/mo

The "Reserved For" column accounts for the n8n main process (~300 MB), PostgreSQL (~500 MB–1 GB depending on database size), Redis (~128 MB capped), Docker overhead (~200 MB), and operating system requirements (~500 MB). Everything remaining is available for workers.

A key advantage of MassiveGRID's VPS configurator is independent resource scaling. If you're running 4 workers on a 4 vCPU / 8 GB setup and need more worker capacity, you can increase RAM to 12 or 16 GB without changing your CPU allocation. You pay for the RAM you need without overpaying for CPU cores you don't. This matches how n8n queue mode actually scales — each additional worker needs RAM far more than it needs CPU.

Tip

Start with 2 workers and monitor. Add workers only when you observe execution backlog in the n8n UI (executions waiting longer than a few seconds to start). Over-provisioning workers wastes RAM that PostgreSQL could use for caching, which hurts overall performance.

Monitoring and Troubleshooting

Queue mode introduces more moving parts than single-process mode. Effective monitoring means watching each component independently.

Container resource usage

The quickest way to check per-container CPU and memory consumption:

docker stats --no-stream

This prints a single snapshot showing CPU%, memory usage, and network I/O for every running container. Run it periodically or pipe it to a log file via cron for historical analysis. Look for workers that consistently sit above 400 MB of RAM — those are processing heavy workflows and may benefit from additional workers to distribute the load.
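For the historical log, a crontab entry like this works (a sketch; the log path is an example):

```shell
# m h dom mon dow — append a timestamped snapshot every 5 minutes
*/5 * * * * date >> /var/log/n8n-stats.log && docker stats --no-stream >> /var/log/n8n-stats.log 2>&1
```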

Redis queue depth

Check how many jobs are waiting in the Redis queue:

docker compose exec redis redis-cli LLEN bull:jobs:wait

A consistently growing number means your workers cannot keep up with incoming execution requests. The fix is either adding more workers or optimizing workflow logic to reduce per-execution time. A number that spikes during peak hours but drains between peaks is normal — your current worker count can handle the average load, just not the burst.
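If you want an alert rather than a manual check, a small script can wrap the same command. This is a sketch under two assumptions: the Bull wait-list key name (it can differ across n8n versions and QUEUE_BULL_PREFIX settings) and an example threshold of 50 waiting jobs:

```shell
#!/bin/sh
# Warn when the queue backlog exceeds a threshold.
backlog_status() {   # $1 = jobs waiting, $2 = threshold
  if [ "$1" -gt "$2" ]; then echo "backlog: $1 jobs waiting"; else echo "ok"; fi
}

# Query the live queue depth; falls back to 0 if the stack is not reachable.
DEPTH=$(docker compose exec -T redis redis-cli LLEN bull:jobs:wait 2>/dev/null)
backlog_status "${DEPTH:-0}" 50
```

Run it from cron and route the "backlog" output to your alerting channel of choice.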

Redis memory usage

Verify Redis is operating within its configured memory limit:

docker compose exec -T redis redis-cli INFO memory | grep used_memory_human

If Redis is approaching its --maxmemory limit (128 MB in our configuration), what happens next depends on the eviction policy. For Bull-based queues the safe choice is noeviction: an eviction policy such as allkeys-lru can delete live job keys and corrupt queue state, whereas noeviction makes writes fail loudly so you notice the problem. Either way, if Redis is consistently near capacity, increase the limit to 256 MB or investigate whether stuck/failed jobs are accumulating.
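If you need the extra headroom immediately, Redis accepts a live change (note this is lost on container restart — also update the --maxmemory flag in docker-compose.yml to make it permanent):

```shell
# Raise the cap at runtime, then confirm it took effect.
docker compose exec redis redis-cli CONFIG SET maxmemory 256mb
docker compose exec redis redis-cli CONFIG GET maxmemory
```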

Worker log patterns

Each worker logs execution activity. Check for successful pickups and completions:

docker compose logs n8n-worker --tail 50

Healthy log output includes lines like "Worker started execution" and "Execution finished successfully". Watch for repeated error patterns: connection refused (Redis or PostgreSQL down), timeout errors (long-running workflows exceeding limits), or encryption key mismatches (worker cannot decrypt credentials).
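A quick way to surface only those patterns across all workers (the grep expression is an example, not an exhaustive list of n8n error strings):

```shell
# Scan recent worker logs for the failure signatures described above.
docker compose logs n8n-worker --tail 500 | grep -iE "ECONNREFUSED|timeout|encryption"
```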

Warning

If workers log Error: The encryption key in the settings does not match, the worker's N8N_ENCRYPTION_KEY does not match the main process. Workers cannot execute workflows that use stored credentials until this is fixed. The encryption key must be identical across all n8n containers.

PostgreSQL connection count

Each n8n process (main + each worker) opens multiple database connections. Monitor the total:

docker compose exec postgres psql -U n8n -d n8n -c "SELECT count(*) FROM pg_stat_activity WHERE datname='n8n';"

PostgreSQL's default max_connections is 100. With a main process and 4 workers, you will typically see 15–25 active connections. If connection count approaches 80+, you need to either tune PostgreSQL's max_connections or reduce worker count.
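When the count runs high, breaking it down by state shows whether the connections are doing work or merely idling (a sketch using the same psql access as above):

```shell
# Group n8n's connections by state; many "idle in transaction" rows point
# at stuck executions rather than genuine load.
docker compose exec postgres psql -U n8n -d n8n -c \
  "SELECT state, count(*) FROM pg_stat_activity WHERE datname = 'n8n' GROUP BY state ORDER BY count DESC;"
```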

Why HA Matters More at Scale

Single-process n8n is one Docker container with one database. A hardware failure takes down one thing. Queue mode adds Redis, multiple workers, and heavier PostgreSQL usage. A hardware failure now takes down the entire distributed pipeline simultaneously — main process, all workers, Redis, and PostgreSQL all stop at once.

Recovery is also more complex. On a standard VPS, after a hardware event, the provider migrates your disk to new hardware and boots the VM. Docker starts, and your docker-compose stack comes back up. But queue mode containers have startup dependencies: PostgreSQL must be healthy before n8n processes connect, Redis must be healthy before workers start pulling jobs. If these health checks fail because the database is recovering or Redis is still loading its persisted data, containers enter restart loops that can add minutes to your recovery time.

On a Proxmox HA cluster like MassiveGRID's infrastructure, the failure scenario plays out differently. The HA manager detects the node failure within seconds and restarts your VM on a healthy node. Because the underlying storage is Ceph — a distributed storage system that replicates data 3 times across independent storage nodes — PostgreSQL's data files are intact and consistent on the new node. There is no disk migration step. The VM boots, Docker starts, health checks pass, and your queue mode stack resumes processing.

The practical difference: minutes of downtime and possible data recovery issues versus seconds of downtime with data intact. For a queue mode deployment processing hundreds or thousands of daily executions, those minutes translate to a backlog of webhook retries, stale scheduled triggers, and execution state that may need manual reconciliation.

Info

For hobby automation or low-volume personal workflows, single-process mode on any reliable VPS is perfectly adequate. Queue mode with HA becomes a genuine operational requirement when your business processes depend on uninterrupted execution — financial webhooks, customer onboarding sequences, inventory sync, or anything where silent failure costs real money.

When to Upgrade

Queue mode does not eliminate resource constraints — it distributes them. Eventually your VPS will hit a ceiling. The signals are the ones the monitoring commands above surface: a queue depth that grows faster than it drains, workers pinned near the top of their memory range, Redis bumping against its maxmemory cap, and PostgreSQL connection counts creeping toward max_connections.

MassiveGRID's configurator lets you scale each resource independently. A typical upgrade path for queue mode:

  1. Start: 4 vCPU / 8 GB / 128 GB SSD ($19.16/mo) — main process + 2–3 workers
  2. Add workers: 4 vCPU / 16 GB / 128 GB SSD ($25.56/mo) — scale to 5–6 workers without touching CPU
  3. Add CPU: 8 vCPU / 16 GB / 256 GB SSD ($38.32/mo) — for CPU-bound AI workflows or heavy data transformations

This incremental approach prevents the common pattern of jumping two tiers at once to a configuration that's oversized and overpriced for your actual workload. Monitor first, identify the specific bottleneck, and add only the resource you need.

Next Steps

Queue mode is one piece of a production n8n deployment. For the complete picture, start with our initial self-hosting guide, which covers the base deployment this article builds on.