Search “best VPS for n8n” and you’ll find the same article recycled across a dozen review sites. They benchmark Time to First Byte. They measure PHP requests per second. They score providers on how fast WordPress renders a homepage. Then they rank those providers and tell you to pick one for your automation stack.

This is like evaluating delivery trucks by their paint quality. The metric is real. It’s measurable. It’s also almost completely irrelevant to the job you’re hiring the vehicle to do.

We’ve watched this disconnect grow for years. Automation workloads — n8n, Make-style self-hosted alternatives, Node-RED, custom ETL pipelines — have fundamentally different infrastructure requirements than websites. The VPS review industry hasn’t caught up. Here’s why that matters and what to measure instead.

The Mismatch: Websites vs. Automation Stacks

A website is stateless, bursty, and read-heavy. A visitor requests a page. The server compiles the template, queries the database (mostly SELECT statements), and returns HTML. The CPU spikes for 50–200 milliseconds, then goes idle. The storage pattern is almost entirely sequential reads — loading cached assets and static files, with PHP's opcode cache serving precompiled scripts straight from memory. Between requests, the server does essentially nothing.

An automation stack is the opposite in every dimension. n8n runs persistent Node.js processes 24 hours a day. Its PostgreSQL backend doesn’t just read — it writes continuously. Every workflow execution logs parameters, timestamps, output data, and error states to the execution_entity table. Every credential check, every webhook receipt, every scheduled trigger generates write operations. The I/O pattern is dominated by 4K random writes, not sequential reads.
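You can see this accumulation directly on any running instance. A minimal sketch, assuming n8n's default table name (execution_entity) and a reachable PostgreSQL — the connection flags are placeholders to adjust for your setup:

```shell
# Hedged sketch: how big is your execution history right now?
# -U/-d are assumptions — point them at your own n8n database.
HISTORY_SQL="SELECT pg_size_pretty(pg_total_relation_size('execution_entity')) AS history_size,
       count(*) AS executions
FROM execution_entity;"
if command -v psql >/dev/null; then
  psql -U n8n -d n8n -c "$HISTORY_SQL"
else
  echo "run this where psql can reach the n8n database"
fi
```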

Docker adds another layer. n8n runs inside containers that maintain persistent volumes, write overlay filesystem layers, and generate logging I/O. If you’re running n8n alongside PostgreSQL (which most self-hosted setups do), you have two containerized services competing for disk writes on the same volume.

The CPU profile is equally different. Websites spike and idle. Automation stacks sustain. A production n8n instance processing business workflows doesn’t burst to 90% for milliseconds then drop to zero. It runs at 30–60% utilization around the clock, with spikes during heavy execution batches. The “burstable” CPU model that works brilliantly for WordPress — where you borrow compute for a fraction of a second and return it — actively penalizes automation workloads that need consistent throughput over minutes and hours.

The Core Problem

VPS review sites test the workload pattern of a website visitor loading a page. n8n’s workload pattern is a database server processing transactions 24/7. These require fundamentally different infrastructure characteristics, and the reviews test only one of them.

What Actually Matters for Automation

If you’re evaluating VPS providers for an n8n deployment — or any persistent automation workload — here are the five criteria that VPS review sites consistently ignore. Every single one of these matters more than TTFB for your use case.

1. Sustained CPU Performance

Not burst benchmarks. Not “up to 3.5 GHz.” Sustained, continuous CPU throughput over minutes, not milliseconds. Most “burstable” VPS tiers give you a CPU credit system: you accumulate credits while idle and spend them during spikes. For a website, you never exhaust credits because each request uses a sliver of CPU. For n8n processing a queue of 500 webhook events, you burn through your credit pool in under a minute and then get throttled to a fraction of the advertised clock speed. Your workflows don’t fail — they slow down by 3–5x, and you won’t notice until execution timeouts start cascading.

2. Write IOPS

PostgreSQL execution history is write-heavy by design. Every completed workflow writes execution metadata. If you’re running 50 workflows with an average of 20 executions per hour, that’s 1,000 database writes per hour before counting index updates, WAL (Write-Ahead Log) entries, and VACUUM operations. Docker layer writes compound this — container logs, overlay filesystem updates, volume syncs. Review sites measure read throughput because that’s what WordPress needs. They almost never measure sustained 4K random write IOPS, which is what PostgreSQL under Docker load actually generates.
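The arithmetic above is worth rerunning with your own numbers; a trivial sketch (the counts are the article's example figures, not measurements):

```shell
# Back-of-envelope write volume, before WAL, index, and VACUUM amplification.
WORKFLOWS=50          # example figure from the text
EXECS_PER_HOUR=20     # average executions per workflow per hour
BASE_WRITES=$((WORKFLOWS * EXECS_PER_HOUR))
echo "base execution-record writes/hour: $BASE_WRITES"
```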

3. Memory Stability

Some providers advertise “4 GB RAM” but allocate from a shared pool with memory ballooning. Under pressure from neighboring tenants, your available memory contracts silently. The Linux OOM killer doesn’t politely ask n8n to reduce its memory footprint — it terminates processes. If it kills PostgreSQL mid-transaction, you lose execution data. If it kills the n8n worker process, in-flight workflows vanish without error logging. You need guaranteed, non-reclaimable memory. Not “up to 4 GB.” Exactly 4 GB, always.
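You can check whether the OOM killer has already been at work on a box before trusting it with production workflows; a minimal sketch (kernel log locations vary by distro, so this tries both common sources):

```shell
# Count kernel OOM-killer events; works with or without systemd.
# May double-count when both journalctl and dmesg are available — this
# is a smoke test, not an audit.
OOM_EVENTS=$( { journalctl -k 2>/dev/null; dmesg 2>/dev/null; } \
              | grep -ci "out of memory" || true )
echo "OOM-killer events seen in kernel logs: ${OOM_EVENTS:-0}"
```

Anything above zero on a freshly provisioned VPS is a strong signal about how the provider allocates memory.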

4. Uptime During Maintenance

This is the metric almost every provider obscures. The SLA says 99.9% uptime — but read the fine print. Scheduled maintenance windows are typically excluded from SLA calculations. A provider can take your VPS offline for 30 minutes every month for host node patches and still claim 99.9% uptime. For a website, a 2 AM maintenance window is invisible to users. For n8n, a 30-minute outage at 2 AM means 30 minutes of missed cron triggers, dropped webhooks, and broken workflow chains. Your automation doesn’t sleep. Infrastructure that supports live migration — moving your VM to a different physical host without downtime — eliminates this category of failure entirely.

5. Independent Storage Scaling

Execution history grows. n8n’s PostgreSQL database accumulates data linearly with workflow volume. After six months of production use, we routinely see databases at 5–15 GB for moderate workloads and 30–50 GB for heavy automation. Most VPS providers bundle resources: to get more storage, you must upgrade to a tier that also increases CPU and RAM. This forces you to pay for compute you don’t need just because your execution history outgrew the disk. Independent resource scaling — adding 50 GB of storage without touching your CPU or memory allocation — is the difference between a $9/month storage upgrade and a $30/month tier jump.
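Before paying for more disk, it's worth checking where the existing disk went. A quick sketch, assuming the Docker-based deployment the article describes throughout (the volume name in the comment is a placeholder, not a real path):

```shell
# Summarize Docker's disk footprint: images, containers, and — the part
# that grows with execution history — volumes.
DOCKER_DF=$(docker system df 2>/dev/null || echo "docker not reachable from this shell")
echo "$DOCKER_DF"
# For the Postgres volume itself (name is an assumption — check `docker volume ls`):
# du -sh /var/lib/docker/volumes/<postgres-volume>/_data
```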

A Fair Evaluation Framework

We’re not proposing anything radical here. We’re proposing that if you’re evaluating a VPS for automation, you test it under automation conditions. Here are concrete benchmarks you can run on any provider’s trial or smallest tier before committing to production.

Sustained CPU: sysbench for 60 Seconds

Most review benchmarks run CPU tests for 5–10 seconds. That’s long enough to capture burst performance and short enough to never trigger throttling. Run it for 60 seconds instead:

sysbench cpu --cpu-max-prime=20000 --time=60 --threads=2 --report-interval=10 run

Watch the events-per-second output at 10-second intervals. On dedicated CPU, it stays flat. On burstable CPU, you’ll see it decline by 20–40% after the first 15–20 seconds as your burst credits deplete. If the number drops, your n8n workflows will experience the same slowdown under sustained load.

PostgreSQL Write Performance: pgbench

This is the single most important benchmark for n8n infrastructure, and almost no review site runs it. Initialize a test database and run a write-heavy TPC-B workload:

createdb testdb
pgbench -i -s 10 testdb
pgbench -c 10 -j 2 -T 300 -P 10 testdb

Ten concurrent clients, two threads, five minutes of continuous transactions. This simulates the sustained write pressure of a production n8n instance with moderate workflow volume. Watch the TPS (transactions per second) output. If TPS drops below 200 after the first 60 seconds, you’re on burstable I/O or burstable CPU — either way, your execution history writes will bottleneck under real load. A healthy dedicated VPS with NVMe storage should sustain 400+ TPS for the full duration without degradation.

I/O Under Docker Load

Run fio to measure random write IOPS while a Docker build is running in the background. This simulates the real contention pattern — PostgreSQL writing execution data while Docker writes container layers:

# Terminal 1: Docker build (any multi-stage build)
docker build -t test-build .

# Terminal 2: Simultaneous I/O test
fio --name=randwrite --ioengine=libaio --rw=randwrite \
    --bs=4k --numjobs=4 --size=256M --runtime=60 \
    --time_based --group_reporting --direct=1

On shared storage, IOPS collapses during concurrent Docker builds. On dedicated NVMe, it stays within 10–15% of baseline. The difference determines whether your workflow executions stall when you deploy an updated n8n version.

Uptime Including Maintenance

This one you can’t benchmark in a trial period — you have to ask directly. Send a pre-sales ticket to any provider you’re evaluating: “Does your SLA exclude scheduled maintenance windows? How do you handle host node kernel patches — reboot with downtime, or live migration?” The answer tells you everything about whether your automation will survive the provider’s operational calendar. Providers that support live migration will say so clearly. Providers that don’t will talk about “minimal downtime” and “advance notification.”

The Quick Test

Run pgbench -c 10 -j 2 -T 300 on your VPS. If TPS drops below 200 after 60 seconds, you have burstable resources that will throttle your automation under production load. This single test tells you more about n8n suitability than any TTFB benchmark.

What We See from the Datacenter Side

We operate the infrastructure that n8n instances run on. We see the resource consumption patterns at the hypervisor level, across hundreds of workloads, and the data tells a clear story about why the website-hosting mental model fails for automation.

Utilization Patterns

Web hosting workloads on our platform show a characteristic sawtooth pattern: CPU spikes to 70–90% for 50–200 milliseconds per request, then drops to near zero between requests. Average utilization over a 24-hour period is typically 3–8%. Automation workloads look completely different. n8n instances with active workflows consistently use 40–60% of allocated CPU around the clock, with periodic spikes to 80–95% during heavy execution batches. The “idle” periods that make burstable pricing work for websites simply don’t exist for automation.

This is why burstable VPS tiers are genuinely a bad deal for n8n, even when the base price looks attractive. You’re paying for a compute model designed around the assumption that your workload is idle most of the time. Automation workloads violate that assumption by design.
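You can check which pattern your own instance follows with nothing more than vmstat. A quick sketch — ten one-second samples here for brevity; stretch it to something like `vmstat 60 60` for an hour-long picture while workflows run:

```shell
# Summarize CPU busy % (100 - idle) across short samples.
# Note: vmstat's first sample is the since-boot average.
CPU_SUMMARY=$(vmstat 1 10 | awk 'NR > 2 { busy = 100 - $15; sum += busy; n++;
                                          if (busy > max) max = busy }
                                 END { printf "avg %.1f%%, peak %.0f%%, %d samples",
                                       sum / n, max, n }')
echo "$CPU_SUMMARY"
```

A large gap between average and peak is the sawtooth of a web workload; a high, flat average is the sustained profile described above.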

Storage I/O Patterns

n8n with PostgreSQL generates constant 4K random writes. This is the opposite of web hosting’s sequential read pattern, and it exposes infrastructure choices that are invisible under read-heavy benchmarks. We see automation workloads generating 50–200 write IOPS sustained, with spikes to 500+ during execution bursts. On shared spinning-disk storage (still common at budget providers), these writes queue behind other tenants’ I/O and latency climbs from sub-millisecond to 10–50ms. On NVMe with dedicated IOPS allocation, write latency stays below 0.5ms regardless of platform load.

The practical impact: a workflow that processes 100 webhook events in sequence takes 8 seconds on NVMe and 45 seconds on congested shared storage. Same CPU, same RAM, same code. The difference is entirely storage I/O.
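That latency gap is directly observable with iostat from the sysstat package (assumed installed): the `w_await` column is the average time, in milliseconds, a write spends queued plus being serviced. A minimal sketch:

```shell
# Three one-second extended samples; in production, stretch the interval
# (e.g. `iostat -x 10`) and watch w_await while workflows execute.
IOSTAT_OUT=$(iostat -x 1 3 2>/dev/null || echo "install sysstat to get iostat")
echo "$IOSTAT_OUT"
```

Sub-millisecond `w_await` under load is what healthy NVMe looks like; double-digit values are the queueing behavior described above.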

How This Influenced Our Infrastructure

These patterns are why we built our VPS and Dedicated VPS tiers the way we did. Ceph distributed storage is tuned for small random write workloads — the write pattern that PostgreSQL and Docker actually generate, not the read pattern that web hosting benchmarks test. Proxmox HA clustering provides live migration for always-on workloads, so host node maintenance doesn’t become an automation outage. Independent resource scaling exists specifically because automation resources don’t grow in bundles — your storage grows with execution history while CPU stays flat, so forcing a full-tier upgrade to get more disk space is wasteful.

We want to be honest about this: for a personal blog or portfolio site, any $5/month VPS with good TTFB scores is genuinely fine. The review sites are right for that use case. The problem is when those same rankings get applied to workloads with completely different infrastructure requirements — and automation is the clearest example of that mismatch.

Infrastructure Built for Automation Workloads

Dedicated CPU, NVMe storage, Proxmox HA failover, Ceph 3x replication.

Recommended: 2 vCPU / 4 GB RAM / 64 GB SSD — $9.58/mo
Configure Your VPS →