If you run n8n in production, silent workflow failures are one of the biggest risks. A webhook can be unreachable, a scheduled cron can stop firing, and a queue worker can be starved of memory long before anyone notices. Uptime Kuma is a self-hosted, open-source status page and monitor that pairs beautifully with n8n. In this guide we configure Uptime Kuma to watch every layer of an n8n deployment, from the public webhook URL down to the worker node's internal metrics endpoint.

Why Uptime Kuma for n8n

Uptime Kuma gives you HTTP(S), keyword, JSON, TCP, DNS, push, and gRPC monitors out of the box, plus a public status page, incident history, and alerting to Slack, email, Telegram, Matrix, webhooks, and more. It runs in a single Docker container and stores data in SQLite, which keeps operational overhead minimal. Combined with n8n's queue mode and webhook architecture, you can build a monitoring layer that catches the vast majority of real-world failures.

What to Monitor on an n8n Instance

A healthy n8n production instance has several components, and each one needs its own monitor:

- the public web UI and its /healthz endpoint
- the webhook entry point and response path
- the scheduler and the workers behind it
- PostgreSQL and Redis (when running in queue mode)
- every external API your workflows call

The sections below set up a monitor for each.

Installing Uptime Kuma

Run Uptime Kuma on a separate host from n8n, ideally in a different data center, so a full n8n outage does not also take down your monitoring. A minimal Docker Compose file:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
    environment:
      - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=false

Put it behind NGINX with TLS, create the admin user on first load, and you are ready to add monitors.
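Uptime Kuma's dashboard communicates over WebSockets (Socket.IO), so the reverse proxy must pass the upgrade headers through or the UI will fail to load. A minimal sketch of the NGINX server block, assuming a placeholder hostname of kuma.example.com and Let's Encrypt certificate paths:

```nginx
server {
    listen 443 ssl;
    server_name kuma.example.com;  # placeholder hostname

    # Placeholder certificate paths; adjust to your issuer's layout.
    ssl_certificate     /etc/letsencrypt/live/kuma.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kuma.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        # Required for the Socket.IO WebSocket upgrade.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```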

Monitor 1: Public Web UI and Healthcheck

Add an HTTP(s) - Keyword monitor pointing at https://n8n.example.com/healthz with a 30 second interval and a 10 second timeout. Set the keyword to ok so a 200 response with an unexpected body still alerts. Turn on the certificate expiry notification option to get advance warnings before the TLS certificate lapses.
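Under the hood, the keyword monitor is simply "status is 2xx AND the body contains the string". A minimal Python sketch of the same probe, useful for debugging a flapping monitor from the command line (the URL and keyword are whatever you configured above):

```python
import urllib.request


def keyword_probe(url: str, keyword: str, timeout: float = 10.0) -> bool:
    """Mimic Uptime Kuma's keyword check: the endpoint must answer
    with a 2xx status AND the body must contain the keyword."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return 200 <= resp.status < 300 and keyword in body
    except OSError:
        # Connection refused, DNS failure, timeout, non-2xx status, etc.
        return False
```

n8n's /healthz endpoint returns a small JSON body containing "ok", so the keyword check passes exactly when the instance is serving traffic.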

Monitor 2: Webhook Round Trip

The webhook round trip is the single highest-leverage check. Create an n8n workflow triggered by a webhook that immediately responds with a fixed payload. In Uptime Kuma, add an HTTP(s) - Json Query monitor that POSTs a known body to that webhook and asserts on the JSON response. This single check proves that the webhook entry point, the queue, at least one worker, and the response path are all functioning.

URL:        https://n8n.example.com/webhook/kuma-health
Method:     POST
Body:       {"ping":"kuma"}
Json Query: pong
Expected:   kuma
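The assertion Kuma performs can be reproduced in a few lines of Python, which is handy for verifying the n8n workflow before wiring up the monitor. The helper below is a sketch, not part of either tool:

```python
import json
import urllib.request


def webhook_roundtrip(url: str, timeout: float = 10.0) -> bool:
    """POST the known ping body and verify the pong field in the
    JSON response, the same assertion the Json Query monitor makes."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"ping": "kuma"}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            payload = json.loads(resp.read())
            return payload.get("pong") == "kuma"
    except (OSError, ValueError):
        # Network failure, non-2xx status, or a non-JSON body.
        return False
```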

Monitor 3: Scheduled Workflow Heartbeat (Push Monitor)

Uptime Kuma supports a "Push" monitor type. Kuma generates a unique push URL and expects a GET or POST to that URL at a defined interval. Inside n8n, create a scheduled workflow that runs every minute and ends with an HTTP Request node calling that push URL. If the n8n scheduler stalls, if the worker crashes, or if the host loses network, the push stops and Kuma raises an incident. This catches silent failures that HTTP checks cannot.
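Push URLs follow the pattern /api/push/<token>, where the token is generated when you create the monitor. A small Python sketch of the URL the workflow's final HTTP Request node should call; the base URL and token below are placeholders, not values from this setup:

```python
import urllib.parse
import urllib.request

# Placeholders: your Kuma base URL and the token shown when the
# Push monitor is created.
KUMA_BASE = "https://kuma.example.com"
PUSH_TOKEN = "REPLACE_ME"


def build_push_url(base: str, token: str, msg: str = "OK") -> str:
    """Uptime Kuma push endpoint: GET /api/push/<token>?status=up&msg=..."""
    query = urllib.parse.urlencode({"status": "up", "msg": msg})
    return f"{base}/api/push/{token}?{query}"


def send_heartbeat() -> None:
    """What the workflow's final HTTP Request node does: one GET per run."""
    urllib.request.urlopen(build_push_url(KUMA_BASE, PUSH_TOKEN), timeout=5).read()
```

In n8n itself you need no code at all: point the HTTP Request node at the same URL with method GET.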

Monitor 4: Database and Queue

Add TCP monitors for PostgreSQL and Redis. If your topology uses private networking, run an Uptime Kuma agent on the same subnet or add a lightweight bastion. A TCP monitor with a 60 second interval is enough to catch most outages without generating noise.
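A TCP monitor succeeds if it can complete a handshake on the target port, nothing more. The equivalent check in Python, useful for verifying reachability from the Kuma host's subnet before creating the monitor:

```python
import socket


def tcp_up(host: str, port: int, timeout: float = 5.0) -> bool:
    """What a Kuma TCP monitor effectively does: attempt a TCP
    handshake and report whether it completed within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, tcp_up("10.0.0.5", 5432) for PostgreSQL and tcp_up("10.0.0.6", 6379) for Redis, substituting your private addresses.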

Monitor 5: External Dependencies

For every third party API your workflows call, add a keyword or JSON monitor hitting a lightweight public endpoint. You will often discover that your workflow's true availability is the product of all its dependencies' SLAs, capped by the weakest one, and Uptime Kuma's status page makes that fact visible.
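Assuming independent failures, a workflow that needs every dependency up at once has the product of their individual availabilities, which always sits at or below the weakest SLA. A quick Python sketch with made-up numbers:

```python
def composite_availability(slas: dict[str, float]) -> float:
    """Availability of a chain that needs every dependency up at
    once: the product of the individual availabilities
    (independent failures assumed)."""
    total = 1.0
    for availability in slas.values():
        total *= availability
    return total


# Hypothetical numbers: n8n itself plus two third-party APIs.
deps = {"n8n": 0.999, "crm_api": 0.995, "mail_api": 0.999}
print(f"{composite_availability(deps):.5f}")  # healthy-looking SLAs compound downward
```

Three services that each look fine on their own already pull the chain below 99.5 percent, which is exactly the effect a grouped status page makes visible.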

Alerting Strategy

Route alerts by severity:

- Core platform monitors (UI, webhook round trip, scheduler heartbeat) should page the on-call engineer immediately, for example via Telegram or a paging webhook.
- Database and queue monitors can go to the team's Slack channel.
- External dependency monitors belong in a low-urgency channel or email digest, since you usually cannot fix them yourself.

Deduplicate with the "Resend notification" and "Retries" settings so a single five-second blip does not wake anyone up.

Public Status Page

Uptime Kuma can publish a public status page that groups monitors logically. For n8n we recommend grouping by: Core Platform (UI, webhooks, scheduler), Dependencies (database, queue), and Integrations (each external API). Share the URL with internal stakeholders so non-technical teams can self-serve when something looks slow.

Hosting the Monitoring Stack

Run Uptime Kuma on a small, isolated cloud server in a different region from your n8n nodes. If your n8n deployment is multi-region, consider running two Uptime Kuma instances in active-passive mode. For n8n itself, see our managed n8n hosting for production-ready infrastructure, and read our companion guide on GDPR-compliant EU n8n hosting.

Need help designing an end-to-end observability stack around n8n? Contact us for an architecture review.

Published by the MassiveGRID team, specialists in n8n workflow automation and self-hosted observability.