Every monitoring SaaS charges you per monitor. Five monitors free, then $7/month for 20, $29/month for 100. By the time you're monitoring 50 endpoints — websites, APIs, databases, mail servers — you're paying hundreds per year for what amounts to automated HTTP requests and ping commands. Uptime Kuma is a self-hosted, open-source monitoring tool that gives you unlimited monitors, unlimited notifications, customizable status pages, and complete data ownership — all running on a single lightweight container.
But there's one critical rule that most guides skip over: you must deploy your monitoring on a separate server from what it monitors. If the server running your application goes down, and your monitoring tool is on that same server, you'll never get the alert. This is the single most important architectural decision in self-hosted monitoring, and we'll address it thoroughly in this guide.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Why Self-Host Your Monitoring
Beyond cost savings, self-hosting your monitoring tool gives you several advantages that SaaS solutions cannot match:
- Unlimited monitors — no artificial limits. Monitor 5 endpoints or 500, the cost is the same.
- No data sharing — your uptime data, response times, and infrastructure topology stay on your server. SaaS providers know every URL and IP you monitor.
- Custom check intervals — most free tiers restrict you to 5-minute intervals. With Uptime Kuma, you can check every 20 seconds if you need to.
- Internal monitoring — monitor private IPs, internal services, and database connections that external SaaS tools can't reach.
- No vendor lock-in — your monitoring configuration, history, and status pages are yours. Export everything, migrate anytime.
Uptime Kuma specifically stands out because of its clean, modern interface and its extensive notification options — over 90 notification services supported, from email and Slack to Telegram, Discord, PagerDuty, and custom webhooks.
The Golden Rule: Deploy on a Separate Server
This cannot be emphasized enough. If you take away one thing from this entire guide, let it be this:
Your monitoring server must be independent of the infrastructure it monitors. If your application server goes down and takes your monitoring with it, you've built an alarm system that fails exactly when you need it most.
The correct architecture:
- Server A — your application server (web apps, databases, APIs)
- Server B — your Uptime Kuma instance (separate VPS, ideally in a different datacenter)
Server B monitors Server A from the outside. When Server A becomes unreachable, Server B detects the failure and sends you an alert through external channels (email, Telegram, webhook) that don't depend on Server A.
A dedicated monitoring VPS costs under $5/month and solves this fundamental problem completely. It's one of the highest-value investments in your infrastructure.
Dedicated monitoring instance: Deploy Uptime Kuma on a separate Cloud VPS — a dedicated monitoring instance costs under $5/mo and solves the fundamental problem of monitoring independence. Choose a different datacenter location from your application servers for maximum resilience.
Prerequisites
On your separate monitoring VPS (not your application server), you need:
- Ubuntu 24.04 LTS — see our Ubuntu VPS setup guide
- Docker and Docker Compose installed — follow our Docker installation guide
- A domain or subdomain pointed to this VPS (e.g., status.example.com)
- Basic security hardening applied — see our security hardening guide
Verify Docker is operational:
docker --version
docker compose version
sudo systemctl status docker
Docker Compose Setup for Uptime Kuma
Create a directory for the Uptime Kuma deployment:
sudo mkdir -p /opt/uptime-kuma
sudo nano /opt/uptime-kuma/docker-compose.yml
Add the following Docker Compose configuration:
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=UTC
    healthcheck:
      test: ["CMD-SHELL", "node -e \"const http = require('http'); const options = { host: 'localhost', port: 3001, timeout: 2000 }; const req = http.request(options, (res) => { process.exit(res.statusCode === 200 ? 0 : 1) }); req.on('error', () => process.exit(1)); req.end();\""]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  uptime-kuma-data:
    driver: local
Key configuration details:
- 127.0.0.1:3001:3001 — binds to localhost only (Nginx will handle external access)
- /var/run/docker.sock:/var/run/docker.sock:ro — read-only Docker socket access, which enables monitoring of Docker containers on the same host
- The healthcheck marks the container unhealthy if the Uptime Kuma process stops responding. Note that plain Docker only reports this status; restart: always recovers crashes, but automatically restarting on an unhealthy status requires an external tool such as autoheal
- The image tag :1 tracks the latest stable v1.x release
Deploy the container:
cd /opt/uptime-kuma
sudo docker compose up -d
Verify it's running:
sudo docker compose logs -f uptime-kuma
You should see output indicating the server is running on port 3001. Press Ctrl+C to exit the log stream.
Test local access:
curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:3001
A response of 200 confirms Uptime Kuma is running correctly.
Nginx Reverse Proxy with SSL and WebSocket Support
Uptime Kuma uses WebSockets extensively for its real-time dashboard updates. The Nginx configuration must handle WebSocket connections properly. If you don't have Nginx installed, follow our Nginx reverse proxy guide first.
Create the Nginx configuration:
sudo nano /etc/nginx/sites-available/status.example.com
Add this configuration (replace status.example.com with your domain):
upstream uptime_kuma {
    server 127.0.0.1:3001;
    keepalive 64;
}

server {
    listen 80;
    server_name status.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name status.example.com;

    ssl_certificate /etc/letsencrypt/live/status.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/status.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        proxy_pass http://uptime_kuma;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (critical for Uptime Kuma)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        # Timeouts for WebSocket connections
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;

        # Buffering settings
        proxy_buffering off;
        proxy_cache off;
    }
}

# WebSocket connection upgrade map (place outside server block,
# or in /etc/nginx/conf.d/websocket.conf)
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
The map directive at the bottom should technically be placed in the http context, not inside a server block. The cleanest approach is to put it in a separate file:
sudo nano /etc/nginx/conf.d/websocket-upgrade.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
Then remove the map block from the site configuration. Enable the site and obtain the SSL certificate (see our Let's Encrypt guide for details). Note that Nginx cannot load the site until the certificate files it references exist; if certbot or nginx -t fails for that reason, obtain the certificate first (for example with sudo certbot certonly) and then re-run the test and reload:
sudo ln -s /etc/nginx/sites-available/status.example.com /etc/nginx/sites-enabled/
sudo certbot --nginx -d status.example.com
sudo nginx -t
sudo systemctl reload nginx
Access Uptime Kuma at https://status.example.com.
Initial Setup — Admin Account and Two-Factor Authentication
On first access, Uptime Kuma presents a setup screen. Create your admin account:
- Enter a username
- Enter a strong password (use a password manager — this protects access to your entire monitoring infrastructure)
- Click "Create"
Immediately after logging in, enable two-factor authentication:
- Click the user icon (top-right) → Settings → Security
- Click "Enable 2FA"
- Scan the QR code with your authenticator app (Google Authenticator, Authy, or Bitwarden)
- Enter the verification code to confirm
- Save the backup recovery tokens in a secure location
Important: Uptime Kuma is a single-user application. One admin account manages everything; there are no roles or additional user accounts. If multiple people need read-only access, use the public status page feature described later.
Adding Monitors
Click the "Add New Monitor" button in the dashboard. Uptime Kuma supports numerous monitor types. Here are the most commonly used ones:
HTTP(S) Monitor
The most common monitor type. Configure it like this:
- Monitor Type: HTTP(s)
- Friendly Name: Production Website
- URL: https://www.example.com
- Heartbeat Interval: 60 seconds (how often to check)
- Retries: 3 (avoid false positives from transient network issues)
- Accepted Status Codes: 200-299 (default)
Advanced HTTP options include:
- Max Redirects: how many 301/302 redirects to follow
- Certificate Expiry Notification: alert X days before SSL expires
- Authentication: basic auth or custom headers for protected endpoints
- Request Body: for monitoring POST endpoints or APIs
Keyword Monitor
Goes beyond status code checking — verifies that specific content appears on the page:
- Monitor Type: HTTP(s) - Keyword
- URL: https://www.example.com
- Keyword: Welcome to Example
This catches situations where a server returns 200 OK but displays an error page, a maintenance page, or content from a misconfigured reverse proxy.
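The logic is simple enough to sketch. A keyword monitor passes only when both conditions hold at once; here is a minimal Python illustration of the idea (not Uptime Kuma's actual code):

```python
def keyword_ok(status_code: int, body: str, keyword: str) -> bool:
    """A keyword monitor effectively requires two things at once:
    a successful status code AND the expected text in the body."""
    return 200 <= status_code < 300 and keyword in body

# A maintenance page served with 200 OK passes a plain HTTP monitor
# but fails the keyword check:
print(keyword_ok(200, "<h1>Down for maintenance</h1>", "Welcome to Example"))  # False
```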
TCP Port Monitor
Checks whether a TCP port is accepting connections:
- Monitor Type: TCP Port
- Hostname: db.example.com
- Port: 3306
Use this for monitoring databases (MySQL 3306, PostgreSQL 5432), mail servers (SMTP 587, IMAP 993), SSH (22), and other non-HTTP services.
Ping Monitor
ICMP ping to verify a host is reachable:
- Monitor Type: Ping
- Hostname: 203.0.113.10
Useful as a baseline connectivity check. If ping fails, the entire server is likely down.
DNS Monitor
Verifies DNS resolution returns expected results:
- Monitor Type: DNS
- Hostname: example.com
- DNS Resolver Server: 1.1.1.1
- Record Type: A
DNS monitoring catches domain expiry issues, DNS hijacking, and propagation problems that HTTP monitors alone cannot detect.
Docker Container Monitor
If you have Docker containers on the same host as Uptime Kuma (the monitoring server itself), you can monitor their status directly:
- Monitor Type: Docker Container
- Container Name / ID: uptime-kuma
- Docker Host: add the mounted socket (/var/run/docker.sock) under Settings → Docker Hosts
This is useful for monitoring containers running on the monitoring VPS itself, but remember — your main application containers are on a different server and should be monitored via HTTP, TCP, or ping.
Recommended Monitor Strategy
For a typical web application, set up this monitor stack:
| Monitor | Type | Interval | What It Catches |
|---|---|---|---|
| Website homepage | HTTP(s) Keyword | 60s | Website down, wrong content, SSL issues |
| API health endpoint | HTTP(s) | 30s | API failures, backend crashes |
| Server ping | Ping | 60s | Complete server outage, network failure |
| SSH port | TCP (port 22) | 120s | SSH daemon crash, firewall misconfiguration |
| Database port | TCP (port 5432) | 60s | Database server crash or overload |
| DNS resolution | DNS | 300s | DNS expiry, hijacking, propagation issues |
| SSL certificate | HTTP(s) | 3600s | Certificate expiry (set alert at 14 days) |
Configuring Notifications
Monitors are useless without notifications. Navigate to Settings → Notifications (or set them up per-monitor). Uptime Kuma supports over 90 notification services. Here are the most practical ones:
SMTP Email Notifications
The most universal notification method. Go to Settings → Notifications → Setup Notification:
- Notification Type: SMTP
- SMTP Host: smtp.gmail.com (or your mail provider)
- SMTP Port: 587
- Security: STARTTLS
- Username: your email address
- Password: app-specific password (not your account password)
- From Email: monitoring@example.com
- To Email: admin@example.com
For production monitoring, use a transactional email service (Mailgun, SendGrid, Amazon SES) rather than a personal Gmail account. If you're running your own mail server, see our Postfix and Dovecot guide.
Telegram Notifications
Telegram is excellent for monitoring alerts — instant delivery, works on mobile, supports groups:
- Message @BotFather on Telegram, create a new bot, and save the bot token
- Message your new bot, then visit https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates to find your chat ID
- In Uptime Kuma notification settings:
- Notification Type: Telegram
- Bot Token: your bot token
- Chat ID: your chat ID
Click "Test" to send a test message. You should receive it instantly on Telegram.
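If the test message never arrives, calling the Bot API directly and reading Telegram's error description usually pinpoints the problem (wrong token vs. wrong chat ID). A small stdlib sketch — the token and chat ID below are placeholders:

```python
import json
import urllib.parse
import urllib.request

API = "https://api.telegram.org"

def sendmessage_url(token: str) -> str:
    """Bot API endpoint for the sendMessage method."""
    return f"{API}/bot{token}/sendMessage"

def telegram_send(token: str, chat_id: str, text: str) -> dict:
    """POST a message and return Telegram's parsed JSON response;
    on failure the response carries a human-readable 'description'."""
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with urllib.request.urlopen(sendmessage_url(token), data=data, timeout=10) as resp:
        return json.load(resp)

# Usage (fill in real credentials):
# telegram_send("123456:ABC...", "987654321", "Uptime Kuma test message")
```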
Discord Webhook
For teams already using Discord:
- In your Discord server, go to Channel Settings → Integrations → Webhooks → New Webhook
- Copy the webhook URL
- In Uptime Kuma:
- Notification Type: Discord
- Webhook URL: paste the URL
- Bot Display Name: Uptime Kuma
Slack Webhook
Similar to Discord:
- Create an Incoming Webhook in your Slack workspace (via Slack App Directory)
- In Uptime Kuma:
- Notification Type: Slack
- Webhook URL: your Slack webhook URL
- Channel: #monitoring
Generic Webhook
For custom integrations, the webhook notification type sends a JSON payload to any URL:
- Notification Type: Webhook
- URL: https://your-api.example.com/webhook/uptime
- Request Body: Uptime Kuma sends a JSON payload with monitor details, status, and message
The webhook payload includes the monitor name, status (up/down), message, timestamp, and previous status — everything you need to build custom alerting workflows.
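On the receiving side, a custom endpoint just has to parse that JSON and act on it. The sketch below turns a payload into a one-line alert; the exact field names (monitor.name, heartbeat.status, msg) are assumptions based on the payload description above, so verify them against a real test notification before relying on them:

```python
def summarize_alert(payload: dict) -> str:
    """Turn an Uptime Kuma-style webhook payload into a one-line alert.

    Field names here (monitor.name, heartbeat.status, msg) are assumed,
    not a guaranteed schema -- send a test notification and inspect the
    actual body before building on this.
    """
    monitor = payload.get("monitor") or {}
    heartbeat = payload.get("heartbeat") or {}
    name = monitor.get("name", "unknown monitor")
    status = {0: "DOWN", 1: "UP"}.get(heartbeat.get("status"), "UNKNOWN")
    msg = payload.get("msg", "")
    return f"[{status}] {name}: {msg}"

sample = {
    "monitor": {"name": "Production Website"},
    "heartbeat": {"status": 0},
    "msg": "timeout of 48000ms exceeded",
}
print(summarize_alert(sample))  # [DOWN] Production Website: timeout of 48000ms exceeded
```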
Notification Best Practices
Configure at least two notification channels (e.g., email + Telegram). If one fails (email server down, Telegram API issue), the other still delivers. Set the "Default enabled" toggle so every new monitor automatically uses your notification channels.
Creating Public Status Pages
Uptime Kuma's status page feature lets you create professional, public-facing status dashboards for your clients or users. Navigate to Status Pages → New Status Page.
Configure your status page:
- Name: "Example Service Status"
- Slug: example (accessible at https://status.example.com/status/example)
- Description: a brief note about the page
- Add groups — organize monitors into logical categories (e.g., "Website," "API," "Email")
- Add monitors to groups — drag monitors from the sidebar into groups
- Show/hide powered by — toggle the "Powered by Uptime Kuma" footer
Status pages are fully public and don't require authentication, making them ideal for:
- Client communication during incidents
- Public transparency for SaaS products
- Internal team dashboards for service health
You can create multiple status pages with different monitors for different audiences — a public page showing customer-facing services and an internal page showing infrastructure components.
Custom Domain for Status Page
If you want the status page on its own domain (e.g., status.example.com shows the status page directly), set the status page slug as the "Entry Page" in Uptime Kuma settings. This makes the status page the default landing page instead of the login screen.
Monitoring Intervals and Alert Fatigue Management
One of the biggest problems with self-hosted monitoring is alert fatigue — getting so many notifications that you start ignoring them. Here's how to configure Uptime Kuma to send meaningful alerts.
Choosing Check Intervals
Shorter intervals detect outages faster but increase resource usage and can trigger false positives from momentary network blips:
| Service Type | Recommended Interval | Rationale |
|---|---|---|
| Critical website/API | 30-60 seconds | Fast detection for customer-facing services |
| Internal tools | 60-120 seconds | Important but less time-sensitive |
| DNS records | 300-600 seconds | DNS changes are rare; frequent checks waste resources |
| SSL certificate expiry | 3600+ seconds | Only needs daily or hourly checks |
| Backup server connectivity | 120-300 seconds | Non-critical but worth monitoring |
Retry Configuration
Always set retries to at least 2-3. A single failed check often means a transient network issue, not an actual outage. With 3 retries, a 60-second heartbeat, and the retry interval left equal to the heartbeat, Uptime Kuma will:
- Detect the first failure at T+0
- Retry at T+60
- Retry at T+120
- Retry at T+180
- Only alert after the 3rd consecutive failure (at T+180)
This means you'll know about real outages within 3 minutes while filtering out transient blips.
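The arithmetic generalizes. A tiny sketch, assuming (as above) one retry per heartbeat interval:

```python
def worst_case_alert_delay(heartbeat_s: int, retries: int) -> int:
    """Seconds from the first failed check until the alert fires,
    when each retry happens one heartbeat interval after the last."""
    return heartbeat_s * retries

print(worst_case_alert_delay(60, 3))  # 180 -> alert about 3 minutes in
print(worst_case_alert_delay(30, 5))  # 150 -> faster checks tolerate more retries
```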
Resend Interval
Set the "Resend Notification" interval to avoid repeated alerts for the same ongoing outage. A resend interval of 0 means "notify once" — you get one alert when the service goes down and one when it comes back up. Set it to a non-zero value (e.g., 300 seconds) if you want periodic reminders during prolonged outages.
Resource consideration: Monitoring 50+ endpoints with 20-second intervals becomes resource-intensive. Each check involves DNS resolution, TCP connection, SSL handshake, and HTTP request processing. Dedicated VPS resources ensure consistent monitoring performance without competing with other tenants for CPU cycles.
Monitor Groups and Tags
As your monitor count grows, organization becomes essential. Uptime Kuma provides two organizational tools:
Tags
Tags are color-coded labels you can assign to monitors. Create tags for:
- Environment: production (red), staging (yellow), development (green)
- Client: client-a (blue), client-b (purple)
- Service type: website, api, database, mail
Tags help you filter the dashboard and quickly identify which monitors belong to which category.
Groups
On the dashboard, drag monitors to reorder them and use groups (on status pages) to organize monitors logically. The main dashboard sorts by name by default, but you can reorder manually for priority-based viewing.
Maintenance Windows
When performing planned maintenance, use Uptime Kuma's maintenance feature to suppress alerts and show maintenance status on public status pages:
- Go to Maintenance → Add Maintenance
- Set the title (e.g., "Scheduled Database Migration")
- Choose the schedule: one-time, recurring daily, or cron expression
- Select affected monitors
- Set start and end times
During active maintenance windows, affected monitors won't trigger notifications, and their status pages will display a maintenance banner instead of showing them as "down."
Backing Up Uptime Kuma Data
Uptime Kuma stores all its data in a SQLite database inside the Docker volume. This includes all monitor configurations, notification settings, historical uptime data, and status page definitions.
Manual Backup
# Stop the container briefly
cd /opt/uptime-kuma
sudo docker compose stop uptime-kuma
# Back up the data volume
sudo docker run --rm \
-v uptime-kuma_uptime-kuma-data:/data \
-v /opt/backups:/backup \
ubuntu:24.04 \
tar czf /backup/uptime-kuma-backup-$(date +%Y%m%d-%H%M).tar.gz -C /data .
# Restart
sudo docker compose start uptime-kuma
Automated Backup Script
Create a cron-based backup script:
sudo nano /opt/uptime-kuma/backup.sh
#!/bin/bash
BACKUP_DIR="/opt/backups/uptime-kuma"
TIMESTAMP=$(date +%Y%m%d-%H%M)
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"
# Copy the SQLite database without stopping the container.
# Uptime Kuma runs SQLite in WAL mode, so the main file stays consistent
# during a copy; the most recent writes may still live in the -wal file,
# though, so copy that alongside it when it exists.
sudo docker cp uptime-kuma:/app/data/kuma.db "$BACKUP_DIR/kuma-$TIMESTAMP.db"
sudo docker cp uptime-kuma:/app/data/kuma.db-wal "$BACKUP_DIR/kuma-$TIMESTAMP.db-wal" 2>/dev/null || true
# Compress
gzip "$BACKUP_DIR/kuma-$TIMESTAMP.db"
[ -f "$BACKUP_DIR/kuma-$TIMESTAMP.db-wal" ] && gzip "$BACKUP_DIR/kuma-$TIMESTAMP.db-wal"
# Remove backups older than retention period
find "$BACKUP_DIR" -name "kuma-*.gz" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: kuma-$TIMESTAMP.db.gz"
sudo chmod +x /opt/uptime-kuma/backup.sh
Add to cron for daily backups:
sudo crontab -e
Add this line:
0 3 * * * /opt/uptime-kuma/backup.sh >> /var/log/uptime-kuma-backup.log 2>&1
This runs daily at 3:00 AM and keeps 30 days of backups. For more comprehensive backup strategies, see our automatic backup guide.
Restoring from Backup
cd /opt/uptime-kuma
sudo docker compose stop uptime-kuma
# Extract the backup database
gunzip -k /opt/backups/uptime-kuma/kuma-20260228-0300.db.gz
# Copy into the volume
sudo docker cp /opt/backups/uptime-kuma/kuma-20260228-0300.db uptime-kuma:/app/data/kuma.db
# Remove stale WAL/SHM files so SQLite doesn't replay old writes over
# the restored database (docker exec won't work on a stopped container)
sudo docker run --rm -v uptime-kuma_uptime-kuma-data:/data ubuntu:24.04 rm -f /data/kuma.db-wal /data/kuma.db-shm
sudo docker compose start uptime-kuma
Updating Uptime Kuma
Uptime Kuma is actively developed with regular updates. To update:
cd /opt/uptime-kuma
sudo docker compose pull
sudo docker compose up -d
Check the update was successful:
sudo docker compose logs --tail 20 uptime-kuma
Your monitors, notifications, and all configuration persist through updates because they're stored in the Docker volume.
Advanced Configuration
Monitoring API Endpoints with Custom Headers
For API monitoring that requires authentication, create an HTTP(s) monitor with custom headers. In the monitor configuration, expand "Advanced" and add headers:
{
  "Authorization": "Bearer your-api-token-here",
  "Content-Type": "application/json"
}
You can also set the HTTP method to POST and include a request body for monitoring endpoints that require specific payloads.
Monitoring Internal Services via TCP
For databases and internal services that aren't exposed via HTTP, use TCP monitors. Common ports to monitor:
MySQL/MariaDB: 3306
PostgreSQL: 5432
Redis: 6379
MongoDB: 27017
SMTP: 25, 587
IMAP: 143, 993
SSH: 22
TCP monitors simply verify the port accepts connections. They don't check the service's health in depth, but they catch crashes, resource exhaustion (too many connections), and firewall issues.
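Under the hood, a TCP check amounts to a connect attempt with a timeout. A rough Python equivalent of what the monitor does:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a full TCP handshake to host:port completes within
    the timeout. Crashes and refused connections fail fast; firewall
    drops surface as timeouts."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tcp_port_open("db.example.com", 5432)
```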
JSON Query Monitor
For APIs that return JSON, you can monitor specific values in the response. Create an HTTP monitor and configure:
- Monitor Type: HTTP(s) - JSON Query
- URL: https://api.example.com/health
- JSON Query: $.status
- Expected Value: ok
This checks that the API returns {"status": "ok"} — catching situations where the server responds with 200 but reports an unhealthy internal state.
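As a sketch of the idea (Uptime Kuma's actual query engine is far richer; this toy version handles only simple $.key.subkey paths):

```python
import json

def json_query_ok(body: str, path: str, expected: str) -> bool:
    """Extract a value from a JSON body using a dollar-dot path and
    compare it, as a string, to the expected value."""
    value = json.loads(body)
    for key in path.lstrip("$").strip(".").split("."):
        value = value[key]
    return str(value) == expected

print(json_query_ok('{"status": "ok"}', "$.status", "ok"))        # True
print(json_query_ok('{"status": "degraded"}', "$.status", "ok"))  # False
```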
MassiveGRID Handles Infrastructure. You Handle Application Monitoring.
There's an important distinction between infrastructure monitoring and application monitoring:
- Infrastructure monitoring — is the server up? Is the CPU overloaded? Is the disk full? Is the network connection healthy?
- Application monitoring — is my website returning the right content? Is my API responding within acceptable latency? Are my SSL certificates valid?
MassiveGRID's managed hosting platform handles infrastructure monitoring for you — server health, hardware failures, network issues, and automatic failover are all managed by the MassiveGRID team 24/7. Uptime Kuma complements this by monitoring your applications from the user's perspective: is the website accessible, is the API responding correctly, is the content what you expect.
Together, you get complete coverage: MassiveGRID ensures the infrastructure never goes down, and Uptime Kuma ensures your applications running on that infrastructure are functioning correctly.
Troubleshooting
WebSocket Connection Fails (Dashboard Doesn't Update in Real Time)
This is almost always an Nginx configuration issue. Verify your Nginx config includes the WebSocket headers:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
Also ensure the map directive for $connection_upgrade is present in the http context (not inside a server block).
False Positives — Monitors Flapping Up/Down
If monitors intermittently show as down then immediately recover:
- Increase the retry count to 3-5
- Increase the heartbeat interval (30s minimum)
- Check if the target server has rate limiting or firewall rules that block frequent requests from a single IP
- Verify the monitoring VPS has a stable network connection
High Memory Usage with Many Monitors
Uptime Kuma stores historical data in SQLite. With hundreds of monitors running for months, the database can grow large. Uptime Kuma automatically trims data older than its configured retention period (default 180 days). You can reduce this in Settings → General → Keep Data Period if memory is constrained.
Cannot Resolve Internal Hostnames
If Uptime Kuma cannot resolve hostnames of services on other servers, it's using Docker's internal DNS. You can add custom DNS servers in the Docker Compose file:
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    dns:
      - 1.1.1.1
      - 8.8.8.8
    # ... rest of config
Summary
Uptime Kuma gives you a powerful, self-hosted monitoring platform that rivals paid services costing $30-100/month. The key takeaways:
- Always deploy on a separate server — this is the most important architectural decision
- Use Docker Compose for reproducible deployments
- Secure with Nginx, SSL, and 2FA
- Set up at least two notification channels for redundancy
- Configure retries and intervals to prevent alert fatigue
- Create public status pages for client-facing transparency
- Automate backups of the SQLite database
For server-level monitoring (CPU, memory, disk metrics, process monitoring), complement Uptime Kuma with a dedicated monitoring stack — see our VPS monitoring setup guide for Prometheus, Grafana, and node_exporter configuration. Uptime Kuma answers "is it up?" while system monitoring answers "how is it performing?"