Every production server needs monitoring. Whether you are running a SaaS application, an eCommerce store, or a collection of self-hosted services, you need to know the moment something goes down. Uptime Kuma is an open-source, self-hosted monitoring tool that gives you exactly that — without handing your infrastructure data to a third-party SaaS provider.
Coolify makes deploying Uptime Kuma trivially simple. It is available directly from Coolify's one-click service catalog, which means you can go from zero to a fully functional monitoring dashboard in under five minutes. This guide covers the entire process: deploying Uptime Kuma through Coolify, configuring monitors for your services, setting up alert notifications, building status pages, and following best practices for production monitoring.
If you have not installed Coolify yet, start with our Coolify installation guide first. For hardening your Coolify server before deploying services, see our Coolify security hardening guide.
1. What Is Uptime Kuma?
Uptime Kuma is a self-hosted monitoring tool built with Node.js. It provides a clean, modern web interface for tracking the availability of your websites, APIs, TCP services, DNS records, and more. Think of it as a self-hosted alternative to services like UptimeRobot, Pingdom, or Better Uptime — except you own the data, control the infrastructure, and pay nothing beyond server costs.
Key features include:
- Multiple monitor types — HTTP(S), TCP, Ping/ICMP, DNS, Docker containers, Steam game servers, MQTT, and more.
- Notification channels — Over 90 notification services supported, including email, Slack, Discord, Telegram, webhooks, PagerDuty, and Gotify.
- Status pages — Public or password-protected status pages you can share with users or clients.
- Maintenance windows — Schedule planned downtime so monitors pause and alerts do not fire during maintenance.
- Certificate expiry monitoring — Automatically tracks SSL/TLS certificate expiration dates and alerts you before they expire.
- Multi-language support — Interface available in dozens of languages.
- Response time graphs — Historical uptime percentages and response time charts for each monitor.
2. Why Self-Host Your Monitoring?
Using a SaaS monitoring service introduces a fundamental problem: your monitoring depends on a third party. If the SaaS provider has an outage, you lose visibility into your own infrastructure at the worst possible time. Self-hosting your monitoring eliminates this single point of failure.
Here is why self-hosted monitoring makes sense for most teams:
- No external dependencies — Your monitoring does not go down because a SaaS vendor has an incident. Your uptime checks run on infrastructure you control.
- Cost efficiency — SaaS monitoring tools charge per monitor, per check interval, or per notification. Uptime Kuma is completely free. A Cloud VPS starting at $1.99/mo can monitor hundreds of endpoints.
- Data privacy — Monitoring data reveals your infrastructure topology, endpoints, response times, and availability patterns. Self-hosting keeps all of this private.
- No monitor limits — SaaS tools cap the number of monitors on free and lower tiers. Uptime Kuma has no artificial limits.
- Faster check intervals — Most SaaS tools restrict check intervals to 1-5 minutes on lower plans. Uptime Kuma supports intervals as low as 20 seconds.
- Internal network monitoring — Self-hosting lets you monitor internal services, private IPs, and localhost endpoints that external SaaS tools cannot reach.
Best practice: Host your monitoring on a separate server from the services being monitored. If the application server goes down, your monitoring server remains operational and can alert you. A lightweight Cloud VPS or Dedicated VPS reserved for monitoring is a small investment for significant operational visibility.
3. Prerequisites
Before deploying Uptime Kuma, you need:
- A VPS with Coolify installed — Follow our Coolify installation guide if you have not done this yet. A server with 1 vCPU and 1 GB RAM is sufficient for Coolify plus Uptime Kuma.
- A domain or subdomain — For example, status.yourdomain.com. Point an A record to your server's IP address.
- DNS propagation complete — Verify the A record resolves to your server before proceeding. Coolify uses the domain for automatic SSL certificate provisioning via Let's Encrypt.
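Propagation can be confirmed from a script before you deploy. The sketch below (Python; the domain and IP are placeholders you would replace with your own) simply compares what DNS currently returns against the address you expect:

```python
import socket

def a_record_matches(hostname: str, expected_ip: str) -> bool:
    """Return True if `hostname` currently resolves to `expected_ip`."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        # Name does not resolve at all yet -- propagation incomplete.
        return False

# Example (placeholders -- substitute your subdomain and server IP):
# a_record_matches("status.yourdomain.com", "203.0.113.10")
```

If this returns False, wait for DNS to propagate before deploying, or Let's Encrypt validation will fail.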
Uptime Kuma is exceptionally lightweight. It uses an embedded SQLite database by default, so there is no external database to configure. Resource requirements are minimal:
| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 256 MB | 512 MB |
| CPU | 0.5 vCPU | 1 vCPU |
| Disk | 1 GB | 5 GB (for historical data) |
These numbers make Uptime Kuma an ideal candidate for running alongside other services on your Coolify server, or on a dedicated lightweight VPS for independent monitoring.
4. Deploying Uptime Kuma Through Coolify
Coolify includes Uptime Kuma in its one-click service marketplace. This means you do not need to write a Docker Compose file, configure ports, or set up a reverse proxy manually. Coolify handles the container, networking, domain routing, and SSL certificate.
Step 1: Open the Coolify Dashboard
Navigate to your Coolify instance (e.g., https://coolify.yourdomain.com) and log in with your admin credentials.
Step 2: Create a New Project (Optional)
If you want to keep monitoring separate from your other deployments, create a dedicated project. Go to Projects and click New Project. Name it something like "Monitoring" or "Infrastructure Tools." Otherwise, you can deploy Uptime Kuma into an existing project.
Step 3: Add a New Resource
Inside your project, select the target environment (e.g., "Production") and click Add New Resource. Choose Service from the resource type options. This opens the Coolify service catalog.
Step 4: Select Uptime Kuma
Search for "Uptime Kuma" in the service catalog. Click on it to select it. Coolify will present the service configuration screen with sensible defaults already applied.
Step 5: Configure the Domain
In the service configuration, set the domain field to your prepared subdomain — for example, status.yourdomain.com. Coolify will automatically configure the reverse proxy (Traefik) to route traffic to the Uptime Kuma container and provision an SSL certificate via Let's Encrypt.
Step 6: Deploy
Click Deploy. Coolify will pull the Uptime Kuma Docker image, create the container with persistent volumes, configure networking, and start the service. Deployment typically completes within 30-60 seconds.
Once deployment finishes, navigate to your configured domain. You will see the Uptime Kuma setup screen where you create your admin account.
Step 7: Create Your Admin Account
On first access, Uptime Kuma prompts you to create an admin username and password. Choose a strong password — this account controls all your monitoring configuration and notification settings. There is no password recovery mechanism unless you access the SQLite database directly.
5. Configuring Monitors
After logging in, you land on an empty dashboard. Click Add New Monitor to start tracking your first service. Uptime Kuma supports a wide range of monitor types, each suited to different use cases.
HTTP(S) Monitors
HTTP monitors are the most common type. They send an HTTP request to a URL and check the response status code. A 2xx response means the service is up; anything else (or a timeout) triggers a down state.
Configure an HTTP monitor with these settings:
- URL — The full URL to check, e.g.,
https://yourdomain.comorhttps://api.yourdomain.com/health. - Heartbeat interval — How often to check. 60 seconds is a reasonable default; reduce to 20-30 seconds for critical services.
- Retries — Number of consecutive failures before marking as down. Setting this to 2-3 prevents false alerts from transient network blips.
- Accepted status codes — By default, 200-299. You can customize this if your endpoint returns 301 redirects or other expected codes.
- Keyword monitoring — Optionally check that the response body contains (or does not contain) a specific string. Useful for detecting partial failures where the page loads but shows an error message.
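The up/down decision these settings drive reduces to a small predicate: status code in the accepted range, plus an optional keyword rule. This is an illustrative sketch of that logic, not Uptime Kuma's actual implementation:

```python
from typing import Optional

def evaluate_http_check(status_code: int,
                        body: str,
                        accepted: range = range(200, 300),
                        keyword: Optional[str] = None,
                        keyword_should_exist: bool = True) -> bool:
    """Return True (up) when the status code is accepted and, if a
    keyword is configured, the body satisfies the keyword rule."""
    if status_code not in accepted:
        return False
    if keyword is not None:
        return (keyword in body) == keyword_should_exist
    return True

# A 200 page that still shows an error banner counts as down:
evaluate_http_check(200, "Internal error occurred",
                    keyword="error", keyword_should_exist=False)  # False
```

This is exactly the "partial failure" case keyword monitoring exists for: the HTTP layer succeeds while the page content reveals a problem.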
TCP Monitors
TCP monitors verify that a port is open and accepting connections. Use these for databases, mail servers, custom daemons, or any service that does not speak HTTP.
- Hostname — The IP or domain of the target host.
- Port — The TCP port to check (e.g., 5432 for PostgreSQL, 3306 for MySQL, 6379 for Redis).
TCP monitors are particularly useful for monitoring the health of databases, message queues, and other backend services that your Coolify applications depend on.
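Under the hood, a TCP check is essentially a connect attempt with a timeout. A minimal Python equivalent (host and port are placeholders) looks like this:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False

# Example: tcp_port_open("postgres-service", 5432)
```

Note that a successful connect only proves the port is accepting connections; it says nothing about whether the service behind it is healthy, which is why health-check endpoints are preferred where available.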
Ping (ICMP) Monitors
Ping monitors send ICMP echo requests to check basic network reachability. These are useful for monitoring whether a server or network device is online at the network level, independent of any specific service running on it.
DNS Monitors
DNS monitors query a DNS server and verify the response. You can check that a domain resolves to an expected IP address, that MX records are correct, or that a CNAME record is properly configured. This is valuable for catching DNS misconfigurations or propagation issues before they affect users.
Docker Container Monitors
If Uptime Kuma has access to the Docker socket, it can monitor container health directly. This is especially useful in a Coolify environment where all your applications run as Docker containers. You can detect containers that are stopped, restarting, or in an unhealthy state without relying on HTTP endpoints.
Organizing Monitors with Tags and Groups
As your monitor count grows, organization becomes essential. Uptime Kuma supports:
- Tags — Color-coded labels you can apply to any monitor. Create tags like "Production," "Staging," "Critical," or "Client-A" to categorize monitors.
- Groups — Nest monitors under collapsible group headers on the dashboard. Group by service type, environment, or client.
6. Setting Up Notification Channels
Monitoring is only useful if you get notified when something goes wrong. Uptime Kuma supports over 90 notification services out of the box. Here are the most commonly used channels and how to configure them.
Navigate to Settings > Notifications to add notification channels. Each channel can be tested with a "Test" button before saving.
Email (SMTP)
Configure an SMTP server to receive downtime alerts via email. You will need your SMTP host, port, username, password, and the sender/recipient email addresses. This works with any SMTP provider — Gmail, Amazon SES, Mailgun, or your own mail server.
Slack
Create an Incoming Webhook in your Slack workspace (via the Slack API dashboard) and paste the webhook URL into Uptime Kuma. Alerts will be posted as formatted messages in the channel you configure, complete with monitor name, status, and response time.
Discord
In your Discord server, go to Channel Settings > Integrations > Webhooks and create a new webhook. Copy the webhook URL into Uptime Kuma's Discord notification settings. Alerts appear as embedded messages in the channel.
Telegram
Create a bot via @BotFather on Telegram and obtain the bot token. Then get your chat ID (or group chat ID) and enter both values into Uptime Kuma. The bot will send alert messages directly to your Telegram chat.
Webhooks (Generic)
For custom integrations, use the generic webhook notification type. Uptime Kuma sends a JSON payload to your specified URL whenever a monitor changes state. This is ideal for integrating with incident management platforms, custom dashboards, or automation workflows like n8n.
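On the receiving side, your integration parses the JSON body Uptime Kuma posts. The field names used below (`monitor`, `heartbeat`, `msg`, with `heartbeat.status` of 1 for up and 0 for down) reflect the payload shape in current releases, but verify them against your version before relying on this sketch:

```python
import json

def summarize_alert(raw_body: str) -> str:
    """Turn an Uptime Kuma webhook payload into a one-line summary.

    Assumed payload fields: monitor.name, heartbeat.status (1 = up,
    0 = down), and msg -- check these against your Kuma version."""
    data = json.loads(raw_body)
    name = data.get("monitor", {}).get("name", "unknown monitor")
    status = data.get("heartbeat", {}).get("status")
    state = "UP" if status == 1 else "DOWN"
    return f"{name} is {state}: {data.get('msg', '')}"

sample = '{"monitor": {"name": "API Server"}, "heartbeat": {"status": 0}, "msg": "timeout"}'
print(summarize_alert(sample))  # API Server is DOWN: timeout
```

A small HTTP handler wrapping this function is enough to forward alerts into almost any internal system.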
Assigning Notifications to Monitors
After creating notification channels, you assign them to individual monitors. Each monitor can have multiple notification channels — for example, send critical service alerts to both Slack and email, while less critical monitors only notify via Discord. You can also set default notification channels that automatically apply to new monitors.
7. Building Status Pages
Uptime Kuma includes a built-in status page feature that lets you create public-facing (or password-protected) dashboards showing the real-time status of your services. This is valuable for communicating service health to users, clients, or internal teams.
To create a status page:
- Go to Status Pages in the Uptime Kuma sidebar and click New Status Page.
- Give it a name and a URL slug (e.g., /status).
- Add monitor groups and assign monitors to each group. For example, create groups like "Web Services," "APIs," and "Databases."
- Customize the appearance — add a title, description, logo, and footer text.
- Optionally enable incident management to post incident updates directly on the status page.
Status pages are served directly from Uptime Kuma, so they share the same domain. You can configure a custom domain for the status page by setting up an additional DNS record and configuring it in Coolify's reverse proxy settings.
8. Persistent Data Storage
Uptime Kuma stores all its data — monitors, notification settings, uptime history, and status page configurations — in an embedded SQLite database. When deployed through Coolify's service catalog, persistent storage is automatically configured via Docker volumes.
This means your monitoring data survives container restarts, redeployments, and Uptime Kuma version upgrades. Coolify maps the /app/data directory inside the container to a persistent Docker volume on the host filesystem.
Backing Up Uptime Kuma Data
Since all data lives in a single SQLite file, backups are straightforward:
- Manual backup — Copy the SQLite database file from the Docker volume. You can find the volume path by inspecting the container in Coolify or via docker volume inspect.
- Automated backup — Set up a cron job on the host that copies the database file to an offsite location (S3, another server, etc.) on a regular schedule.
- Coolify backups — If you have configured Coolify's backup feature, the Uptime Kuma data volume will be included in server-level backups. See our Coolify backup strategy guide for details.
The SQLite database is typically small — a few megabytes even with hundreds of monitors and months of history. Uptime Kuma automatically cleans up old data based on the retention period you configure in settings.
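For a scripted backup, SQLite's online backup API is safer than a raw file copy because it produces a consistent snapshot even if Uptime Kuma writes mid-backup. A sketch (paths are illustrative; locate the real volume path with docker volume inspect):

```python
import datetime
import pathlib
import sqlite3

def backup_kuma_db(src_path: str, dest_dir: str) -> pathlib.Path:
    """Snapshot a (possibly live) SQLite database into dest_dir."""
    dest = pathlib.Path(dest_dir) / f"kuma-{datetime.date.today():%Y%m%d}.db"
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dest)
    try:
        with dst:
            src.backup(dst)  # online backup API: consistent copy
    finally:
        src.close()
        dst.close()
    return dest

# Example (illustrative path -- inspect your volume to find the real one):
# backup_kuma_db("/var/lib/docker/volumes/<volume>/_data/kuma.db", "/backups")
```

Run this from cron, then ship the resulting file offsite with your existing backup tooling.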
9. Monitoring Your Coolify Applications
One of the most practical use cases for Uptime Kuma on a Coolify server is monitoring the other applications you have deployed with Coolify. Here is a recommended approach:
Monitor Each Application's Public URL
For every web application deployed through Coolify, create an HTTP(S) monitor pointing to its public URL. This verifies the full stack: DNS resolution, SSL certificate, reverse proxy routing, and the application itself.
Monitor Health Check Endpoints
If your applications expose health check endpoints (e.g., /api/health, /healthz), monitor those instead of (or in addition to) the homepage. Health endpoints typically verify database connectivity and other dependencies, giving you a more accurate picture of application health.
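A health endpoint typically aggregates several dependency checks into one status code. The aggregation pattern can be sketched as below; the check names and the 503-on-degraded convention are common practice, not a fixed standard:

```python
from typing import Callable, Dict, Tuple

def health_report(checks: Dict[str, Callable[[], bool]]) -> Tuple[int, dict]:
    """Run each named dependency check; any failure yields HTTP 503."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crashing check counts as unhealthy
    healthy = all(results.values())
    return (200 if healthy else 503,
            {"status": "ok" if healthy else "degraded", "checks": results})

# Hypothetical wiring -- real checks would ping the DB and cache:
status, body = health_report({"database": lambda: True,
                              "cache": lambda: True})  # status == 200
```

An Uptime Kuma HTTP monitor pointed at such an endpoint then flags the service as down whenever any dependency fails, not just when the web server stops responding.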
Monitor Databases and Background Services
Use TCP monitors to check that databases deployed through Coolify (PostgreSQL, MySQL, Redis) are accepting connections. If these services are only exposed on the internal Docker network, you can monitor them by their Docker service name since Uptime Kuma runs in the same Docker environment.
Example Monitor Setup for a Typical Coolify Deployment
| Monitor | Type | Target | Interval |
|---|---|---|---|
| Main Website | HTTP(S) | https://yourdomain.com | 60s |
| API Server | HTTP(S) | https://api.yourdomain.com/health | 30s |
| PostgreSQL | TCP | postgres-service:5432 | 60s |
| Redis Cache | TCP | redis-service:6379 | 60s |
| SSL Certificate | HTTP(S) | https://yourdomain.com | Daily |
| DNS Resolution | DNS | yourdomain.com | 300s |
10. Best Practices for Production Monitoring
Running Uptime Kuma in production is straightforward, but a few practices will make your monitoring setup significantly more reliable and useful.
Separate Monitoring from Monitored Services
The most important best practice: do not host your monitoring on the same server as the services you are monitoring. If that server goes down, you lose both the service and the ability to detect the outage. Deploy Uptime Kuma on a separate Cloud VPS or Dedicated VPS — even a minimal instance with 256 MB RAM is sufficient. This way, if your application server fails, Uptime Kuma is still running and can alert you immediately.
Use Multiple Notification Channels
Do not rely on a single notification channel. If your only alert method is email and your mail server is having issues, you will miss critical alerts. Configure at least two independent channels — for example, Slack and Telegram, or email and a webhook to a mobile push notification service.
Set Appropriate Check Intervals
Not every service needs 20-second checks. Use shorter intervals (20-30 seconds) for revenue-critical services like payment endpoints or primary application URLs. Use longer intervals (5-10 minutes) for less critical services like documentation sites or internal tools. This reduces resource consumption and notification noise.
Configure Retry Thresholds
Set retries to 2-3 for most monitors. Single-check failures can result from transient network issues, garbage collection pauses, or brief container restarts during deployments. Requiring multiple consecutive failures before alerting eliminates most false positives.
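The effect of a retry threshold can be modeled as "alert only after N consecutive failures." A simplified sketch of that rule (Uptime Kuma's internal counting may differ in detail):

```python
def should_alert(history: list, retries: int) -> bool:
    """history: check results, oldest first (True = success).

    Alert only once the most recent `retries` checks have all failed,
    so a single transient blip never pages anyone."""
    if retries < 1 or len(history) < retries:
        return False
    return not any(history[-retries:])

# One blip does not alert; three consecutive failures do:
should_alert([True, False], retries=3)                # False
should_alert([True, False, False, False], retries=3)  # True
```

The trade-off is detection latency: with a 60-second interval and 3 retries, a real outage takes up to three minutes to page you, which is why critical monitors pair shorter intervals with the same retry count.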
Use Maintenance Windows
Before performing planned maintenance — OS updates, application deployments, database migrations — set up a maintenance window in Uptime Kuma. This prevents alert fatigue and keeps your uptime statistics accurate. Maintenance periods are excluded from uptime percentage calculations.
Monitor the Monitor
Consider setting up a basic external check on your Uptime Kuma instance itself. This can be as simple as a free-tier check from an external monitoring service pointed at your Uptime Kuma URL. If Uptime Kuma goes down, you will still get notified.
Review and Prune Regularly
As services are decommissioned or URLs change, update your monitors accordingly. Stale monitors create noise and erode trust in the monitoring system. Periodically review your monitor list and remove anything no longer relevant.
11. Uptime Kuma vs. SaaS Monitoring: Cost Comparison
To put the cost savings in perspective, here is a comparison of self-hosted Uptime Kuma against popular SaaS alternatives for a typical monitoring setup of 50 monitors:
| Solution | 50 Monitors | Check Interval | Monthly Cost |
|---|---|---|---|
| Uptime Kuma (self-hosted) | Unlimited | 20 seconds | $1.99 (VPS cost) |
| UptimeRobot (Pro) | 50 | 60 seconds | $7/mo |
| Better Uptime (Team) | 50+ | 30 seconds | $85/mo |
| Pingdom (Advanced) | 50+ | 60 seconds | $41.95/mo |
With Uptime Kuma on a MassiveGRID Cloud VPS, you get unlimited monitors at the shortest check interval for the cost of a basic server. The savings become even more significant as your monitoring needs grow beyond 50 endpoints.
Host Uptime Kuma on MassiveGRID
- Cloud VPS — From $1.99/mo. Lightweight enough for Uptime Kuma alongside your other services.
- Dedicated VPS — From $4.99/mo. Dedicated resources ensure monitoring stays responsive even under load.
- Coolify Hosting — One-click Coolify deployment with Uptime Kuma available from the service catalog.
12. Troubleshooting Common Issues
Uptime Kuma Shows "Down" for Services That Are Running
If HTTP monitors report services as down when they are clearly accessible in a browser, check:
- Accepted status codes — Your endpoint may return a 301 redirect or 401 authentication response. Adjust the accepted status codes in the monitor settings.
- DNS resolution inside the container — Uptime Kuma runs inside a Docker container. If you are monitoring services by hostname, the container needs to resolve those hostnames. For services on the same Coolify server, use Docker service names or the host's internal IP instead of public domains.
- Firewall rules — Ensure the Uptime Kuma container can reach the monitored endpoints. Security group rules or iptables configurations on the target server may block requests from your monitoring server's IP.
Notifications Not Being Delivered
Use the "Test" button on each notification channel to verify delivery. Common issues include incorrect SMTP credentials, expired webhook URLs, Telegram bot not added to the chat, or Slack webhook URLs that have been revoked.
High Memory Usage Over Time
If memory usage grows over weeks, check the data retention settings under Settings > General. Reducing the retention period from the default (180 days) will decrease memory and disk usage. Running many monitors at very short intervals (20 seconds) also increases memory consumption proportionally.
Summary
Uptime Kuma paired with Coolify gives you a production-grade monitoring stack with minimal effort. Coolify handles the deployment, SSL, and container management; Uptime Kuma gives you a polished monitoring dashboard with dozens of monitor types and over 90 notification integrations. The total resource footprint is negligible — 256 MB of RAM can handle hundreds of monitors — making it practical to run on the smallest VPS tier.
The key decisions are straightforward: deploy Uptime Kuma from Coolify's service catalog, configure monitors for each critical service, set up at least two notification channels, and ideally host the monitoring on a separate server from the services being monitored. With those basics covered, you have reliable, private, zero-cost monitoring for your entire infrastructure.
For more Coolify deployment guides, see our posts on self-hosting n8n with Coolify and Coolify multi-server setup. If you need a VPS for your monitoring server, explore MassiveGRID Cloud VPS plans starting at $1.99/mo or Dedicated VPS plans starting at $4.99/mo for guaranteed resources.