When a disk fills up at 3 AM or a TLS certificate expires on a holiday weekend, the difference between a minor hiccup and an extended outage often comes down to one thing: whether you received a push notification in time. Most teams default to Firebase Cloud Messaging or a third-party SaaS for alerts, but that introduces external dependencies into the very system designed to warn you when things break. ntfy is a lightweight, open-source push notification server that you can self-host on an Ubuntu VPS, giving you a private, dependency-free alerting pipeline that stays under your control from end to end.
In this guide you will install ntfy on an Ubuntu VPS using Docker, put it behind an Nginx reverse proxy with a valid TLS certificate, configure authentication and access control, and then wire it into the tools you already run — Uptime Kuma, cron jobs, and Fail2Ban. By the end, every meaningful event across your infrastructure will land as a push notification on your phone within seconds, with no external service in the loop.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Why Self-Host Your Notification Server
The standard approach to push notifications involves Firebase Cloud Messaging on Android or Apple Push Notification Service on iOS, typically accessed through a SaaS wrapper like Pushbullet or Pushover. That works until the moment you need notifications most — during an incident — and discover that the notification provider itself is experiencing issues, or that your API key expired, or that a rate limit kicked in right when a cascade of alerts fired.
Self-hosting ntfy eliminates those failure modes. The server runs on infrastructure you control, the protocol is plain HTTP (a simple PUT or POST to a topic URL), and there are no API keys to rotate or vendor dashboards to check. Additional benefits include:
- Privacy — alert payloads never leave your network unless you choose to forward them. Sensitive details like hostnames, IP addresses, and error messages stay private.
- Unlimited topics — create as many notification channels as you need without per-topic billing. One topic per service, per host, or per severity level costs nothing extra.
- No Firebase dependency — ntfy can deliver notifications through its own WebSocket connection, so Android and desktop clients work without Google Play Services.
- Customizable retention — you decide how long messages are stored, which matters for audit trails and post-incident review.
- Attachments and actions — ntfy supports file attachments and clickable action buttons in notifications, features that most SaaS providers lock behind premium tiers.
ntfy vs Gotify vs Pushover: Choosing the Right Tool
Three tools dominate the self-hosted and lightweight notification space. Here is how they compare for infrastructure alerting:
ntfy is HTTP-native. You send a notification with a single curl command — curl -d "disk full" ntfy.example.com/alerts — and subscribers receive it instantly on mobile, desktop, or the web UI. It supports UnifiedPush (an open standard for push delivery on Android), requires no client-side API key for publishing, and uses SQLite for message caching. The server binary is around 10 MB and consumes minimal RAM.
Gotify is a solid self-hosted alternative with a web UI and Android app. It requires an application token for every publisher, which adds a credential-management step for each integration. There is no iOS app, and it does not support UnifiedPush natively. If your team is entirely on Android and you prefer a dashboard-first workflow, Gotify is worth evaluating.
Pushover is a commercial service (one-time purchase per platform) with excellent mobile apps and high reliability. However, it is not self-hosted, messages pass through Pushover's servers, and you are subject to their rate limits (10,000 messages per month per application). For personal alerting with a small number of sources, it works well. For infrastructure-scale notifications across dozens of services, the limits and lack of self-hosting become constraints.
For the use case in this guide — a private notification hub for infrastructure events running on your own Ubuntu VPS — ntfy strikes the best balance of simplicity, privacy, and integration breadth.
Prerequisites
Before installing ntfy, make sure the following are in place on your Ubuntu VPS:
- Ubuntu 24.04 LTS with root or sudo access
- Docker and Docker Compose installed and running — follow our Docker installation guide if you have not set this up yet
- A domain or subdomain pointed to your VPS IP address (for example, ntfy.example.com)
- Ports 80 and 443 open in your firewall for HTTP/HTTPS traffic
- At least 1 vCPU and 512 MB of RAM available for ntfy — the server is extremely lightweight, easily fitting alongside other Docker services on a MassiveGRID VPS starting at 1 vCPU and 1 GB RAM
If you already have Nginx and Certbot configured on this server, you can skip the reverse proxy section and add the ntfy location block directly. Otherwise, our Nginx reverse proxy guide covers the full setup.
Docker Compose Setup
The fastest way to run ntfy is with Docker Compose. Create a directory for the project and add the configuration files:
mkdir -p /opt/ntfy && cd /opt/ntfy
Create the Docker Compose file:
# /opt/ntfy/docker-compose.yml
services:
  ntfy:
    image: binwiederhier/ntfy:latest
    container_name: ntfy
    restart: unless-stopped
    command:
      - serve
    ports:
      - "127.0.0.1:2586:80"
    volumes:
      - ./cache:/var/cache/ntfy
      - ./config:/etc/ntfy
      - ./attachments:/var/lib/ntfy/attachments
    environment:
      - TZ=UTC
    healthcheck:
      test: ["CMD", "wget", "-q", "--tries=1", "-O-", "http://localhost:80/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
Next, create a server configuration file that controls retention, attachment limits, and the base URL:
# /opt/ntfy/config/server.yml
base-url: "https://ntfy.example.com"
listen-http: ":80"
cache-file: "/var/cache/ntfy/cache.db"
cache-duration: "72h"
attachment-cache-dir: "/var/lib/ntfy/attachments"
attachment-total-size-limit: "1G"
attachment-file-size-limit: "15M"
attachment-expiry-duration: "24h"
keepalive-interval: "45s"
manager-interval: "1m"
upstream-base-url: "https://ntfy.sh"
The upstream-base-url setting forwards a poll request (only the message ID, not the message content) through ntfy.sh, which is required for instant push delivery to the iOS app, since Apple only permits background push through APNs. If you want a fully self-contained setup with no external calls, remove that line — Android and desktop clients will still receive messages instantly over ntfy's own connection, but iOS notifications may be delayed or unreliable.
Start the stack:
docker compose up -d
Verify the service is running:
curl -s http://127.0.0.1:2586/v1/health | python3 -m json.tool
You should see {"healthy": true} in the response.
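In provisioning or deploy scripts, it is safer to poll this endpoint than to assume the container is ready immediately after docker compose up. A minimal sketch, assuming the port mapping above (the probe command is a parameter so it can be swapped out or mocked; wait_healthy is an illustrative helper, not part of ntfy):

```shell
# wait_healthy: poll the ntfy health endpoint until it reports healthy,
# retrying once per second up to a limit. Returns 0 on success, 1 on timeout.
wait_healthy() {
  local tries="${1:-10}"
  local check_cmd="${2:-curl -sf http://127.0.0.1:2586/v1/health}"
  local i
  for i in $(seq "$tries"); do
    if $check_cmd 2>/dev/null | grep -q '"healthy"[[:space:]]*:[[:space:]]*true'; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage in a deploy script:
# wait_healthy 30 || { echo "ntfy failed to come up" >&2; exit 1; }
```

This keeps automation from racing ahead of the container on slower hosts.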
Nginx Reverse Proxy with SSL
ntfy uses long-lived HTTP connections for real-time delivery, so the Nginx configuration needs specific timeout and proxy settings. If you do not have Nginx set up yet, follow our Nginx reverse proxy guide first, then add the server block below.
Create the Nginx configuration:
# /etc/nginx/sites-available/ntfy.example.com
server {
    listen 80;
    server_name ntfy.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name ntfy.example.com;

    ssl_certificate /etc/letsencrypt/live/ntfy.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ntfy.example.com/privkey.pem;

    client_max_body_size 20M;

    location / {
        proxy_pass http://127.0.0.1:2586;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket and long-polling support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_connect_timeout 60s;
    }
}
Enable the site and obtain a certificate. Note that on the very first run the certificate files referenced above do not exist yet, so nginx -t will fail; temporarily comment out the second server block (or issue the certificate with certbot certonly first), run Certbot, then restore the block:
sudo ln -s /etc/nginx/sites-available/ntfy.example.com /etc/nginx/sites-enabled/
sudo certbot --nginx -d ntfy.example.com
sudo nginx -t && sudo systemctl reload nginx
The critical detail here is proxy_read_timeout 86400s. ntfy clients hold open connections for up to 24 hours waiting for messages. The default Nginx timeout of 60 seconds would kill those connections prematurely, causing missed notifications.
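You can confirm that streaming survives the proxy by subscribing from one terminal and publishing from another. A sketch of the subscribe side (credentials apply once authentication from the next section is enabled; the sample event line below is illustrative of what the stream emits):

```shell
# Subscribe to the 'alerts' topic as a line-delimited JSON stream. The
# connection stays open; each published message arrives as one line:
#
#   curl -s "https://ntfy.example.com/alerts/json"
#
# Each line is a JSON object. Extract fields with python3 (jq works too):
echo '{"id":"abc123","time":1700000000,"event":"message","topic":"alerts","message":"disk full"}' \
  | python3 -c 'import json,sys; e=json.load(sys.stdin); print(e["topic"], e["message"])'
# prints: alerts disk full
```

If the stream stalls or drops after about a minute, the proxy timeouts above were not applied.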
Authentication and Access Control
By default, ntfy allows anyone to publish and subscribe to any topic. For an infrastructure notification server, you want to lock this down so that only authorized services can send alerts and only your devices can receive them.
Enable authentication in server.yml:
# Add to /opt/ntfy/config/server.yml
auth-file: "/var/cache/ntfy/user.db"
auth-default-access: "deny-all"
Restart the container, then create users and set permissions:
# Restart to apply config
docker compose restart
# Create an admin user
docker exec -it ntfy ntfy user add --role=admin admin
# Create a read-only user for your mobile devices
docker exec -it ntfy ntfy user add reader
# Grant the reader subscribe access to all topics
docker exec -it ntfy ntfy access reader '*' read
# Create a service account for publishing
docker exec -it ntfy ntfy user add publisher
docker exec -it ntfy ntfy access publisher '*' write
With auth-default-access: deny-all, unauthenticated requests are rejected. Every curl command and integration must now include credentials. You can use HTTP Basic Auth or token-based authentication:
# Basic auth
curl -u publisher:yourpassword -d "test message" https://ntfy.example.com/alerts
# Token auth (generate a token first)
docker exec -it ntfy ntfy token add publisher
curl -H "Authorization: Bearer tk_xxxxxxxxxxxxxxxxxxxxxxxx" \
-d "test message" https://ntfy.example.com/alerts
Token-based auth is preferred for automated integrations because it avoids storing plaintext passwords in scripts and can be revoked independently.
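One pattern that keeps tokens out of individual scripts is a small wrapper that reads the token from a root-only file. A sketch, where the notify name, the token file path, and the NTFY_DRY_RUN toggle are illustrative choices rather than ntfy features:

```shell
# notify: minimal publish helper. Reads the bearer token from a file
# (e.g. /etc/ntfy-token, chmod 600) so scripts never embed credentials.
notify() {
  local topic="$1" title="$2" message="$3" priority="${4:-default}"
  local server="${NTFY_SERVER:-https://ntfy.example.com}"
  local token_file="${NTFY_TOKEN_FILE:-/etc/ntfy-token}"
  if [ "${NTFY_DRY_RUN:-0}" = "1" ]; then
    # Print what would be sent instead of calling curl (useful for testing)
    echo "POST $server/$topic priority=$priority title=$title"
    return 0
  fi
  curl -s -H "Authorization: Bearer $(cat "$token_file")" \
       -H "Title: $title" -H "Priority: $priority" \
       -d "$message" "$server/$topic" > /dev/null
}

# Dry-run example: prints the request instead of sending it
NTFY_DRY_RUN=1 notify infra-alerts "Test alert" "hello" high
# prints: POST https://ntfy.example.com/infra-alerts priority=high title=Test alert
```

Every later integration in this guide can then call notify instead of repeating the curl boilerplate.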
Sending Notifications from the CLI
ntfy's greatest strength is how little ceremony is needed to send a notification. At its simplest, a single curl command does the job:
# Simple message
curl -H "Authorization: Bearer tk_xxx" \
-d "Backup completed successfully" \
https://ntfy.example.com/backups
# With title, priority, and tags
curl -H "Authorization: Bearer tk_xxx" \
-H "Title: Disk Space Warning" \
-H "Priority: high" \
-H "Tags: warning,disk" \
-d "Server db-01: /var/lib/postgresql is at 92% capacity" \
https://ntfy.example.com/infra-alerts
# With a click action (tap notification to open URL)
curl -H "Authorization: Bearer tk_xxx" \
-H "Title: SSL Certificate Expiring" \
-H "Priority: urgent" \
-H "Tags: rotating_light" \
-H "Click: https://dash.example.com/certificates" \
-d "example.com certificate expires in 3 days" \
https://ntfy.example.com/ssl-alerts
You can also install the ntfy CLI directly on the server for a slightly cleaner syntax:
# Install the ntfy CLI — releases ship as tarballs, so check the GitHub
# releases page for the current version number first
VERSION=2.11.0
wget "https://github.com/binwiederhier/ntfy/releases/download/v${VERSION}/ntfy_${VERSION}_linux_amd64.tar.gz"
tar zxf "ntfy_${VERSION}_linux_amd64.tar.gz"
sudo install "ntfy_${VERSION}_linux_amd64/ntfy" /usr/local/bin/ntfy
# Send with the CLI
ntfy publish \
--token tk_xxx \
--title "Deploy Complete" \
--priority default \
--tags rocket \
https://ntfy.example.com/deployments "v2.4.1 deployed to production"
The CLI also supports piping, which is useful for sending command output as a notification:
df -h / | ntfy publish --token tk_xxx https://ntfy.example.com/disk-reports
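Piping becomes more useful with a little logic in front of it. The sketch below only publishes when root-filesystem usage crosses a limit, keeping the topic quiet the rest of the time (the threshold, topic, and token are placeholders):

```shell
#!/bin/bash
# Alert only when root filesystem usage reaches THRESHOLD percent.
THRESHOLD="${THRESHOLD:-90}"

# check_usage: prints an alert message when usage >= threshold, else nothing
check_usage() {
  local used="$1" threshold="$2"
  if [ "$used" -ge "$threshold" ]; then
    echo "Root filesystem at ${used}% (threshold ${threshold}%)"
  fi
}

# Current usage of / as a bare number, e.g. "42"
USED=$(df --output=pcent / | tail -1 | tr -dc '0-9')
MSG=$(check_usage "$USED" "$THRESHOLD")

# Only publish when there is something to say (guard on the CLI being present)
if [ -n "$MSG" ] && command -v ntfy >/dev/null; then
  ntfy publish --token tk_xxx --priority high \
    https://ntfy.example.com/disk-reports "$MSG"
fi
```

Dropped into cron every 15 minutes, this gives you a disk alarm without a full monitoring stack.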
Integration: Uptime Kuma to ntfy
If you are running Uptime Kuma for uptime monitoring, connecting it to your self-hosted ntfy instance means all your up/down alerts arrive as push notifications on your phone without depending on email delivery or Slack availability.
In the Uptime Kuma dashboard:
- Navigate to Settings then Notifications
- Click Setup Notification
- Select ntfy as the notification type
- Configure the following fields:
  - Server URL: https://ntfy.example.com
  - Topic: uptime-kuma
  - Priority: high (so alerts break through Do Not Disturb on your phone)
  - Username: publisher (or leave blank if using a token)
  - Password: your publisher password or access token
- Click Test to verify delivery, then Save
Now assign this notification method to your monitors. For critical services, consider creating a dedicated topic like critical-down with urgent priority, so those alerts are unmistakable on your phone.
Running both Uptime Kuma and ntfy on the same VPS is efficient — together they consume under 200 MB of RAM, leaving plenty of headroom on a 1 GB MassiveGRID VPS.
Integration: Cron Jobs to ntfy
Cron jobs are the silent workhorses of server administration — backups, log rotation, certificate renewal, database maintenance. The problem is that they fail silently. Wiring cron output into ntfy ensures you know immediately when a scheduled task succeeds or fails. If you need a refresher on cron fundamentals, see our cron jobs guide.
Create a small wrapper script that reports both success and failure:
#!/bin/bash
# /usr/local/bin/ntfy-cron
# Usage: ntfy-cron "job-name" command [args...]

TOPIC="https://ntfy.example.com/cron-jobs"
TOKEN="tk_xxxxxxxxxxxxxxxxxxxxxxxx"

JOB_NAME="$1"
shift

OUTPUT=$("$@" 2>&1)
EXIT_CODE=$?

if [ $EXIT_CODE -eq 0 ]; then
    curl -s -H "Authorization: Bearer $TOKEN" \
        -H "Title: Cron OK: $JOB_NAME" \
        -H "Priority: low" \
        -H "Tags: white_check_mark" \
        -d "$OUTPUT" \
        "$TOPIC" > /dev/null
else
    # $'\n\n' produces real newlines; "\n" inside double quotes would be literal
    curl -s -H "Authorization: Bearer $TOKEN" \
        -H "Title: Cron FAILED: $JOB_NAME" \
        -H "Priority: high" \
        -H "Tags: x" \
        -d "Exit code: $EXIT_CODE"$'\n\n'"$OUTPUT" \
        "$TOPIC" > /dev/null
fi

exit $EXIT_CODE
sudo chmod +x /usr/local/bin/ntfy-cron
Use it in your crontab by wrapping existing commands:
# Before
0 2 * * * /usr/local/bin/backup-db.sh
# After
0 2 * * * /usr/local/bin/ntfy-cron "nightly-db-backup" /usr/local/bin/backup-db.sh
Success notifications arrive at low priority (no sound), so they do not wake you up. Failures arrive at high priority, ensuring they break through.
Integration: Fail2Ban to ntfy
Fail2Ban detects brute-force login attempts and blocks offending IP addresses. Getting a push notification every time an IP is banned gives you real-time visibility into attack patterns. For a comprehensive Fail2Ban setup, see our Ubuntu VPS security hardening guide.
Create a custom Fail2Ban action file:
# /etc/fail2ban/action.d/ntfy.conf
[Definition]
actionban = curl -s \
            -H "Authorization: Bearer tk_xxxxxxxxxxxxxxxxxxxxxxxx" \
            -H "Title: Fail2Ban: <name>" \
            -H "Priority: default" \
            -H "Tags: shield" \
            -d "Banned <ip> after <failures> failures in <name> jail" \
            https://ntfy.example.com/security

actionunban = curl -s \
              -H "Authorization: Bearer tk_xxxxxxxxxxxxxxxxxxxxxxxx" \
              -H "Title: Fail2Ban Unban: <name>" \
              -H "Priority: low" \
              -H "Tags: unlock" \
              -d "Unbanned <ip> from <name> jail" \
              https://ntfy.example.com/security
Then add the action to your jail configuration:
# /etc/fail2ban/jail.local
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 5
bantime = 3600
action = %(action_mwl)s
         ntfy
Restart Fail2Ban to apply the changes:
sudo systemctl restart fail2ban
On a public-facing VPS, SSH brute-force attempts are constant. The ntfy notifications give you a passive awareness of the threat landscape without requiring you to check logs manually. If you see a sudden spike in bans, that could indicate a targeted attack worth investigating further.
Mobile and Desktop Apps
Notifications are only useful if they reach you reliably. ntfy has native apps for every platform:
- Android — available on Google Play and F-Droid. The F-Droid version uses WebSocket delivery instead of Firebase, so it works on degoogled devices running GrapheneOS or CalyxOS.
- iOS — available on the App Store. Uses Apple Push Notification Service for reliable background delivery.
- Web — the ntfy web UI at https://ntfy.example.com works as both a dashboard and a notification client using the Web Push API.
- Desktop — the web app can be installed as a PWA (Progressive Web App) on Chrome, Edge, and Firefox for native-feeling desktop notifications.
To subscribe to your self-hosted instance in the mobile app:
- Open the ntfy app and tap the + button
- Enter your topic name (for example, infra-alerts)
- Tap the settings icon and change the default server from ntfy.sh to https://ntfy.example.com
- Enter your reader credentials when prompted
- Tap Subscribe
Repeat for each topic you want to follow. Most people create three to five subscriptions: one for critical infrastructure, one for security events, one for cron job reports, and one or two for specific applications.
Priority Levels and Rate Limiting
ntfy supports five priority levels that map directly to how aggressively the notification is delivered on mobile devices:
- min (1) — no sound, no vibration, no visual indicator. Use for purely informational messages you want logged but not displayed.
- low (2) — no sound. Good for routine success confirmations like completed backups.
- default (3) — standard notification sound. Appropriate for most alerts.
- high (4) — plays sound even in Do Not Disturb mode on most Android devices. Use for service-down alerts and disk warnings.
- urgent (5) — overrides all quiet modes, plays a persistent alarm sound. Reserve this for catastrophic events like data loss risk or complete infrastructure failure.
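When scripts decide severity at runtime, a small mapping keeps these levels consistent across your fleet. A sketch, where the severity names are a local convention rather than an ntfy concept:

```shell
# priority_for: map an internal severity name to an ntfy Priority header value
priority_for() {
  case "$1" in
    debug)    echo min ;;
    info)     echo low ;;
    notice)   echo default ;;
    warning)  echo high ;;
    critical) echo urgent ;;
    *)        echo default ;;   # unknown severities fall back to default
  esac
}

# Example: a warning maps to the high priority level
echo "Priority: $(priority_for warning)"
# prints: Priority: high
```

Centralizing the mapping means a later decision such as "warnings should no longer bypass Do Not Disturb" is a one-line change.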
To prevent a malfunctioning script from flooding your phone, configure rate limiting in server.yml:
# Add to /opt/ntfy/config/server.yml
visitor-request-limit-burst: 60
visitor-request-limit-replenish: "5s"
visitor-message-daily-limit: 5000
visitor-attachment-daily-bandwidth-limit: "500M"
These defaults allow bursts of up to 60 messages followed by a sustained rate of one message every five seconds. For a dedicated VPS running as a critical production notification hub where alert delays are unacceptable, you can raise these limits significantly — dedicated CPU and RAM ensure the server can handle the throughput without competing for resources.
Your Alert System Must Be Up When Everything Else Is Down
There is a fundamental design tension in self-hosted monitoring: the system that tells you something is broken can itself break. If ntfy runs on the same VPS as the services it monitors, a host-level failure takes down both the service and the alert system simultaneously.
Several strategies mitigate this risk:
- Run ntfy on a separate VPS — dedicate a small, inexpensive VPS to nothing but ntfy. A 1 vCPU / 1 GB RAM instance is more than sufficient. With MassiveGRID's VPS plans starting at $1.99/month, the cost of a dedicated alerting node is negligible compared to the cost of a missed incident.
- Choose a different datacenter region — if your production workloads run in New York, deploy ntfy in Frankfurt or London. MassiveGRID offers four global datacenter locations, so geographic separation is straightforward.
- Use Proxmox HA failover — MassiveGRID VPS instances run on Proxmox high-availability clusters with automatic failover. If the physical node hosting your ntfy container fails, the VM migrates to a healthy node automatically, typically within seconds.
- External health check — configure an external uptime monitor (even the free tier of an external service) to watch your ntfy instance's /v1/health endpoint. If ntfy itself goes down, you receive an alert through a completely independent channel.
- Dual notification paths — for truly critical alerts, send to both your self-hosted ntfy and a backup channel (email, SMS via a low-cost provider). This provides redundancy without giving up the privacy and control of self-hosting for the majority of your alerts.
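The dual-path idea can be sketched as a wrapper that falls back to a second channel only when the ntfy publish fails. The function names, topic, token, and the echo-based fallback below are placeholders; swap in mailx, sendmail, or an SMS gateway as your backup channel:

```shell
# ntfy_send: primary path. -f makes curl return non-zero on HTTP errors,
# so a down or misbehaving ntfy server triggers the fallback.
ntfy_send() {
  curl -sf -H "Authorization: Bearer tk_xxx" -H "Priority: urgent" \
       -d "$1" "https://ntfy.example.com/critical"
}

# mail_send: placeholder fallback; replace with your real secondary channel
mail_send() {
  echo "FALLBACK EMAIL: $1"
}

# send_critical: try ntfy first, fall back only on failure
send_critical() {
  ntfy_send "$1" > /dev/null || mail_send "$1"
}

# Usage:
# send_critical "db-01 is unreachable"
```

Keeping the two paths in separate functions makes each one independently testable and replaceable.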
The core principle is layered reliability. No single point of failure should be able to silence all of your alerts. Ceph 3x replicated NVMe storage on MassiveGRID ensures your ntfy message cache survives disk failures, and the 100% uptime SLA with automatic failover means the infrastructure underneath your notification server is as resilient as possible.
Prefer Managed Alerting?
Self-hosting ntfy gives you maximum control and privacy, but it also means you are responsible for keeping the notification server running, updated, and secured. If you would rather focus entirely on your applications and leave infrastructure monitoring to a team that handles it around the clock, MassiveGRID's fully managed hosting includes built-in infrastructure monitoring, proactive alerting, and 24/7 human support rated 9.5 out of 10.
With managed hosting, MassiveGRID's operations team monitors your servers for resource exhaustion, disk failures, network anomalies, and security events. You receive notifications about issues that affect your workloads without maintaining any of the monitoring stack yourself. For teams that do not have a dedicated DevOps engineer, or for production environments where the stakes are too high to risk a gap in coverage, managed hosting eliminates an entire class of operational burden.
For those who want the middle ground — self-managed applications on robust infrastructure — a MassiveGRID Dedicated VPS provides guaranteed CPU and RAM that are never shared with other tenants. That makes it the ideal platform for a notification hub that must respond instantly regardless of what other workloads are doing on the same physical host. Combined with a self-hosted ntfy instance, you get the reliability of dedicated resources with the flexibility of running your own alerting pipeline.