Every time a visitor lands on your website, Google Analytics records their behavior — and Google gets to keep that data forever. They use it to build advertising profiles, feed it into their machine learning models, and monetize your audience without your consent. Beyond the ethical concerns, this creates real legal liability: GDPR in Europe, CCPA in California, and a growing patchwork of privacy laws worldwide all impose strict requirements on how you collect and process visitor data. Many site owners now face a choice between adding ugly cookie consent banners, paying for expensive compliance tools, or simply finding a better way to understand their traffic.
Self-hosted analytics solves all of these problems at once. When you run your own analytics platform, the data never leaves your server. There are no third-party cookies, no cross-site tracking, and no need for cookie consent banners. You get the insights you need — pageviews, referrers, device types, geographic distribution — without compromising your visitors' privacy or your legal standing.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Umami vs Plausible vs Matomo: Choosing the Right Self-Hosted Analytics
Three self-hosted analytics platforms dominate the open-source landscape. Each has distinct strengths, and choosing the right one depends on your priorities.
| Feature | Umami | Plausible | Matomo |
|---|---|---|---|
| Script size | ~2 KB | ~1 KB | ~22 KB |
| Database | PostgreSQL or MySQL | ClickHouse + PostgreSQL | MySQL/MariaDB |
| RAM usage (idle) | ~150 MB | ~500 MB (ClickHouse) | ~200 MB |
| Cookie-free | Yes | Yes | Optional |
| Google Analytics import | No | Yes | Yes |
| Custom events | Yes | Yes (paid cloud) | Yes |
| Multi-site support | Yes | Yes | Yes |
| License | MIT | AGPL-3.0 | GPL-3.0 |
| Best for | Simple, lightweight analytics | GA replacement with imports | Enterprise-grade analytics |
Umami wins for most self-hosters because it has the lowest resource requirements, the simplest setup (single Docker container plus a database), and provides everything most website owners actually need. Matomo tries to replicate Google Analytics feature-for-feature, which makes it bloated. Plausible requires ClickHouse, which adds significant memory overhead. Umami gives you clean, actionable data with minimal infrastructure.
Prerequisites
Before starting, you need:
- An Ubuntu VPS with at least 1 vCPU and 1 GB RAM — Umami runs beautifully on minimal resources, and a Cloud VPS with 1 vCPU / 1GB RAM handles analytics for sites with up to 100K monthly pageviews
- Docker and Docker Compose installed — follow our Docker installation guide if you haven't set this up yet
- A domain name (e.g., analytics.yourdomain.com) with a DNS A record pointing to your VPS IP
- Nginx installed for reverse proxying — see our Nginx reverse proxy guide
Verify Docker is running:
docker --version
docker compose version
Docker Compose Setup: Umami + PostgreSQL
Create a directory for your Umami deployment:
sudo mkdir -p /opt/umami
cd /opt/umami
Generate a secure random string for the app secret. This is used to hash session data and sign tokens:
openssl rand -base64 32
Copy that output — you'll need it in the next step. Now create the Docker Compose file:
sudo nano /opt/umami/docker-compose.yml
Add the following configuration:
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    container_name: umami
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      DATABASE_URL: postgresql://umami:your_secure_db_password@db:5432/umami
      DATABASE_TYPE: postgresql
      APP_SECRET: your_generated_secret_here
      TRACKER_SCRIPT_NAME: custom-analytics
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3

  db:
    image: postgres:16-alpine
    container_name: umami-db
    volumes:
      - umami-db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: your_secure_db_password
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U umami"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  umami-db-data:
Security note: The TRACKER_SCRIPT_NAME environment variable renames the tracking script from the default script.js to custom-analytics. This prevents ad blockers from blocking your self-hosted analytics by pattern-matching on the default filename. Choose any name you like.
Replace your_secure_db_password in both the umami and db service configurations with a strong password, and replace your_generated_secret_here with the output from the openssl command earlier. Note that we bind port 3000 to 127.0.0.1 only — this ensures Umami is only accessible through Nginx, not directly from the internet.
Start the stack:
cd /opt/umami
sudo docker compose up -d
Check that both containers are healthy:
sudo docker compose ps
You should see both umami and umami-db with a status of "Up" and "healthy". Umami runs its database migrations automatically on first start, so give it 30-60 seconds before proceeding.
Verify Umami is responding locally:
curl -s http://127.0.0.1:3000/api/heartbeat
You should get a 200 response with ok.
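If you'd rather script the wait than watch the clock, you can poll the heartbeat endpoint until it answers. Here's a small sketch — the `wait_for_umami` helper name, attempt count, and delay are arbitrary choices of this guide, not part of Umami:

```shell
# Poll Umami's heartbeat endpoint until it responds, or give up.
# Usage: wait_for_umami [url] [attempts] [delay_seconds]
wait_for_umami() {
  local url="${1:-http://127.0.0.1:3000/api/heartbeat}"
  local attempts="${2:-12}" delay="${3:-5}"
  local i
  for i in $(seq 1 "$attempts"); do
    if curl -sf --max-time 2 "$url" > /dev/null; then
      echo "Umami is up (attempt $i)"
      return 0
    fi
    sleep "$delay"
  done
  echo "Umami did not respond after $((attempts * delay))s"
  return 1
}
```

Calling `wait_for_umami` with no arguments retries every 5 seconds for up to a minute, which comfortably covers the first-start migration window.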
Nginx Reverse Proxy with SSL
Now we'll set up Nginx as a reverse proxy in front of Umami and secure it with a free Let's Encrypt SSL certificate. If you need a full walkthrough, see our Let's Encrypt SSL guide.
Create the Nginx configuration file:
sudo nano /etc/nginx/sites-available/analytics.yourdomain.com
Add the following server block:
server {
    listen 80;
    server_name analytics.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support for real-time dashboard
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Enable the site and test the configuration:
sudo ln -s /etc/nginx/sites-available/analytics.yourdomain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Now obtain an SSL certificate with Certbot:
sudo certbot --nginx -d analytics.yourdomain.com
Certbot will automatically modify your Nginx configuration to handle HTTPS and redirect HTTP traffic. After it completes, verify the final configuration:
sudo nginx -t
sudo systemctl reload nginx
Open https://analytics.yourdomain.com in your browser. You should see the Umami login page.
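You can also check from the shell. This tiny helper just prints the HTTP status line for any URL (nothing Umami-specific; the helper name is our own, and `analytics.yourdomain.com` stands in for your hostname):

```shell
# Print the first line of the HTTP response headers (empty if unreachable).
# Usage: https_status https://analytics.yourdomain.com/api/heartbeat
https_status() {
  curl -sI --max-time 5 "$1" | head -n 1
}
```

A healthy deployment returns a 200-series status line; an empty result means the host is unreachable or the TLS handshake failed.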
First Login and Adding Your Website
Log in with the default credentials:
- Username: admin
- Password: umami
Change the password immediately. Click the user icon in the top-right corner, go to Profile, and set a strong password.
To add your first website:
- Navigate to Settings → Websites
- Click Add website
- Enter the website name (for your reference) and the domain (e.g., yourdomain.com)
- Click Save
After saving, Umami generates a unique tracking code. Click the Edit button next to your site, then go to the Tracking code tab to find it.
Adding the Tracking Script to Your Site
Copy the tracking script from Umami's dashboard. It looks like this:
<script defer src="https://analytics.yourdomain.com/custom-analytics.js" data-website-id="your-website-id"></script>
Notice that the script filename matches the TRACKER_SCRIPT_NAME we set in the Docker Compose configuration. Place this in the <head> section of every page you want to track.
For a Hugo site, add it to your base template (e.g., layouts/partials/head.html):
<!-- Self-hosted analytics -->
<script defer src="https://analytics.yourdomain.com/custom-analytics.js" data-website-id="a1b2c3d4-e5f6-7890-abcd-ef1234567890"></script>
For a WordPress site, add it via the theme's functions.php:
function add_umami_analytics() {
    echo '<script defer src="https://analytics.yourdomain.com/custom-analytics.js" data-website-id="a1b2c3d4-e5f6-7890-abcd-ef1234567890"></script>';
}
add_action('wp_head', 'add_umami_analytics');
For a Next.js app, add it to app/layout.tsx (App Router) or pages/_app.tsx (Pages Router) using the next/script component:
import Script from 'next/script'

export default function RootLayout({ children }) {
  return (
    <html>
      <head>
        <Script
          src="https://analytics.yourdomain.com/custom-analytics.js"
          data-website-id="a1b2c3d4-e5f6-7890-abcd-ef1234567890"
          strategy="afterInteractive"
        />
      </head>
      <body>{children}</body>
    </html>
  )
}
After adding the script, visit your site in a browser and check the Umami dashboard. You should see real-time activity within seconds.
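If the dashboard stays empty, first confirm the snippet actually made it into your pages. Here's a grep-based sketch (the `check_tracker` helper name is ours; pass it a saved copy of a page's HTML and the website ID from your dashboard):

```shell
# Check a saved HTML page for the Umami snippet with the expected website ID.
# Usage: check_tracker page.html <website-id>
check_tracker() {
  if grep -o 'data-website-id="[^"]*"' "$1" | grep -q "$2"; then
    echo "tracker found"
  else
    echo "tracker missing"
  fi
}
```

For example: `curl -s https://yourdomain.com > page.html && check_tracker page.html a1b2c3d4-e5f6-7890-abcd-ef1234567890`.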
Multi-Site Tracking
One of Umami's best features is multi-site support from a single installation. Each website gets its own tracking ID and isolated dashboard. To add more sites, go to Settings → Websites → Add website and repeat the process for each domain.
You can track as many sites as your server can handle. For a handful of low-traffic sites, a basic VPS is fine; just keep an eye on database load as you add higher-traffic properties.
Tracking many sites? Tracking 10+ websites from one Umami instance means more concurrent database queries. Dedicated resources ensure your analytics dashboard stays responsive even during traffic spikes across all your properties.
Each site's tracking script is independent. You can even track sites hosted on different platforms — WordPress on one server, a static site on another, a Next.js app on Vercel — all feeding data back to your single Umami instance.
Custom Events and Goals
Umami tracks pageviews automatically, but custom events let you measure specific user actions like button clicks, form submissions, and downloads. You implement events by adding data-umami-event attributes to HTML elements:
<!-- Track button clicks -->
<button data-umami-event="signup-button-click">Sign Up</button>

<!-- Track with additional properties -->
<button data-umami-event="pricing-click" data-umami-event-plan="pro" data-umami-event-interval="annual">
  Choose Pro Plan
</button>

<!-- Track link clicks -->
<a href="/download/whitepaper.pdf" data-umami-event="whitepaper-download">Download Whitepaper</a>

<!-- Track form submissions -->
<form data-umami-event="contact-form-submit">
  <!-- form fields -->
</form>
For more complex tracking, use the JavaScript API:
// Track a custom event programmatically
umami.track('newsletter-signup', { source: 'blog-sidebar' });

// Track after a specific user action
document.getElementById('video-player').addEventListener('ended', function() {
  umami.track('video-completed', { video: 'product-demo' });
});

// Track scroll depth
let tracked = { 25: false, 50: false, 75: false, 100: false };
window.addEventListener('scroll', function() {
  const percent = Math.round(
    (window.scrollY / (document.body.scrollHeight - window.innerHeight)) * 100
  );
  [25, 50, 75, 100].forEach(threshold => {
    if (percent >= threshold && !tracked[threshold]) {
      tracked[threshold] = true;
      umami.track('scroll-depth', { depth: threshold + '%' });
    }
  });
});
Custom events appear in the Umami dashboard under the Events tab for each website. You can filter by event name and see trends over time.
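You can also pull event counts over Umami's REST API (covered in detail later in this guide). This helper just assembles the metrics URL; `type=event` selects custom-event metrics, and the base URL and website ID arguments are placeholders for your own values:

```shell
# Build the Umami metrics API URL for custom events over the last N days.
# Usage: event_metrics_url <base-url> <website-id> <days>
event_metrics_url() {
  local start end
  start=$(date -d "$3 days ago" +%s000)   # epoch milliseconds (GNU date)
  end=$(date +%s000)
  echo "$1/api/websites/$2/metrics?startAt=$start&endAt=$end&type=event"
}
```

For example: `curl -s "$(event_metrics_url https://analytics.yourdomain.com YOUR_WEBSITE_ID 7)" -H "Authorization: Bearer YOUR_TOKEN"`.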
PostgreSQL Backup for Analytics Data
Your analytics data lives in the PostgreSQL container. Losing it means losing your entire history. Set up automated backups with a simple cron job. For a comprehensive backup strategy, see our Ubuntu VPS automatic backups guide.
Create a backup script:
sudo nano /opt/umami/backup.sh
Add the following:
#!/bin/bash
BACKUP_DIR="/opt/umami/backups"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Dump the PostgreSQL database from the Docker container
docker exec umami-db pg_dump -U umami -d umami -Fc > "$BACKUP_DIR/umami_$TIMESTAMP.dump"
# Compress the dump
gzip "$BACKUP_DIR/umami_$TIMESTAMP.dump"
# Remove backups older than retention period
find "$BACKUP_DIR" -name "umami_*.dump.gz" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: umami_$TIMESTAMP.dump.gz"
Make it executable and schedule it with cron:
sudo chmod +x /opt/umami/backup.sh
# Run daily at 3 AM
sudo crontab -e
Add this line:
0 3 * * * /opt/umami/backup.sh >> /var/log/umami-backup.log 2>&1
For more on cron scheduling, see our cron jobs and task scheduling guide.
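It's worth confirming the archives are actually readable before you need them. `gzip -t` tests integrity without extracting; the `verify_backup` helper below is a sketch you can run by hand or append to the backup script:

```shell
# Verify a gzip backup archive without extracting it.
# Usage: verify_backup /opt/umami/backups/umami_YYYYMMDD_HHMMSS.dump.gz
verify_backup() {
  if gzip -t "$1" 2>/dev/null; then
    echo "backup OK: $1"
  else
    echo "backup CORRUPT: $1"
  fi
}
```

For example, `verify_backup "$(ls -t /opt/umami/backups/umami_*.dump.gz | head -1)"` checks the most recent archive.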
To restore from a backup:
# Stop Umami but keep the database running
cd /opt/umami
sudo docker compose stop umami
# Restore the database
gunzip -k backups/umami_20260228_030000.dump.gz
sudo docker exec -i umami-db pg_restore -U umami -d umami --clean --if-exists < backups/umami_20260228_030000.dump
# Restart Umami
sudo docker compose start umami
Data Retention and Database Management
Over time, your analytics database will grow. Umami stores every pageview and event individually, so high-traffic sites can accumulate significant data. Here's how to manage it.
Check your current database size:
sudo docker exec umami-db psql -U umami -d umami -c "
SELECT
  pg_size_pretty(pg_database_size('umami')) as total_size;
"
Check the size of individual tables:
sudo docker exec umami-db psql -U umami -d umami -c "
SELECT
  relname as table_name,
  pg_size_pretty(pg_total_relation_size(relid)) as total_size,
  n_live_tup as row_count
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
"
The website_event table will be your largest. If you want to implement data retention (e.g., keep only the last 12 months), create a cleanup script:
sudo nano /opt/umami/cleanup.sh
#!/bin/bash
# Delete analytics data older than 365 days
docker exec umami-db psql -U umami -d umami -c "
DELETE FROM website_event
WHERE created_at < NOW() - INTERVAL '365 days';
"
# Mark the deleted rows' space as reusable and refresh planner statistics
docker exec umami-db psql -U umami -d umami -c "VACUUM ANALYZE;"
echo "Cleanup completed at $(date)"
sudo chmod +x /opt/umami/cleanup.sh
Schedule it to run monthly:
0 4 1 * * /opt/umami/cleanup.sh >> /var/log/umami-cleanup.log 2>&1
For PostgreSQL performance tuning, you can adjust the container's configuration. Create a custom PostgreSQL config:
sudo nano /opt/umami/postgresql.conf
# Tuned for 1GB RAM VPS
shared_buffers = 256MB
effective_cache_size = 512MB
work_mem = 4MB
maintenance_work_mem = 64MB
wal_buffers = 8MB
Mount it in the Docker Compose file by adding a volume to the db service:
    volumes:
      - umami-db-data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
For more on PostgreSQL, see our PostgreSQL installation guide.
Public Dashboards
Umami lets you share your analytics publicly without giving anyone admin access. This is useful for open-source projects, transparency reports, or sharing stats with clients.
To create a public dashboard:
- Go to Settings → Websites
- Click Edit next to your website
- Find the Share URL toggle and enable it
- Copy the generated share URL
The public URL looks like https://analytics.yourdomain.com/share/abc123/Your-Site. Anyone with this link can view the dashboard but cannot access settings, other sites, or admin functionality.
You can embed the public dashboard in another page using an iframe:
<iframe
  src="https://analytics.yourdomain.com/share/abc123/Your-Site"
  style="width: 100%; height: 800px; border: none;"
  title="Website Analytics"
></iframe>
Updating Umami
Umami is actively developed and receives regular updates. To update to the latest version:
cd /opt/umami
# Pull the latest images
sudo docker compose pull
# Restart with the new images
sudo docker compose up -d
# Verify the update
sudo docker compose logs umami | tail -20
Umami handles database migrations automatically, so upgrading is usually seamless. Still, always take a backup before upgrading:
# Backup before upgrading
sudo /opt/umami/backup.sh
# Then upgrade
sudo docker compose pull
sudo docker compose up -d
Monitoring and Troubleshooting
Check container logs if something isn't working:
# Umami application logs
sudo docker compose logs -f umami
# PostgreSQL logs
sudo docker compose logs -f db
# Check resource usage
sudo docker stats umami umami-db --no-stream
Common issues and fixes:
- Tracking script blocked by ad blocker: Change TRACKER_SCRIPT_NAME to something non-obvious (avoid words like "analytics", "track", "stats")
- Dashboard shows no data: Check the browser console for CORS errors. Ensure your Nginx configuration passes the correct headers
- High memory usage: Tune PostgreSQL settings as shown above, or implement data retention
- Slow dashboard with large datasets: Add database indexes or consider upgrading your VPS resources
For comprehensive monitoring of your entire VPS, see our monitoring setup guide.
Using the Umami API
Umami exposes a REST API that lets you pull analytics data programmatically. This is useful for building custom dashboards, generating automated reports, or integrating analytics data into other tools.
First, authenticate and get a token:
# Get an authentication token
curl -X POST https://analytics.yourdomain.com/api/auth/login \
-H "Content-Type: application/json" \
-d '{"username": "admin", "password": "your_password"}'
The response includes a token you'll use for subsequent requests:
{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "user": {
    "id": "...",
    "username": "admin"
  }
}
Query your analytics data:
# Get website stats for the last 24 hours
curl -s "https://analytics.yourdomain.com/api/websites/YOUR_WEBSITE_ID/stats?startAt=$(date -d '24 hours ago' +%s000)&endAt=$(date +%s000)" \
-H "Authorization: Bearer YOUR_TOKEN" | python3 -m json.tool
# Get pageview data
curl -s "https://analytics.yourdomain.com/api/websites/YOUR_WEBSITE_ID/pageviews?startAt=$(date -d '7 days ago' +%s000)&endAt=$(date +%s000)&unit=day" \
-H "Authorization: Bearer YOUR_TOKEN" | python3 -m json.tool
# Get top referrers
curl -s "https://analytics.yourdomain.com/api/websites/YOUR_WEBSITE_ID/metrics?startAt=$(date -d '30 days ago' +%s000)&endAt=$(date +%s000)&type=referrer" \
-H "Authorization: Bearer YOUR_TOKEN" | python3 -m json.tool
You can build a simple daily report script that emails you a summary:
sudo nano /opt/umami/daily-report.sh
#!/bin/bash
UMAMI_URL="https://analytics.yourdomain.com"
USERNAME="admin"
PASSWORD="your_password"
WEBSITE_ID="your-website-id"
EMAIL="you@yourdomain.com"
# Authenticate
TOKEN=$(curl -s -X POST "$UMAMI_URL/api/auth/login" \
-H "Content-Type: application/json" \
-d "{\"username\": \"$USERNAME\", \"password\": \"$PASSWORD\"}" | \
python3 -c "import sys,json; print(json.load(sys.stdin)['token'])")
# Get yesterday's stats
START=$(date -d 'yesterday 00:00:00' +%s000)
END=$(date -d 'today 00:00:00' +%s000)
STATS=$(curl -s "$UMAMI_URL/api/websites/$WEBSITE_ID/stats?startAt=$START&endAt=$END" \
-H "Authorization: Bearer $TOKEN")
PAGEVIEWS=$(echo "$STATS" | python3 -c "import sys,json; print(json.load(sys.stdin)['pageviews']['value'])")
VISITORS=$(echo "$STATS" | python3 -c "import sys,json; print(json.load(sys.stdin)['visitors']['value'])")
BOUNCES=$(echo "$STATS" | python3 -c "import sys,json; print(json.load(sys.stdin)['bounces']['value'])")
# Send email
echo "Daily Analytics Report for $(date -d 'yesterday' +%Y-%m-%d)
Pageviews: $PAGEVIEWS
Unique Visitors: $VISITORS
Bounces: $BOUNCES" | mail -s "Analytics Report - $(date -d 'yesterday' +%Y-%m-%d)" "$EMAIL"
sudo chmod +x /opt/umami/daily-report.sh
# Schedule daily at 7 AM
# Add to crontab:
0 7 * * * /opt/umami/daily-report.sh >> /var/log/umami-report.log 2>&1
Migrating from Google Analytics
If you're switching from Google Analytics to Umami, here are the practical steps for a clean transition:
- Deploy Umami first — Get your self-hosted instance running and verified before removing Google Analytics
- Run both in parallel — Add the Umami tracking script alongside your existing GA script for 1-2 weeks to verify data consistency
- Compare numbers — Umami and GA will never show identical numbers (different tracking methodologies, ad blocker impact, bot filtering), but they should be within 10-20% of each other
- Remove Google Analytics — Once you're confident in Umami's data, remove the GA tracking script and any associated cookie consent banners
- Export historical GA data — Before removing GA, export your historical data from Google Analytics for your records. Umami cannot import GA history
What you'll gain by switching:
- No cookie consent banner needed — Umami doesn't use cookies, so GDPR/CCPA consent requirements don't apply to your analytics
- Faster page loads — Umami's ~2 KB script versus GA's ~45 KB script (with gtag.js)
- Cleaner data — No spam referrers, no bot traffic inflation, no sampled data
- Full data ownership — Your data stays in your PostgreSQL database, exportable at any time
What you'll lose:
- Audience demographics — Umami doesn't track age, gender, or interest categories (by design)
- Conversion funnels — No built-in funnel visualization (though you can approximate this with custom events)
- Google Ads integration — No direct integration with advertising platforms
- Real-time user count — Umami shows recent activity but not a live concurrent user count like GA4
For most website owners, the data Umami provides — pageviews, unique visitors, referral sources, device types, geographic distribution, and custom events — covers 90% of what you actually look at in Google Analytics. The other 10% is typically features you configured once and never used again.
Prefer managed analytics hosting? If you want analytics infrastructure without the server management overhead, MassiveGRID's managed dedicated cloud servers give you the performance and uptime of enterprise infrastructure with full management — we handle updates, backups, security, and scaling so you can focus on your data.
Final Thoughts
Self-hosting Umami gives you everything you actually need from web analytics — pageviews, referrers, device breakdowns, geographic data, custom events — without the privacy baggage of Google Analytics. Your data stays on your server, you don't need cookie consent banners, and your visitors' browsing habits aren't being sold to advertisers.
The entire stack runs on minimal resources. A MassiveGRID Cloud VPS with 1 vCPU and 1 GB RAM is enough to track sites with up to 100K monthly pageviews. With the backup strategy and data retention policies covered in this guide, you have a production-ready analytics platform that can run for years with minimal maintenance.
For your next steps, consider setting up log management to complement your analytics with server-side data, or explore our guide on optimizing Ubuntu VPS performance to ensure your analytics server runs at peak efficiency.