When something goes wrong on your Ubuntu VPS — a site that won't load, a service that keeps crashing, a mysterious slowdown at 2 AM — the answers are always in the logs. The challenge isn't that the information doesn't exist; it's knowing where to look and how to read what you find. Ubuntu generates thousands of log entries every hour across dozens of files, and the difference between a 5-minute fix and an all-night debugging session often comes down to how quickly you can navigate this log landscape.

This guide is your complete reference for finding, reading, and acting on Ubuntu server logs. We'll cover every major log source, walk through real troubleshooting workflows, and show you how to manage logs so they don't consume your disk. Whether you're investigating a security incident, diagnosing a performance problem, or figuring out why Nginx is returning 502 errors, this is where you start.

MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10

Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything

On a MassiveGRID Cloud VPS, infrastructure-level events are monitored by our team. This guide covers the OS and application logs you're responsible for.

The Log Landscape: Where Every Log Lives on Ubuntu

Ubuntu uses two parallel logging systems. Understanding both is essential before you start troubleshooting anything.

systemd journal (journald) — The modern, structured logging system. Every service managed by systemd sends its output here. The journal is binary (not plain text) and accessed through the journalctl command. It captures stdout/stderr from all services, kernel messages, and boot information.

/var/log/ directory — The traditional text-based log directory. Services like Nginx, MySQL, and the system's rsyslog daemon write plain text files here. These are the logs you can read with cat, tail, grep, and less.

Most services write to both systems. When Nginx logs an error, it goes to the journal (because systemd captures its output) and to /var/log/nginx/error.log (because Nginx's own configuration directs it there). This redundancy is actually helpful — you can use whichever interface is more convenient for your current task.
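You can see this duality directly: the same Nginx event is visible through either interface.

```shell
# Via the journal (systemd captures the service's stdout/stderr):
journalctl -u nginx --no-pager -n 5

# Via the file Nginx writes directly:
sudo tail -n 5 /var/log/nginx/error.log
```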

Here's a high-level map of where to look for what:

Problem Area | Primary Log Source | Location
System startup / shutdown | journalctl -b | systemd journal
Service crashes | journalctl -u service-name | systemd journal
SSH / login attempts | auth.log | /var/log/auth.log
Kernel / hardware | kern.log / dmesg | /var/log/kern.log
Web server issues | Nginx/Apache logs | /var/log/nginx/
Database issues | MySQL error log | /var/log/mysql/
Package installations | dpkg.log / apt history | /var/log/dpkg.log
Firewall events | ufw.log | /var/log/ufw.log
Application errors | Application-specific logs | Varies by app

journalctl: The Unified Log Interface

If you learn only one log tool, make it journalctl. It provides a single interface to query logs from every systemd-managed service, the kernel, and the boot process. Here are the essential commands you'll use daily.

View Recent Logs

# Show all logs from the current boot
journalctl -b

# Show only the last 100 lines
journalctl -n 100

# Follow logs in real-time (like tail -f)
journalctl -f

# Show logs from the previous boot (useful after a crash/reboot)
journalctl -b -1

Filter by Service

# All logs for Nginx
journalctl -u nginx

# All logs for MySQL
journalctl -u mysql

# All logs for SSH
journalctl -u ssh

# Follow a specific service in real-time
journalctl -u nginx -f

# Last 50 lines from PHP-FPM
journalctl -u php8.3-fpm -n 50

Filter by Time

# Logs from the last hour
journalctl --since "1 hour ago"

# Logs from a specific time range
journalctl --since "2026-02-28 14:00:00" --until "2026-02-28 16:00:00"

# Logs since yesterday
journalctl --since yesterday

# Combine time and service filters
journalctl -u nginx --since "30 min ago"

Filter by Priority

Log priorities range from 0 (emergency) to 7 (debug). For troubleshooting, you usually want errors and above:

# Only errors and more severe messages
journalctl -p err

# Only critical, alert, and emergency
journalctl -p crit

# Warnings and above for a specific service
journalctl -u mysql -p warning

# Priority levels:
# 0 = emerg, 1 = alert, 2 = crit, 3 = err
# 4 = warning, 5 = notice, 6 = info, 7 = debug

Output Formatting

# JSON output (useful for parsing)
journalctl -u nginx -o json-pretty -n 5

# Short output with timestamps (default-like but more precise)
journalctl -u nginx -o short-precise

# Show only the message field (no metadata)
journalctl -u nginx -o cat

# Check how much disk the journal uses
journalctl --disk-usage

The /var/log/ Directory Map

Run ls -la /var/log/ on any Ubuntu server and you'll see dozens of files. Here's what the important ones contain and when you need them.

Core System Logs

/var/log/syslog — The general-purpose system log. If you're not sure where to look, start here. It contains messages from most services and system components.

# Recent syslog entries
tail -100 /var/log/syslog

# Search for a specific service or keyword
grep "nginx" /var/log/syslog

# Watch syslog in real time
tail -f /var/log/syslog

/var/log/auth.log — Every authentication event: SSH logins (successful and failed), sudo usage, user additions. This is the first place to look when investigating security events. See our security hardening guide for context on what suspicious entries look like.

# See all failed SSH login attempts
grep "Failed password" /var/log/auth.log

# See successful SSH logins
grep "Accepted" /var/log/auth.log

# See sudo usage
grep "sudo" /var/log/auth.log

# Count failed logins by IP address
grep "Failed password" /var/log/auth.log | grep -oP '\d+\.\d+\.\d+\.\d+' | sort | uniq -c | sort -rn | head -20

/var/log/kern.log — Kernel messages including hardware errors, driver issues, and out-of-memory (OOM) killer events. Check this when you suspect hardware or memory problems.

# Check for OOM killer events (the kernel killed a process due to memory)
grep -i "oom" /var/log/kern.log

# Check for disk errors
grep -i "error" /var/log/kern.log | grep -i "sd\|disk\|nvme"

# Recent kernel messages (alternative to kern.log)
dmesg | tail -50
dmesg -T | tail -50  # -T shows human-readable timestamps

/var/log/dpkg.log — Every package installation, removal, and upgrade. Essential when troubleshooting issues that started after a system update.

# What packages were installed/updated recently?
tail -50 /var/log/dpkg.log

# When was Nginx last updated?
grep "nginx" /var/log/dpkg.log

# Full apt history (more detailed)
cat /var/log/apt/history.log

/var/log/ufw.log — If you're using UFW (and you should be — see our UFW guide), blocked connections are logged here.

# Recent blocked connections
tail -50 /var/log/ufw.log

# Count blocked IPs
grep "BLOCK" /var/log/ufw.log | grep -oP 'SRC=\K\S+' | sort | uniq -c | sort -rn | head -20

Nginx Logs: Troubleshooting Web Issues

Nginx maintains two log files that are critical for web troubleshooting. If you followed our LEMP stack guide or Nginx reverse proxy guide, these are already active on your server.

Access Log

Located at /var/log/nginx/access.log, this records every HTTP request your server processes.

# Recent requests
tail -50 /var/log/nginx/access.log

# Find all 404 errors
grep " 404 " /var/log/nginx/access.log

# Find all 500 errors
grep " 500 " /var/log/nginx/access.log

# Find all 502 Bad Gateway errors
grep " 502 " /var/log/nginx/access.log

# Top 20 most requested URLs
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Top 20 IPs by number of requests
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Requests per status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Requests per minute (to spot traffic spikes)
awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-3 | uniq -c | tail -30

Error Log

Located at /var/log/nginx/error.log, this is where Nginx reports configuration problems, upstream failures, and permission issues.

# Recent errors
tail -50 /var/log/nginx/error.log

# Watch errors in real time while testing
tail -f /var/log/nginx/error.log

# Common patterns to search for:
grep "connect() failed" /var/log/nginx/error.log      # upstream down
grep "permission denied" /var/log/nginx/error.log      # file permissions
grep "no live upstreams" /var/log/nginx/error.log       # all backends down
grep "client intended to send too large body" /var/log/nginx/error.log  # upload size

Per-site logs are also common. If your Nginx virtual host configuration specifies custom log paths:

server {
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
}
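After adding custom log paths, validate the configuration and confirm the new files appear (a quick sketch; the example.com paths match the illustrative config above):

```shell
# Validate the configuration, then reload Nginx to start writing the new logs
sudo nginx -t && sudo systemctl reload nginx

# Confirm both per-site logs now exist and are growing
ls -lh /var/log/nginx/example.com.*
```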

MySQL/MariaDB Logs

Database logs are essential for tracking down slow queries, connection issues, and data corruption. For performance tuning context, see our MySQL/MariaDB performance guide.

Error Log

# MySQL error log location
tail -100 /var/log/mysql/error.log

# Common errors to look for:
grep -i "error" /var/log/mysql/error.log | tail -20
grep "Too many connections" /var/log/mysql/error.log
grep "Aborted connection" /var/log/mysql/error.log
grep "InnoDB" /var/log/mysql/error.log | grep -i "error"

Slow Query Log

The slow query log isn't enabled by default. Enable it to find queries that need optimization:

# Enable slow query log in /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2    # Log queries taking longer than 2 seconds
log_queries_not_using_indexes = 1
# Restart MySQL to apply
sudo systemctl restart mysql

# View slow queries
tail -50 /var/log/mysql/mysql-slow.log

# Use mysqldumpslow to summarize (sorts by average query time by default; -t 10 limits to the top 10)
sudo mysqldumpslow -t 10 /var/log/mysql/mysql-slow.log
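To confirm the settings actually took effect, check both the config file and the running server (the mysql commands assume you can authenticate, for example as root via auth_socket):

```shell
# Confirm the directives landed in the config file
grep -E "slow_query_log|long_query_time" /etc/mysql/mysql.conf.d/mysqld.cnf

# Confirm at runtime, and see how many slow queries have been recorded
mysql -e "SHOW VARIABLES LIKE 'slow_query_log%';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Slow_queries';"
```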

PHP-FPM Logs

PHP-FPM logs are essential for debugging application errors on LEMP stacks. PHP-FPM has two types of log output.

FPM Master Process Log

# PHP-FPM's own log (pool starts/stops, worker issues)
tail -50 /var/log/php8.3-fpm.log

# Or via journalctl
journalctl -u php8.3-fpm -n 50

PHP Error Logs

PHP errors from your application are typically sent to a pool-specific log file or to Nginx's error log. Check your pool configuration:

# View pool configuration
cat /etc/php/8.3/fpm/pool.d/www.conf | grep -i log

# Common PHP error log locations:
# /var/log/php8.3-fpm.log (main FPM log)
# /var/log/nginx/error.log (if catch_workers_output = yes)
# Custom path if php_admin_value[error_log] is set in pool config

To ensure PHP errors are properly logged, verify these settings:

# In /etc/php/8.3/fpm/pool.d/www.conf, add or verify:
catch_workers_output = yes
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /var/log/php-errors.log
# Create and set permissions
sudo touch /var/log/php-errors.log
sudo chown www-data:www-data /var/log/php-errors.log
sudo systemctl restart php8.3-fpm

Application Logs: PM2, Gunicorn, and Docker

Application-level logs depend on your deployment stack. Here are the common ones.

PM2 (Node.js)

If you followed our Node.js PM2 deployment guide, PM2 manages logs per application:

# View all PM2 managed app logs
pm2 logs

# View logs for a specific app
pm2 logs my-app

# View last 200 lines
pm2 logs my-app --lines 200

# Clear all logs (when they get too large)
pm2 flush

# Log file locations
ls ~/.pm2/logs/
# my-app-out.log   (stdout)
# my-app-error.log (stderr)
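pm2 flush is manual; for automatic rotation, the widely used third-party pm2-logrotate module can take over (module name and option keys per its documentation; the values below are illustrative):

```shell
# Install the rotation module once
pm2 install pm2-logrotate

# Rotate at 10 MB, keep 7 rotated files, compress old ones
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7
pm2 set pm2-logrotate:compress true
```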

Gunicorn (Python)

If you followed our Python Gunicorn deployment guide, Gunicorn sends logs to the journal by default when run as a systemd service:

# View Gunicorn logs via journalctl
journalctl -u gunicorn -n 100

# If configured with custom log files:
tail -f /var/log/gunicorn/access.log
tail -f /var/log/gunicorn/error.log

Docker

If you're running containerized applications (see our Docker installation guide), each container has its own log stream:

# View logs for a container
docker logs container_name

# Follow container logs in real-time
docker logs -f container_name

# Show last 100 lines with timestamps
docker logs --tail 100 -t container_name

# View logs for Docker Compose services
docker compose logs
docker compose logs -f web
docker compose logs --tail 50 db

# Find where Docker stores logs on disk
docker inspect --format='{{.LogPath}}' container_name
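Under the default json-file driver, those on-disk logs grow without bound. A hedged daemon-wide cap, set in /etc/docker/daemon.json (values are illustrative; restart Docker afterwards, and note it only affects newly created containers):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

The same limits can be set per container with docker run --log-opt max-size=10m --log-opt max-file=3.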

Troubleshooting Workflow #1: "Why Is My Site Down?"

Your monitoring alert fires, or a user reports your site is down. Here's the systematic log-reading workflow to find the problem fast.

Step 1: Check if Nginx is Running

sudo systemctl status nginx

If the status shows "inactive" or "failed," check why it stopped:

journalctl -u nginx --since "10 min ago"

Common reasons: configuration syntax error after a recent change, port 80/443 already in use, or the service was manually stopped.
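Two quick checks cover the first two causes (a sketch; ss output formatting varies slightly between versions):

```shell
# A syntax error in the config is the most common culprit after an edit
sudo nginx -t

# Is something else already bound to port 80 or 443?
sudo ss -tlnp | grep -E ':80 |:443 '
```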

Step 2: Check Nginx Error Log

sudo tail -30 /var/log/nginx/error.log

Look for these patterns:

"connect() failed": the upstream service (PHP-FPM, Gunicorn, Node) is down or unreachable
"permission denied": Nginx can't read a file, directory, or socket
"no live upstreams": every backend in the upstream pool is currently failing
"client intended to send too large body": an upload exceeded client_max_body_size

Step 3: Check the Backend Service

# For PHP-FPM
sudo systemctl status php8.3-fpm
journalctl -u php8.3-fpm --since "10 min ago"

# For Node.js (PM2)
pm2 status
pm2 logs --lines 50

# For Python (Gunicorn)
sudo systemctl status gunicorn
journalctl -u gunicorn --since "10 min ago"

Step 4: Check System Resources

# Are we out of memory? Check for OOM kills
dmesg -T | grep -i "oom\|killed process" | tail -10

# Is the disk full? (common cause of service failures)
df -h

# Check /var/log specifically
du -sh /var/log/

Step 5: Check Database

sudo systemctl status mysql
tail -20 /var/log/mysql/error.log

A common scenario: MySQL crashes due to OOM, which causes PHP to return 500 errors, which causes Nginx to return 502 to the user. The root cause is in kern.log or dmesg, not in the Nginx error log.
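A minimal sketch of confirming that chain from the logs before restarting anything (exact MySQL log message strings vary by version):

```shell
# 1. Did the kernel kill mysqld? (root cause)
dmesg -T | grep -i "killed process" | grep -i mysql

# 2. Does MySQL's own log confirm an unclean shutdown or restart?
grep -iE "shutdown|starting|crash" /var/log/mysql/error.log | tail -5

# 3. Restart the database and verify it stays up
sudo systemctl restart mysql
systemctl is-active mysql
```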

Troubleshooting Workflow #2: "Why Is My Server Slow?"

Performance problems require a broader investigation. For in-depth performance optimization, pair this workflow with our VPS performance optimization guide and monitoring setup guide.

Step 1: Establish Baseline — Is It the System or the Application?

# Check current CPU and memory usage
top -bn1 | head -20

# Check disk I/O
iostat -x 1 5

# Check if swap is being heavily used (sign of memory pressure)
free -h
vmstat 1 5

Step 2: Check for OOM Events

# OOM kills are among the most common causes of mysterious slowdowns
dmesg -T | grep -i "oom" | tail -10
journalctl -k --since "1 hour ago" | grep -i "oom"

If you find OOM events, you need more RAM or your application has a memory leak. If log analysis reveals performance issues caused by resource contention, dedicated resources eliminate the infrastructure variable.
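To identify the memory hog, sort running processes by resident memory (the PID in the watch example is a placeholder):

```shell
# Top 10 processes by memory usage
ps aux --sort=-%mem | head -11

# Watch one suspect process's resident set size over time (1234 is a hypothetical PID)
watch -n 5 'ps -o pid,rss,vsz,comm -p 1234'
```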

Step 3: Check MySQL Slow Query Log

# If slow query log is enabled
sudo mysqldumpslow -t 10 /var/log/mysql/mysql-slow.log

# Check MySQL process list for long-running queries
mysql -e "SHOW PROCESSLIST;" | grep -v "Sleep"

Step 4: Check Nginx Access Patterns

# Are you getting hit with unusually high traffic?
awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-3 | uniq -c | tail -20

# Is a single IP hammering you? (possible DDoS or aggressive crawler)
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10

# Are there many slow requests? (check response times if using custom log format)
grep " 504 " /var/log/nginx/access.log | wc -l

Step 5: Check PHP-FPM Pool Status

# Look for "server reached max_children" warnings
grep "max_children" /var/log/php8.3-fpm.log

# This means PHP-FPM ran out of worker processes
# Solution: increase pm.max_children in pool config or optimize your PHP code
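Before raising pm.max_children, measure how much memory an average worker uses so the larger pool still fits in RAM (a rough sketch; the process name php-fpm8.3 matches Ubuntu's PHP 8.3 packages):

```shell
# Average resident memory per PHP-FPM worker, in MB
ps --no-headers -o rss -C php-fpm8.3 | awk '{sum+=$1; n++} END {if (n) print sum/n/1024 " MB avg per worker"}'

# Rough sizing rule: pm.max_children ~= (RAM you can spare for PHP) / (avg worker MB)
# Set the result in /etc/php/8.3/fpm/pool.d/www.conf, then restart php8.3-fpm
```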

Step 6: Check Disk Space and Inode Usage

# Running out of disk space causes cascading failures
df -h
df -i  # Check inode usage (can be full even with disk space available)

# Find what's using the most space in /var/log
du -sh /var/log/* | sort -rh | head -10

Troubleshooting Workflow #3: "Did Someone Break In?"

Security investigation requires reading logs with a forensic mindset. Combine this with our security hardening guide for prevention strategies.

Step 1: Check Failed and Successful SSH Logins

# Failed password attempts (brute force attempts are normal; look for volume)
grep "Failed password" /var/log/auth.log | tail -30

# Successful logins (look for unfamiliar IPs or usernames)
grep "Accepted" /var/log/auth.log | tail -30

# Accepted logins from root (should not happen if you've hardened SSH)
grep "Accepted" /var/log/auth.log | grep "root"

# Count failed attempts by IP (top offenders)
grep "Failed password" /var/log/auth.log | grep -oP '\d+\.\d+\.\d+\.\d+' | sort | uniq -c | sort -rn | head -20

Step 2: Check for Unauthorized sudo Usage

# All sudo commands executed
grep "sudo:" /var/log/auth.log | grep "COMMAND" | tail -30

# Failed sudo attempts (someone trying to elevate without permission)
grep "sudo:" /var/log/auth.log | grep "NOT in sudoers"

# New users created
grep "useradd\|adduser" /var/log/auth.log

Step 3: Check for Unauthorized Package Changes

# Were any packages installed that you didn't install?
grep "install " /var/log/dpkg.log | tail -20

# Check apt history for recent installations
cat /var/log/apt/history.log | grep -A 2 "Start-Date" | tail -30

Step 4: Check for Suspicious Cron Jobs

# Check system crontab
cat /etc/crontab

# Check all user crontabs
for user in $(cut -f1 -d: /etc/passwd); do
    crontab -l -u "$user" 2>/dev/null | grep -v "^#" | grep -v "^$" && echo "  ^ from user: $user"
done

# Check cron log entries (jobs that have run)
grep "CRON" /var/log/syslog | tail -20

For more on managing cron jobs, see our cron jobs and task scheduling guide.

Step 5: Check for Unusual Network Connections

# Current listening ports (any unexpected services?)
sudo ss -tlnp

# Current established connections (any to suspicious destinations?)
sudo ss -tnp | grep ESTABLISHED

# Check UFW log for anomalies
grep "BLOCK" /var/log/ufw.log | tail -30

Step 6: Check for Modified System Files

# Check recently modified files in critical directories
find /etc -mtime -7 -type f | head -30
find /usr/local/bin -mtime -7 -type f
find /tmp -type f -executable
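Timestamps can be forged, so it also helps to verify package-owned files against their recorded checksums. The debsums tool (not installed by default) does this:

```shell
# Install debsums (verifies installed files against package MD5 checksums)
sudo apt install debsums

# Report only files whose contents have changed since installation
sudo debsums -c

# Full check including config files, showing only mismatches
sudo debsums -a 2>/dev/null | grep -v "OK$"
```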

Log Rotation: Preventing Logs from Filling Your Disk

Without log rotation, a busy Nginx server can generate gigabytes of access logs in weeks. Ubuntu includes logrotate to automatically compress, rotate, and delete old logs. For a broader backup strategy that includes log archival, see our automatic backups guide.

How logrotate Works

The main configuration file is /etc/logrotate.conf, and per-application configs live in /etc/logrotate.d/:

# List all logrotate configurations
ls /etc/logrotate.d/

# Common files you'll see:
# apt, dpkg, nginx, mysql-server, php8.3-fpm, rsyslog, ufw

Nginx Log Rotation

The default Nginx logrotate config at /etc/logrotate.d/nginx:

/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then
            run-parts /etc/logrotate.d/httpd-prerotate
        fi
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}

Key directives explained:

daily: rotate once per day
rotate 14: keep 14 rotated files and delete anything older
compress / delaycompress: gzip rotated logs, but leave the most recent rotation uncompressed in case a process still has the file open
missingok / notifempty: don't error if the log file is missing, and skip rotation when it's empty
create 0640 www-data adm: recreate the log file with these permissions and ownership after rotation
sharedscripts / postrotate: run the postrotate script once for all matched files; the script signals Nginx to reopen its log files

Custom Log Rotation for Application Logs

If you have application logs that aren't managed by logrotate (e.g., PM2 logs, custom app logs), create a config:

# Create /etc/logrotate.d/myapp
sudo nano /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 0644 www-data www-data
    postrotate
        systemctl reload myapp > /dev/null 2>&1 || true
    endscript
}
# Test logrotate configuration (dry run)
sudo logrotate -d /etc/logrotate.d/myapp

# Force rotation now (for testing)
sudo logrotate -f /etc/logrotate.d/myapp

journald Size Limits

The systemd journal also grows without limits by default. Configure size limits in /etc/systemd/journald.conf:

# Edit journald configuration
sudo nano /etc/systemd/journald.conf

# Add or modify these lines (journald.conf doesn't support trailing
# comments after a value, so each explanation goes on its own line):
[Journal]
# Max total disk usage for the journal
SystemMaxUse=500M
# Max size of individual journal files
SystemMaxFileSize=50M
# Delete entries older than 30 days
MaxRetentionSec=30day
# Restart journald to apply
sudo systemctl restart systemd-journald

# Manually clean up old journal entries
sudo journalctl --vacuum-size=500M
sudo journalctl --vacuum-time=30d

Log Management: Disk Usage, Retention, and Centralized Logging

On a VPS with limited storage, log management is not optional. Here's how to keep logs under control. For a comprehensive approach to log management as part of your overall observability strategy, see our log management guide.

Monitor Log Disk Usage

# Total log directory size
du -sh /var/log/

# Breakdown by subdirectory
du -sh /var/log/* | sort -rh | head -15

# Journal disk usage
journalctl --disk-usage

# Find the biggest individual log files
find /var/log -type f -size +100M -exec ls -lh {} \;

Automated Log Cleanup Script

Create a script that runs weekly to check log disk usage and alert you:

#!/bin/bash
# /usr/local/bin/log-check.sh

LOG_DIR="/var/log"
MAX_SIZE_MB=1000  # Alert if /var/log exceeds 1GB
CURRENT_SIZE=$(du -sm "$LOG_DIR" | awk '{print $1}')

if [ "$CURRENT_SIZE" -gt "$MAX_SIZE_MB" ]; then
    echo "WARNING: /var/log is ${CURRENT_SIZE}MB (threshold: ${MAX_SIZE_MB}MB)"
    echo ""
    echo "Top consumers:"
    du -sh /var/log/* | sort -rh | head -10
    echo ""
    echo "Journal usage:"
    journalctl --disk-usage
fi
# Make executable and add to cron
sudo chmod +x /usr/local/bin/log-check.sh
echo "0 6 * * 1 root /usr/local/bin/log-check.sh | mail -s 'Log Size Report' admin@example.com" | sudo tee -a /etc/crontab

Log Retention Policy

A practical retention policy for a VPS:

Log Type | Retention | Reasoning
Nginx access logs | 14-30 days | Analytics, recent debugging
Nginx error logs | 30 days | Pattern detection
auth.log | 90 days | Security auditing
MySQL slow query | 7-14 days | Performance tuning cycles
Application logs | 7-14 days | Debugging current issues
systemd journal | 30 days / 500MB | Balance of history and disk space
syslog | 30 days | General troubleshooting

Centralized Logging for Multiple Servers

If you manage multiple VPS instances, sending logs to a centralized location saves time and provides cross-server correlation. Common approaches:

rsyslog forwarding — Built into Ubuntu. Forward syslog to a central log server:

# On each VPS, add to /etc/rsyslog.d/50-remote.conf, then restart rsyslog.
# Use @@ for TCP forwarding or a single @ for UDP:
*.* @@logserver.example.com:514
# or:
# *.* @logserver.example.com:514
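On the receiving side, rsyslog must be told to listen. A sketch for the central server using rsyslog's modern syntax (port 514 as above; open it in your firewall and restart rsyslog on both ends):

```
# On the log server, in /etc/rsyslog.conf or a drop-in under /etc/rsyslog.d/:
module(load="imtcp")
input(type="imtcp" port="514")

# For UDP instead:
# module(load="imudp")
# input(type="imudp" port="514")
```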

Promtail + Loki — Lightweight log aggregation. Promtail ships logs from each server to a central Loki instance, which you query through Grafana. This integrates well with the Prometheus monitoring stack from our monitoring guide.

# Install Promtail on each VPS
# /etc/promtail/config.yml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki-server:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: vps-hostname
          __path__: /var/log/*.log

  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          host: vps-hostname
          __path__: /var/log/nginx/*.log

Quick Reference: Essential Log Commands

Keep this cheat sheet handy. These are the commands you'll reach for most often when something goes wrong:

# === FIRST RESPONDERS ===
journalctl -u SERVICE --since "10 min ago"     # What happened recently for this service?
tail -f /var/log/nginx/error.log                # Watch errors in real time
dmesg -T | tail -30                             # Recent kernel messages
df -h                                           # Is the disk full?
free -h                                         # Is memory exhausted?

# === DEEP INVESTIGATION ===
journalctl -p err --since "1 hour ago"          # All errors in the last hour
grep "Failed password" /var/log/auth.log        # SSH brute force attempts
grep " 5[0-9][0-9] " /var/log/nginx/access.log # All 5xx errors
dmesg -T | grep -i oom                          # Out-of-memory kills

# === DISK MANAGEMENT ===
du -sh /var/log/* | sort -rh | head -10         # What's consuming log space?
journalctl --disk-usage                         # Journal disk usage
sudo journalctl --vacuum-size=500M              # Reclaim journal space
sudo logrotate -f /etc/logrotate.conf           # Force log rotation

Prefer Someone Else Reads the Logs at 3 AM?

Log analysis is a critical skill for self-managed servers, but it's also the kind of work that nobody wants to do at 3 AM when their monitoring alerts go off. If you'd rather focus on your application while someone else handles infrastructure troubleshooting, MassiveGRID's Managed Dedicated Cloud Servers include 24/7 monitoring with expert staff who read the logs, diagnose issues, and resolve them — so you don't have to.

For self-managed servers, the logs are always there. Your job is knowing where to look. Bookmark this guide, practice the troubleshooting workflows on a test server, and the next time something breaks, you'll find the answer in minutes instead of hours.