Automation is the backbone of server administration. Every recurring task — backups, log rotation, database maintenance, certificate renewal, monitoring checks — should run on a schedule without human intervention. On Ubuntu, you have two tools for this: the traditional cron daemon and the newer systemd timers.

This guide covers both systems in depth: cron syntax from scratch, user and system crontabs, environment gotchas, real-world automation examples, error handling with mail notifications, and systemd timers as the modern alternative. By the end, you'll have a robust task scheduling setup that keeps your VPS running smoothly on autopilot.

Prerequisites

Before starting, you need:

MassiveGRID Ubuntu VPS — Ubuntu 24.04 LTS pre-installed, Proxmox HA cluster with automatic failover, Ceph 3x replicated NVMe storage, independent CPU/RAM/storage scaling, 12 Tbps DDoS protection, 4 global datacenter locations, 100% uptime SLA, and 24/7 human support rated 9.5/10. Deploy a Cloud VPS from $1.99/mo.

Understanding Cron

Cron is a time-based job scheduler that has been part of Unix-like systems for decades. The cron daemon (cron or crond) runs in the background and checks every minute whether any scheduled jobs need to execute.

Verify that cron is running on your Ubuntu 24.04 server:

systemctl status cron

You should see active (running). If cron is not running, start and enable it:

sudo systemctl start cron
sudo systemctl enable cron

Cron Syntax Explained

A cron job entry consists of five time/date fields followed by the command to execute:

┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 7, where 0 and 7 are Sunday)
│ │ │ │ │
* * * * * command-to-execute

Field Values

Field Allowed Values Special Characters
Minute 0-59 * , - /
Hour 0-23 * , - /
Day of Month 1-31 * , - /
Month 1-12 or JAN-DEC * , - /
Day of Week 0-7 or SUN-SAT * , - /

Special Characters

*  — matches any value ("every")
,  — separates a list of values (e.g., 6,18)
-  — defines a range (e.g., 1-5 for Monday through Friday)
/  — specifies step values (e.g., */15 for every 15 minutes)

Practical Examples

# Every minute
* * * * * /path/to/script.sh

# Every 5 minutes
*/5 * * * * /path/to/script.sh

# Every hour at minute 0
0 * * * * /path/to/script.sh

# Every day at 3:30 AM
30 3 * * * /path/to/script.sh

# Every Monday at 6:00 AM
0 6 * * 1 /path/to/script.sh

# Every weekday (Mon-Fri) at 8:00 AM
0 8 * * 1-5 /path/to/script.sh

# First day of every month at midnight
0 0 1 * * /path/to/script.sh

# Every 15 minutes during business hours
*/15 9-17 * * 1-5 /path/to/script.sh

# Every 6 hours
0 */6 * * * /path/to/script.sh

# Twice a day at 6 AM and 6 PM
0 6,18 * * * /path/to/script.sh

# Every Sunday at 2:00 AM
0 2 * * 0 /path/to/script.sh

Shortcut Strings

Cron also supports shortcut strings for common schedules:

Shortcut Equivalent Meaning
@reboot — Run once at system startup
@yearly 0 0 1 1 * January 1st at midnight
@monthly 0 0 1 * * First day of month at midnight
@weekly 0 0 * * 0 Sunday at midnight
@daily 0 0 * * * Every day at midnight
@hourly 0 * * * * Every hour at minute 0

# Run a backup script at system boot
@reboot /usr/local/bin/startup-backup.sh

# Run a cleanup script every day at midnight
@daily /usr/local/bin/cleanup.sh

Creating and Managing User Crontabs

Every user on the system can have their own crontab. Jobs in a user crontab run with that user's permissions.

Editing Your Crontab

crontab -e

The first time you run this, you'll be asked to choose an editor. Select nano (option 1) if you're not sure. The crontab file opens with comments explaining the syntax. Add your entries at the bottom.

Listing Current Crontab Entries

crontab -l

Removing Your Crontab

# Remove all entries immediately — no confirmation prompt (use carefully)
crontab -r

# Remove with a confirmation prompt first
crontab -ri

Editing Another User's Crontab

As root, you can manage any user's crontab:

# View another user's crontab
sudo crontab -u www-data -l

# Edit another user's crontab
sudo crontab -u www-data -e

System-Wide Crontabs

System-wide cron configuration differs from user crontabs in two important ways: the files live in specific directories, and each entry includes a username field.

The /etc/crontab File

The main system crontab at /etc/crontab has an extra field for the username:

# /etc/crontab
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m  h  dom mon dow user    command
17 *  * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6  * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6  * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6  1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

Drop-in Directories

Ubuntu uses drop-in directories for system cron jobs. Place executable scripts in these directories:

Directory Frequency
/etc/cron.hourly/ Every hour
/etc/cron.daily/ Every day
/etc/cron.weekly/ Every week
/etc/cron.monthly/ Every month

Scripts in these directories must be executable and should not have a file extension (no .sh). The run-parts command that processes these directories skips files with extensions by default.

# Create a daily cleanup script
sudo nano /etc/cron.daily/cleanup-tmp
#!/bin/bash
# Remove files older than 7 days from /tmp
find /tmp -type f -mtime +7 -delete 2>/dev/null
exit 0
sudo chmod +x /etc/cron.daily/cleanup-tmp
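You can check which scripts run-parts will actually pick up — and confirm that files with extensions are skipped — using its --test flag:

```shell
# List the scripts run-parts would execute, without running them.
# Files with a dot in the name (e.g., cleanup.sh) are silently omitted.
run-parts --test /etc/cron.daily
```

If a script you just added is missing from the output, check its name and that it is executable.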

The /etc/cron.d/ Directory

For more granular control, place crontab-format files in /etc/cron.d/. These use the same syntax as /etc/crontab (including the username field):

sudo nano /etc/cron.d/database-maintenance
# Database vacuum every Sunday at 3 AM
0 3 * * 0 postgres /usr/local/bin/db-vacuum.sh

# Database stats update every 6 hours
0 */6 * * * postgres /usr/local/bin/db-analyze.sh

Files in /etc/cron.d/ can have any name and should have mode 644.

Cron Environment Gotchas

One of the most common sources of cron job failures is the limited environment. Cron jobs do not run in a full shell session — they have a minimal PATH and no access to environment variables from your .bashrc or .profile.

The PATH Problem

By default, cron's PATH is typically /usr/bin:/bin. Commands in /usr/local/bin, /snap/bin, or custom locations won't be found. Always use full paths in cron jobs:

# BAD — may fail because 'certbot' is not in cron's PATH
0 3 * * * certbot renew

# GOOD — use the full path
0 3 * * * /usr/bin/certbot renew

# GOOD — or set PATH at the top of your crontab
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
0 3 * * * certbot renew

Find the full path of any command with which:

which certbot
# /snap/bin/certbot

which pg_dump
# /usr/bin/pg_dump

Environment Variables

You can set environment variables at the top of a crontab. They apply to all subsequent entries:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=admin@yourdomain.com
HOME=/home/deploy

# Your cron jobs below
0 3 * * * /usr/local/bin/backup.sh

If your script depends on environment variables from a file, source them explicitly:

0 3 * * * . /home/deploy/.env && /usr/local/bin/backup.sh
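When a job behaves differently under cron than in your terminal, a classic diagnostic is a temporary entry that dumps the environment cron actually provides, so you can diff it against your login shell's `env` output:

```
# Temporary debugging entry — remove it after comparing with `env` in your shell
* * * * * env > /tmp/cron-env.txt
```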

Common Automation Examples

Here are practical cron job recipes for common Ubuntu VPS administration tasks.

Automated Database Backups

Create a script that dumps your PostgreSQL database daily:

sudo nano /usr/local/bin/backup-database.sh
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/var/backups/postgresql"
DATE=$(date +%Y-%m-%d_%H%M%S)
DB_NAME="myapp"
RETENTION_DAYS=14

mkdir -p "$BACKUP_DIR"

# Dump the database
pg_dump -U postgres "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"

# Remove backups older than retention period
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup completed: ${DB_NAME}_${DATE}.sql.gz"
sudo chmod +x /usr/local/bin/backup-database.sh

Schedule it to run daily at 2:00 AM:

0 2 * * * /usr/local/bin/backup-database.sh >> /var/log/db-backup.log 2>&1

For MySQL/MariaDB, replace pg_dump with mysqldump:

mysqldump -u root --single-transaction "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"
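To keep the MySQL password off the command line (where it would be visible in `ps` output and shell history), the standard approach is a credentials file in the backup user's home directory. The user name below is illustrative:

```
# ~/.my.cnf — set mode 600 so only the owner can read it
[client]
user=backup
password=your-password-here
```

With this in place, `mysqldump` picks up the credentials automatically and the cron entry needs no password at all.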

For a comprehensive backup strategy including offsite storage and encryption, see our Ubuntu VPS automatic backups guide.

Log Rotation and Cleanup

While Ubuntu's logrotate handles most system logs, application logs often need custom cleanup:

sudo nano /usr/local/bin/cleanup-logs.sh
#!/bin/bash
set -euo pipefail

# Compress application logs older than 1 day
find /var/www/myapp/logs -name "*.log" -mtime +1 -exec gzip {} \;

# Remove compressed logs older than 30 days
find /var/www/myapp/logs -name "*.log.gz" -mtime +30 -delete

# Truncate large active log files (keep last 10000 lines)
for logfile in /var/www/myapp/logs/*.log; do
    if [ -f "$logfile" ] && [ $(wc -l < "$logfile") -gt 50000 ]; then
        tail -n 10000 "$logfile" > "$logfile.tmp"
        mv "$logfile.tmp" "$logfile"
    fi
done
sudo chmod +x /usr/local/bin/cleanup-logs.sh
# Run daily at 4:00 AM
0 4 * * * /usr/local/bin/cleanup-logs.sh >> /var/log/cleanup.log 2>&1

SSL Certificate Renewal Check

Certbot handles automatic renewal, but it's good practice to verify. See our Let's Encrypt SSL guide for the full setup:

# Attempt renewal twice daily (mirrors Certbot's own default schedule)
0 */12 * * * /snap/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"

System Updates Check

Check for available updates and optionally apply security patches:

sudo nano /usr/local/bin/check-updates.sh
#!/bin/bash
set -euo pipefail

LOGFILE="/var/log/update-check.log"

echo "=== Update check: $(date) ===" >> "$LOGFILE"

# Update package lists
apt-get update -qq >> "$LOGFILE" 2>&1

# List available upgrades
UPDATES=$(apt-get -s upgrade 2>/dev/null | grep "^Inst" | wc -l)
SECURITY=$(apt-get -s upgrade 2>/dev/null | grep "^Inst" | grep -i security | wc -l)

echo "Available updates: $UPDATES (security: $SECURITY)" >> "$LOGFILE"

# Apply pending upgrades when security updates are available (optional).
# Note: apt-get upgrade applies ALL pending upgrades, not only security ones;
# for security-only patching, Ubuntu's unattended-upgrades package is the standard tool.
if [ "$SECURITY" -gt 0 ]; then
    DEBIAN_FRONTEND=noninteractive apt-get upgrade -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" >> "$LOGFILE" 2>&1
    echo "Upgrades applied." >> "$LOGFILE"
fi
sudo chmod +x /usr/local/bin/check-updates.sh
# Run daily at 5:00 AM as root — note the user field: this format belongs
# in /etc/cron.d/ or /etc/crontab, not in a user crontab
0 5 * * * root /usr/local/bin/check-updates.sh

Disk Space Monitoring

sudo nano /usr/local/bin/check-disk.sh
#!/bin/bash
THRESHOLD=85
EMAIL="admin@yourdomain.com"

USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
    HOSTNAME=$(hostname)
    df -h | mail -s "Disk space warning on $HOSTNAME: ${USAGE}% used" "$EMAIL"
fi
sudo chmod +x /usr/local/bin/check-disk.sh
# Check every hour
0 * * * * /usr/local/bin/check-disk.sh

For a complete monitoring setup with dashboards and alerting, see our Ubuntu VPS monitoring guide.

Docker Container Health Checks

If you're running Docker containers (see our Docker installation guide), schedule health checks and cleanup:

# Remove unused Docker images and volumes every Sunday at 3 AM
# (careful: --volumes also deletes unused named volumes and their data)
0 3 * * 0 /usr/bin/docker system prune -af --volumes >> /var/log/docker-prune.log 2>&1

# Check if critical containers are running every 5 minutes
*/5 * * * * /usr/local/bin/check-containers.sh
sudo nano /usr/local/bin/check-containers.sh
#!/bin/bash
CONTAINERS="nginx postgres redis myapp"
EMAIL="admin@yourdomain.com"

for container in $CONTAINERS; do
    if ! docker ps --format '{{.Names}}' | grep -q "^${container}$"; then
        echo "Container '$container' is not running on $(hostname)!" \
            | mail -s "Docker Alert: $container down" "$EMAIL"
        # Attempt restart
        docker start "$container" 2>/dev/null
    fi
done
sudo chmod +x /usr/local/bin/check-containers.sh

Logging and Monitoring Cron Jobs

Redirecting Output

By default, cron emails any output (stdout and stderr) to the crontab owner. To redirect output to a log file instead:

# Redirect both stdout and stderr to a log file
0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1

# Redirect stdout to log, stderr to separate error log
0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>> /var/log/backup-error.log

# Discard all output (not recommended — hides errors)
0 3 * * * /usr/local/bin/backup.sh > /dev/null 2>&1

Best practice: Always capture output somewhere. Discarding all output with > /dev/null 2>&1 means you'll never know when a job fails silently.
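If you would rather keep everything in syslog than manage separate log files, piping job output through `logger` is a lightweight alternative:

```
# Tag the output so it is easy to grep in /var/log/syslog
0 3 * * * /usr/local/bin/backup.sh 2>&1 | logger -t backup
```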

Cron Logging in Syslog

Cron logs to syslog when jobs start. View cron activity:

grep CRON /var/log/syslog | tail -20

This shows when each cron job was executed, but not whether it succeeded. For success/failure tracking, use the logging patterns above.

Timestamped Log Entries

Add timestamps to your log entries using a wrapper:

0 3 * * * /usr/local/bin/backup.sh 2>&1 | while IFS= read -r line; do echo "$(date '+%Y-%m-%d %H:%M:%S') $line"; done >> /var/log/backup.log

Or add timestamps inside your scripts:

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"
}

log "Starting backup..."
# ... backup commands ...
log "Backup completed successfully."
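If the moreutils package is installed (sudo apt install moreutils), its `ts` utility prepends timestamps with far less ceremony than the while-read wrapper:

```
# Requires the moreutils package for the `ts` command
0 3 * * * /usr/local/bin/backup.sh 2>&1 | ts '%Y-%m-%d %H:%M:%S' >> /var/log/backup.log
```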

Error Handling and Email Notifications

MAILTO Variable

Cron can email job output to any address. Set MAILTO at the top of your crontab:

MAILTO=admin@yourdomain.com

# All output from these jobs will be emailed
0 3 * * * /usr/local/bin/backup.sh
0 4 * * * /usr/local/bin/cleanup.sh

To disable email for all jobs, set MAILTO to an empty string:

MAILTO=""

To disable email for a specific job, redirect its output:

MAILTO=admin@yourdomain.com
0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
0 4 * * * /usr/local/bin/noisy-script.sh > /dev/null 2>&1
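MAILTO can also be reassigned partway through a crontab; each setting applies to the entries below it, which lets you route different jobs to different addresses:

```
MAILTO=admin@yourdomain.com
0 3 * * * /usr/local/bin/backup.sh

MAILTO=oncall@yourdomain.com
*/5 * * * * /usr/local/bin/check-containers.sh
```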

Setting Up Mail Delivery

For MAILTO to work, you need a mail transfer agent (MTA) installed. Install a lightweight MTA:

sudo apt install -y msmtp msmtp-mta

Configure msmtp to use an external SMTP server:

sudo nano /etc/msmtprc
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /var/log/msmtp.log

account        default
host           smtp.gmail.com
port           587
from           your-server@yourdomain.com
user           your-email@gmail.com
password       your-app-password
sudo chmod 600 /etc/msmtprc

Test mail delivery:

echo "Test email from cron setup" | mail -s "Cron Test" admin@yourdomain.com

Error Handling in Scripts

Write robust cron scripts that handle errors properly:

#!/bin/bash
set -euo pipefail

# Configuration
SCRIPT_NAME="database-backup"
EMAIL="admin@yourdomain.com"
LOGFILE="/var/log/${SCRIPT_NAME}.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOGFILE"
}

error_handler() {
    local line_no=$1
    log "ERROR: Script failed at line $line_no"
    echo "Cron job '$SCRIPT_NAME' failed at line $line_no on $(hostname). Check $LOGFILE for details." \
        | mail -s "CRON FAILED: $SCRIPT_NAME" "$EMAIL"
    exit 1
}

trap 'error_handler $LINENO' ERR

log "Starting $SCRIPT_NAME..."

# Your commands here
pg_dump -U postgres myapp | gzip > /var/backups/myapp_$(date +%Y%m%d).sql.gz

log "$SCRIPT_NAME completed successfully."

Systemd Timers: The Modern Alternative

Systemd timers are the modern replacement for cron on Ubuntu. They offer several advantages over cron: output is captured automatically in the journal, runs missed while the server was off can be caught up, timers can depend on other systemd units, jobs can be given CPU/memory/I/O limits via cgroups, and start times can be randomized without extra scripting.

A systemd timer requires two unit files: a .service file defining what to run and a .timer file defining when to run it.

Creating a Systemd Timer

Let's create a timer that runs a backup script daily at 2:00 AM.

First, create the service unit:

sudo nano /etc/systemd/system/backup-database.service
[Unit]
Description=Daily Database Backup
After=postgresql.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-database.sh
User=postgres
StandardOutput=journal
StandardError=journal

Now create the timer unit:

sudo nano /etc/systemd/system/backup-database.timer
[Unit]
Description=Run Database Backup Daily

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target

Enable and start the timer:

sudo systemctl daemon-reload
sudo systemctl enable backup-database.timer
sudo systemctl start backup-database.timer

Verify the timer is active:

systemctl status backup-database.timer

Timer Options Explained

Option Description
OnCalendar Calendar expression (like cron but more readable)
Persistent=true Run immediately if a scheduled run was missed (e.g., server was off)
RandomizedDelaySec Add random delay up to this value (prevents thundering herd)
OnBootSec Run X seconds after boot
OnUnitActiveSec Run X time after the service last finished
AccuracySec Timer accuracy (default 1 minute, lower for more precise scheduling)
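The monotonic options can replace OnCalendar entirely. For example, a hypothetical health-check timer that fires five minutes after boot and then 15 minutes after each run finishes — useful when you care about the interval between runs rather than wall-clock times:

```
[Timer]
OnBootSec=5min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target
```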

Calendar Expression Syntax

Systemd calendar expressions are more readable than cron syntax:

# Every day at 2:00 AM
OnCalendar=*-*-* 02:00:00

# Every Monday at 6:00 AM
OnCalendar=Mon *-*-* 06:00:00

# Every weekday at 8:00 AM
OnCalendar=Mon..Fri *-*-* 08:00:00

# First day of every month at midnight
OnCalendar=*-*-01 00:00:00

# Every 15 minutes
OnCalendar=*:0/15

# Every 6 hours
OnCalendar=*-*-* 00/6:00:00

# Every Sunday at 3:00 AM
OnCalendar=Sun *-*-* 03:00:00

# Specific dates
OnCalendar=2026-03-15 12:00:00

# Shorthand
OnCalendar=daily
OnCalendar=weekly
OnCalendar=monthly
OnCalendar=hourly

You can test calendar expressions without creating a timer:

systemd-analyze calendar "Mon..Fri *-*-* 08:00:00"

This shows the normalized form and the next trigger time.

Listing All Timers

systemctl list-timers --all

This displays all active and waiting timers, their next scheduled run, and the last time they triggered.

Viewing Timer Logs

# View logs for the service triggered by the timer
journalctl -u backup-database.service

# View only the last run
journalctl -u backup-database.service --since today

# Follow logs in real time
journalctl -u backup-database.service -f

Manually Triggering a Timer's Service

To run the service immediately without waiting for the timer:

sudo systemctl start backup-database.service

Cron vs Systemd Timers: When to Use Which

Feature Cron Systemd Timers
Ease of setup Simple one-liner Requires two unit files
Logging Manual (redirect output) Automatic (journalctl)
Missed job handling Jobs are skipped Persistent=true catches up
Dependencies None Full systemd dependency support
Resource limits None CPU, memory, I/O via cgroups
Random delay Must script manually RandomizedDelaySec
Portability Works on any Unix Systemd-only systems
Best for Quick one-off tasks Production services, complex dependencies

Recommendation: Use cron for simple, quick tasks (running a script daily, cleaning temp files). Use systemd timers for production-critical tasks that need logging, dependency management, and reliability (database backups, service health checks).

Security Considerations

Restricting Cron Access

Control which users can create cron jobs using /etc/cron.allow and /etc/cron.deny:

# Only allow specific users to use cron
sudo nano /etc/cron.allow
root
deploy
www-data

If /etc/cron.allow exists, only listed users can use cron. If it doesn't exist, /etc/cron.deny is checked — users listed there are blocked.

Script Permissions

Ensure cron scripts have proper ownership and permissions:

# Scripts run by root should be owned by root, not world-writable
sudo chown root:root /usr/local/bin/backup.sh
sudo chmod 750 /usr/local/bin/backup.sh

A world-writable script run by root's crontab is a privilege escalation vulnerability. Any user could modify the script to execute arbitrary commands as root.
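A quick audit for this class of problem — this sketch only checks a few common script locations, so extend the directory list for your own layout:

```shell
# List world-writable files in directories commonly referenced by cron.
# Any output here deserves immediate investigation.
find /usr/local/bin /etc/cron.d /etc/cron.daily /etc/cron.hourly \
    -type f -perm -o+w 2>/dev/null
```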

Audit Cron Jobs Regularly

Review all scheduled tasks periodically:

# List all user crontabs
for user in $(cut -f1 -d: /etc/passwd); do
    crontab_content=$(sudo crontab -u "$user" -l 2>/dev/null)
    if [ -n "$crontab_content" ]; then
        echo "=== Crontab for $user ==="
        echo "$crontab_content"
    fi
done

# List all system cron entries
ls -la /etc/cron.d/ /etc/cron.daily/ /etc/cron.hourly/

# List all systemd timers
systemctl list-timers --all

Troubleshooting Cron Jobs

Job Not Running

If a cron job doesn't seem to execute:

  1. Check cron is running: systemctl status cron
  2. Check syslog for execution: grep CRON /var/log/syslog | tail -20
  3. Verify the crontab entry: crontab -l
  4. Test the command manually: Run it directly in the terminal
  5. Check file permissions: The script must be executable
  6. Check PATH: Use full paths for all commands in the script
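Step 4 often hides the real problem: the script works in your terminal only because of your login environment. Reproduce cron's stripped-down environment instead (the script path below is a placeholder):

```shell
# Run a command the way cron does: near-empty environment, minimal PATH, /bin/sh
env -i HOME="$HOME" SHELL=/bin/sh PATH=/usr/bin:/bin \
    /bin/sh -c '/usr/local/bin/myscript.sh'
```

If the script fails here but succeeds in your normal shell, a missing PATH entry or environment variable is almost certainly the cause.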

Job Runs But Fails

If the cron entry appears in syslog but the task fails:

# Capture all output to debug
* * * * * /usr/local/bin/myscript.sh >> /tmp/cron-debug.log 2>&1

Check the debug log to see what errors occur. Common issues:

  1. Commands not found because of cron's minimal PATH (use absolute paths)
  2. Missing environment variables that your login shell normally sets
  3. Relative paths — cron starts jobs in the user's home directory, not your project directory
  4. Scripts that expect a terminal or prompt for input

Overlapping Jobs

If a cron job takes longer than its interval, multiple instances can overlap. Prevent this with flock:

0 3 * * * /usr/bin/flock -n /var/lock/backup.lock /usr/local/bin/backup-database.sh

The -n flag makes flock exit immediately if the lock is already held, preventing overlap. No wrapper script needed.
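You can see the non-blocking behavior in an interactive shell: while one process holds the lock, a second -n attempt bails out immediately instead of piling up:

```shell
# Hold the lock on file descriptor 9 for the life of this shell...
exec 9>/tmp/demo.lock
flock -n 9

# ...then a second non-blocking attempt on the same file fails at once
flock -n /tmp/demo.lock -c true || echo "lock is held, skipping"
```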

Real-World Crontab Example

Here's a complete production crontab for a typical Ubuntu VPS running a web application:

# Environment
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
MAILTO=admin@yourdomain.com

# Database backup — daily at 2:00 AM
0 2 * * * /usr/bin/flock -n /var/lock/db-backup.lock /usr/local/bin/backup-database.sh >> /var/log/db-backup.log 2>&1

# Application log cleanup — daily at 4:00 AM
0 4 * * * /usr/local/bin/cleanup-logs.sh >> /var/log/cleanup.log 2>&1

# SSL certificate renewal check — twice daily
0 */12 * * * /snap/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"

# Disk space check — every hour
0 * * * * /usr/local/bin/check-disk.sh

# System updates check — daily at 5:00 AM
0 5 * * * /usr/local/bin/check-updates.sh >> /var/log/update-check.log 2>&1

# Docker cleanup — every Sunday at 3:00 AM
0 3 * * 0 /usr/bin/docker system prune -af >> /var/log/docker-prune.log 2>&1

# Container health check — every 5 minutes
*/5 * * * * /usr/local/bin/check-containers.sh > /dev/null 2>&1

# Weekly full backup to offsite storage — Sunday 1:00 AM
0 1 * * 0 /usr/local/bin/offsite-backup.sh >> /var/log/offsite-backup.log 2>&1

Prefer Managed Automation?

If maintaining cron jobs, backup scripts, log rotation, update schedules, and monitoring tasks across your servers is consuming too much of your time, consider MassiveGRID's Managed Dedicated Cloud Servers. The managed service handles all routine server maintenance — automated backups, security patching, log management, uptime monitoring, and 24/7 incident response — so your team can focus on building and shipping features instead of writing cron scripts.

What's Next