Running out of disk space on a VPS is one of those problems that creeps up silently and then hits hard. One day your application throws write errors, your database crashes because it cannot create temporary tables, or deployments fail because there is no room for new files. The fix is rarely "buy more storage" — it is almost always "find and remove the gigabytes of junk you did not know existed." This guide walks through every major space consumer on a typical Ubuntu VPS, shows you exactly how to reclaim storage, and helps you decide when cleaning is not enough and scaling is the right move.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Understanding Your VPS Storage
Before cleaning anything, you need to know what you have and how it is organized. Two commands give you the full picture.
Check Overall Disk Usage with df
The df (disk free) command shows filesystem-level usage:
df -h
Typical output on a VPS:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 50G 38G 10G 79% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/vda15 105M 6.1M 99M 6% /boot/efi
The key line is the root filesystem (/). In this example, 38GB of 50GB is used — 79% full. You want to stay below 80% for comfortable operation, and below 90% to avoid performance degradation on some filesystems.
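That 80% rule of thumb is easy to turn into a quick check. A minimal sketch using GNU df's --output option (the 80 threshold is just a starting point — adjust it to taste):

```shell
#!/usr/bin/env bash
# Print the root filesystem's usage percentage and warn past a threshold.
# --output=pcent isolates the Use% column; tr strips the header and % sign.
THRESHOLD=80
pct=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$pct" -ge "$THRESHOLD" ]; then
  echo "Root filesystem is ${pct}% full — time to clean up"
else
  echo "Root filesystem is ${pct}% full — OK"
fi
```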
For a cleaner view that filters out pseudo-filesystems:
df -h --type=ext4 --type=xfs
Check Block Devices with lsblk
The lsblk command shows physical and virtual block devices:
lsblk -f
Output:
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
vda
├─vda1 ext4 1.0 a1b2c3d4-... 10G 79% /
├─vda14
└─vda15 vfat FAT16 E1F2-A3B4 99M 6% /boot/efi
This confirms your disk layout. On a MassiveGRID VPS, you will see a single virtual disk (vda) backed by Ceph NVMe storage with 3x replication — meaning your data is already protected at the storage layer.
Finding What Consumes Space
Quick Scan with du
The du (disk usage) command shows directory sizes. Start with a top-level scan:
sudo du -sh /* 2>/dev/null | sort -rh | head -20
Sample output:
15G /var
8.2G /home
6.1G /usr
4.3G /opt
2.1G /snap
1.2G /tmp
512M /root
...
Then drill down into the biggest directories:
sudo du -sh /var/* 2>/dev/null | sort -rh | head -10
8.4G /var/lib
3.2G /var/log
2.1G /var/cache
1.1G /var/tmp
sudo du -sh /var/lib/* 2>/dev/null | sort -rh | head -10
5.8G /var/lib/docker
1.4G /var/lib/mysql
512M /var/lib/apt
320M /var/lib/snapd
Now you know where the space is going. In this example, Docker is the primary consumer — a very common pattern.
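The drill-down pattern above can be wrapped in a small helper so you do not retype the pipeline at every level (the function name biggest is our own invention, not a standard tool):

```shell
#!/usr/bin/env bash
# biggest DIR [N] — show the N largest entries directly under DIR (default 10).
biggest() {
  local dir=${1:?usage: biggest DIR [N]} n=${2:-10}
  du -sh "$dir"/* 2>/dev/null | sort -rh | head -n "$n"
}

biggest /var 5      # then drill down: biggest /var/lib, biggest /var/lib/docker, ...
```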
Interactive Scanning with ncdu
ncdu is a curses-based disk usage analyzer that lets you navigate directories interactively. It is one of the first tools you should install on any VPS:
sudo apt install ncdu -y
Run a full scan:
sudo ncdu / --exclude /proc --exclude /sys --exclude /dev
This builds an index of the entire filesystem and then presents an interactive view where you can navigate with the arrow keys, press d to delete files, n to sort by name, and s to sort by size. It is dramatically faster than running du repeatedly.
For a specific directory:
sudo ncdu /var/lib/docker
Cleanup: Docker (Often the Biggest Offender)
Docker is frequently the single largest consumer of disk space on a VPS. Unused images, stopped containers, orphaned volumes, and build cache can easily consume 10-20GB. If you are running Docker on your VPS (see our Docker installation guide), this section alone might reclaim most of your space.
See What Docker Is Using
docker system df
Output:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 12 3 4.82GB 3.91GB (81%)
Containers 5 2 234.5MB 189.2MB (80%)
Local Volumes 8 2 2.14GB 1.87GB (87%)
Build Cache 45 0 1.23GB 1.23GB (100%)
For a detailed breakdown:
docker system df -v
Remove Unused Images
Dangling images (layers not tagged and not referenced by any container) accumulate fast:
# Remove dangling images only
docker image prune -f
# Remove ALL images not used by running containers
docker image prune -a -f
To remove images older than a specific age:
# Remove unused images older than 7 days
docker image prune -a -f --filter "until=168h"
Remove Stopped Containers
# List all containers including stopped ones
docker ps -a --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"
# Remove all stopped containers
docker container prune -f
Remove Unused Volumes
This is where hidden data lives. Database containers often create volumes that persist after the container is removed:
# List volumes
docker volume ls
# Remove unused volumes (NOT attached to any container)
docker volume prune -f
Warning: Volume pruning is destructive. If you removed a database container but its volume still contains data you need, pruning will delete that data permanently. Always check docker volume ls before pruning.
Clear Build Cache
# Remove all build cache
docker builder prune -a -f
# Remove build cache older than 24 hours
docker builder prune -f --filter "until=24h"
The Nuclear Option
If you want to reclaim everything Docker is not actively using:
docker system prune -a --volumes -f
This removes all stopped containers, all unused networks, all unused images (not just dangling), all unused volumes, and all build cache. On a VPS that has been running Docker for months, this can easily free 5-15GB.
Prevent Future Bloat
Configure Docker's log driver to limit container log sizes. Edit or create /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
Restart Docker to apply:
sudo systemctl restart docker
This limits each container's log to 3 files of 10MB each — a maximum of 30MB per container instead of unbounded growth.
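A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating the file before the restart. A sketch using Python's bundled JSON tool (any JSON validator works) on a scratch copy:

```shell
# Write the log-limit config to a temp file and validate it before copying
# it to /etc/docker/daemon.json — json.tool exits non-zero on a syntax error,
# so a typo is caught before Docker is restarted.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
# then: sudo cp "$cfg" /etc/docker/daemon.json && sudo systemctl restart docker
```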
Cleanup: System Logs
System logs are the second most common space consumer. For a deeper dive into log management, see our log management guide.
Journald Logs
Check how much space systemd journal logs are using:
journalctl --disk-usage
Vacuum by time or size:
# Keep only the last 7 days of logs
sudo journalctl --vacuum-time=7d
# Or limit to 500MB total
sudo journalctl --vacuum-size=500M
Make the limit permanent in /etc/systemd/journald.conf:
[Journal]
SystemMaxUse=500M
SystemKeepFree=1G
MaxRetentionSec=2week
Apply the configuration:
sudo systemctl restart systemd-journald
Old Log Files in /var/log
Check for large log files:
sudo find /var/log -type f \( -name "*.log" -o -name "*.gz" \) -exec du -sh {} + | sort -rh | head -20
Common space hogs:
# Compressed old logs (already rotated but not removed)
sudo find /var/log -name "*.gz" -mtime +30 -delete
# Old rotated logs with numeric suffix
sudo find /var/log -name "*.log.[0-9]*" -mtime +14 -delete
# Truncate (not delete) an active large log file
sudo truncate -s 0 /var/log/syslog
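Before running any find … -delete, do a dry run: swapping -delete for -print shows exactly what would be removed. Demonstrated here on a scratch directory rather than the real /var/log:

```shell
# Dry-run pattern: -print instead of -delete to review matches first.
tmp=$(mktemp -d)
touch -d '40 days ago' "$tmp/old.log.gz"   # simulated stale rotated log
touch "$tmp/new.log.gz"                     # simulated fresh rotated log
find "$tmp" -name "*.gz" -mtime +30 -print  # lists only old.log.gz
# Once the list looks right, re-run with -delete in place of -print.
```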
Never delete an active log file with rm. The process writing to it holds a file handle, and the space will not be freed until that process restarts. Use truncate -s 0 instead, which empties the file without removing it.
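The difference is easy to see on a scratch file: truncate empties the file in place, keeping the same inode, so a process that has the file open keeps writing to the same (now empty) file:

```shell
# truncate empties a file in place — same path, same inode, size zero.
tmp=$(mktemp -d)
head -c 1048576 /dev/zero > "$tmp/app.log"    # 1 MiB dummy log
inode_before=$(stat -c %i "$tmp/app.log")
truncate -s 0 "$tmp/app.log"
inode_after=$(stat -c %i "$tmp/app.log")
stat -c %s "$tmp/app.log"                      # prints 0 — emptied in place
[ "$inode_before" = "$inode_after" ] && echo "same inode — open handles still valid"
```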
Cleanup: Old Kernels and Packages
Ubuntu keeps old kernel versions after upgrades. Each kernel version occupies 200-400MB:
# See installed kernels
dpkg --list | grep linux-image
# See which kernel is currently running
uname -r
Remove old kernels and unused packages:
# Remove packages that were installed as dependencies but are no longer needed
sudo apt autoremove --purge -y
# Clean the apt cache (downloaded .deb files)
sudo apt clean
# Remove only obsolete packages from cache
sudo apt autoclean
Check how much space the apt cache is using:
sudo du -sh /var/cache/apt/archives/
On a VPS that has received several months of updates, apt clean can free 500MB to 2GB.
Cleanup: MySQL/MariaDB
If you are running MySQL or MariaDB (see our database tuning guide), several database-specific files can consume significant space.
Binary Logs
Binary logs are used for replication and point-in-time recovery. If you are not using either, they waste space:
# Check binary log usage
sudo mysql -e "SHOW BINARY LOGS;"
Purge old binary logs:
# Purge logs older than 3 days
sudo mysql -e "PURGE BINARY LOGS BEFORE DATE(NOW() - INTERVAL 3 DAY);"
Set automatic expiration in /etc/mysql/mysql.conf.d/mysqld.cnf:
[mysqld]
binlog_expire_logs_seconds = 259200 # 3 days
# Or disable binary logging entirely if not needed:
# skip-log-bin
Slow Query Log
# Check slow query log size
ls -lh /var/log/mysql/mysql-slow.log
# Rotate it
sudo mysqladmin flush-logs
The InnoDB Tablespace (ibdata1)
If you are using innodb_file_per_table=OFF (older default), all InnoDB data lives in a single ibdata1 file that never shrinks:
ls -lh /var/lib/mysql/ibdata1
Modern MySQL uses innodb_file_per_table=ON by default, storing each table in its own .ibd file. If your ibdata1 is large, the only way to shrink it involves dumping all databases, stopping MySQL, deleting the file, restarting, and reimporting — a process best done during a maintenance window.
Cleanup: Application Caches
npm Cache
# Check size
du -sh ~/.npm/
# Clean it
npm cache clean --force
pip Cache
# Check size
du -sh ~/.cache/pip/
# Clean it
pip cache purge
Composer Cache
# Check size
du -sh ~/.cache/composer/
# Clean it
composer clearcache
# Or remove the cache directory directly
rm -rf ~/.cache/composer/
apt Cache (already covered above)
sudo apt clean
Snap Cache
Snap retains old revisions of installed snaps. Each revision can be hundreds of megabytes:
# List snap revisions
snap list --all
# Remove disabled (old) snap revisions
sudo snap list --all | awk '/disabled/{print $1, $3}' | while read name rev; do
sudo snap remove "$name" --revision="$rev"
done
# Limit snap to keep only 2 revisions
sudo snap set system refresh.retain=2
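All of the cache checks above can be rolled into one loop that reports whichever caches actually exist on your system (the cache_report function is our own helper, not a standard tool):

```shell
#!/usr/bin/env bash
# Report the sizes of common per-user and system caches in one pass.
cache_report() {
  local d
  for d in "$@"; do
    [ -d "$d" ] && du -sh "$d"   # skip paths that do not exist
  done
  return 0
}

cache_report "$HOME/.npm" "$HOME/.cache/pip" "$HOME/.cache/composer" /var/cache/apt/archives
```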
Cleanup: Temporary Files and Old Backups
Temporary files and forgotten backups are silent storage killers:
# Check /tmp and /var/tmp
du -sh /tmp /var/tmp
# Find files in /tmp older than 7 days
sudo find /tmp -type f -atime +7 -delete
# Find large files anywhere on the system (over 100MB)
sudo find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -20
Look for old backup files that were forgotten:
# Common backup patterns
sudo find / -xdev -type f \( \
-name "*.sql.gz" -o \
-name "*.tar.gz" -o \
-name "*.bak" -o \
-name "*.backup" -o \
-name "*.old" \
\) -size +50M -exec ls -lh {} + 2>/dev/null
If you have automated backups (see our backup automation guide), make sure your retention policy is actually deleting old backups. A common mistake is setting up backups without rotation — daily 500MB database dumps fill a 50GB disk in about 100 days.
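If your backup job has no rotation, a small retention helper keeps the newest N dumps and removes the rest. A sketch (the /var/backups/myapp path and the retention count are placeholders; it assumes filenames without whitespace):

```shell
#!/usr/bin/env bash
# keep_newest DIR PATTERN N — delete all but the N newest matching files.
keep_newest() {
  local dir=$1 pattern=$2 keep=$3
  # ls -1t sorts newest first; tail -n +(N+1) selects everything past the first N.
  ls -1t "$dir"/$pattern 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

keep_newest /var/backups/myapp '*.sql.gz' 14   # hypothetical path, keep 14 dumps
```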
User Home Directories
Developers often forget about files accumulating in home directories — build artifacts, downloaded archives, test data, and IDE caches:
# Check all home directories
sudo du -sh /home/* /root 2>/dev/null | sort -rh
Common culprits in home directories:
# .local/share (application data, Trash, etc.)
du -sh ~/.local/share/* 2>/dev/null | sort -rh | head -5
# Core dumps (can be huge)
sudo find / -xdev -type f \( -name "core" -o -name "core.*" \) -exec ls -lh {} + 2>/dev/null
# .cache directories
du -sh ~/.cache/* 2>/dev/null | sort -rh | head -10
# Bash/Zsh history (usually small but check)
ls -lh ~/.bash_history ~/.zsh_history 2>/dev/null
Emergency: Disk Is Already 100% Full
If your disk is completely full and services are crashing, you need to free space immediately. Here is a priority-ordered emergency checklist that works even when the system is barely responsive:
Step 1: Clear the Fastest Targets
# Truncate the largest log files (instant space recovery)
sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/kern.log
sudo truncate -s 0 /var/log/auth.log
# Clear journal logs aggressively
sudo journalctl --vacuum-size=100M
# Clear apt cache
sudo apt clean
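To see how much these first steps actually recovered, snapshot the available space before and after (df --output is GNU coreutils; the privileged commands are guarded with || true so the snippet degrades gracefully where they fail):

```shell
# Measure space reclaimed by the emergency steps, in MiB.
before=$(df -k --output=avail / | tail -1 | tr -d ' ')
sudo journalctl --vacuum-size=100M >/dev/null 2>&1 || true
sudo apt clean 2>/dev/null || true
after=$(df -k --output=avail / | tail -1 | tr -d ' ')
echo "Freed $(( (after - before) / 1024 )) MiB"
```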
Step 2: Find and Remove the Largest Unnecessary Files
# Find the 20 largest files on the system
sudo find / -xdev -type f -size +50M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -20
Look for files you recognize as safe to delete: old backups, core dumps, temporary archives, and downloaded installers.
Step 3: Check for Deleted Files Still Held Open
A common trap: you delete a large file with rm, but a process still has the file handle open. The space is not freed until the process releases the file:
# Find deleted files still consuming space
sudo lsof +L1 2>/dev/null | awk 'NR>1 {print $7, $1, $NF}' | sort -rn | head -20
If you see a large deleted file held by a process (like a log file held by rsyslogd), restart that process to release the file handle:
# Restart rsyslog to release deleted log file handles
sudo systemctl restart rsyslog
# Or restart the specific process shown in lsof output
sudo systemctl restart nginx
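If lsof is not installed (common on minimal images), the same information is available directly from /proc: every process's open file descriptors are symlinks, and Linux marks deleted targets with a "(deleted)" suffix:

```shell
# Scan /proc for open file descriptors pointing at deleted files.
# Run as root to see all processes; unreadable entries are skipped.
for fd in /proc/[0-9]*/fd/*; do
  target=$(readlink "$fd" 2>/dev/null) || continue
  case $target in
    *'(deleted)'*) echo "$fd -> $target" ;;
  esac
done
```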
Step 4: Recover and Prevent
Once you have enough space to operate (below 90%), follow the structured cleanup sections above to do a thorough cleanup. Then set up the monitoring alerts described later in this guide so you never hit 100% again.
Common Space Usage Reference
Here is a reference table of typical space consumption for common services on a VPS. Use this to estimate whether your usage is normal or indicates a problem:
| Component | Normal Usage | Problem Indicator | Common Cause |
|---|---|---|---|
| Ubuntu 24.04 base | 2-4 GB | Over 8 GB | Old kernels, snap cache |
| /var/log | 100-500 MB | Over 2 GB | Missing log rotation, verbose logging |
| Docker (/var/lib/docker) | 2-5 GB | Over 15 GB | Unused images, build cache, container logs |
| MySQL/MariaDB data | Varies by data | ibdata1 over 5 GB (not matching table sizes) | Binary logs, InnoDB tablespace bloat |
| apt cache | 0-200 MB | Over 1 GB | Never ran apt clean |
| Snap | 200-500 MB per snap | Multiple GB with old revisions | Snap retaining old revisions (default: 3) |
| Node.js (node_modules) | 200-800 MB per project | Multiple copies across directories | Duplicate installations, old projects |
| Python virtual environments | 100-500 MB each | Forgotten venvs from old projects | Not cleaning up after project completion |
Setting Up Automatic Log Rotation
The logrotate utility is installed by default on Ubuntu and handles rotation for most system services. However, application logs often need custom rules. Create a configuration file for your application:
sudo nano /etc/logrotate.d/myapp
/home/deploy/myapp/logs/*.log {
daily
missingok
rotate 7
compress
delaycompress
notifempty
create 0640 deploy deploy
sharedscripts
postrotate
systemctl reload myapp 2>/dev/null || true
endscript
}
Test the configuration:
sudo logrotate -d /etc/logrotate.d/myapp
For a complete guide on setting up log management, including centralized logging and log analysis, see our Ubuntu VPS log management guide.
Monitoring Disk Usage with Alerts
Reactive cleanup is stressful. Set up proactive monitoring so you know when space is getting low before it becomes an emergency.
Simple Disk Usage Alert Script
Create a script that alerts you when disk usage exceeds a threshold (the mail command it uses comes from the mailutils package, so install that first if it is missing):
sudo nano /usr/local/bin/disk-alert.sh
#!/bin/bash
THRESHOLD=80
CURRENT=$(df -P / | tail -1 | awk '{print $5}' | sed 's/%//')
if [ "$CURRENT" -ge "$THRESHOLD" ]; then
echo "WARNING: Disk usage on $(hostname) is ${CURRENT}%
Top space consumers:
$(du -sh /* 2>/dev/null | sort -rh | head -10)
Docker usage:
$(docker system df 2>/dev/null || echo 'Docker not installed')
Large files (>100MB):
$(find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -10)
" | mail -s "DISK ALERT: $(hostname) at ${CURRENT}%" admin@yourcompany.com
fi
sudo chmod +x /usr/local/bin/disk-alert.sh
Schedule it to run every 6 hours via cron (see our cron guide):
sudo crontab -e
0 */6 * * * /usr/local/bin/disk-alert.sh
For comprehensive monitoring including disk usage dashboards and alerting, see our VPS monitoring setup guide.
A Complete Cleanup Script
Here is a script that combines all the cleanup steps. Run it manually or schedule it monthly:
sudo nano /usr/local/bin/vps-cleanup.sh
#!/bin/bash
set -e
echo "=== VPS Disk Cleanup ==="
echo "Before: $(df -h / | tail -1 | awk '{print $3, "used of", $2, "("$5" full)"}')"
echo ""
# APT cleanup
echo "--- APT ---"
sudo apt autoremove --purge -y 2>/dev/null
sudo apt clean
sudo apt autoclean
# Journal logs
echo "--- Journal ---"
sudo journalctl --vacuum-time=7d --vacuum-size=500M
# Old log files
echo "--- Old Logs ---"
sudo find /var/log -name "*.gz" -mtime +30 -delete 2>/dev/null || true
sudo find /var/log -name "*.log.[0-9]*" -mtime +14 -delete 2>/dev/null || true
# Temp files
echo "--- Temp Files ---"
sudo find /tmp -type f -atime +7 -delete 2>/dev/null || true
sudo find /var/tmp -type f -atime +7 -delete 2>/dev/null || true
# Docker (if installed)
if command -v docker &> /dev/null; then
echo "--- Docker ---"
docker container prune -f
docker image prune -f
docker volume prune -f
docker builder prune -f --filter "until=72h"
fi
# Snap old revisions
if command -v snap &> /dev/null; then
echo "--- Snap ---"
snap list --all | awk '/disabled/{print $1, $3}' | while read name rev; do
sudo snap remove "$name" --revision="$rev" 2>/dev/null || true
done
fi
# Application caches
echo "--- App Caches ---"
# "|| true" keeps set -e from aborting the script when a cache is absent
[ -d ~/.npm ] && npm cache clean --force 2>/dev/null || true
[ -d ~/.cache/pip ] && pip cache purge 2>/dev/null || true
[ -d ~/.cache/composer ] && rm -rf ~/.cache/composer/ || true
echo ""
echo "After: $(df -h / | tail -1 | awk '{print $3, "used of", $2, "("$5" full)"}')"
echo "Cleanup complete."
sudo chmod +x /usr/local/bin/vps-cleanup.sh
When to Clean vs. When to Scale Storage
Cleaning is the right answer when the space is consumed by temporary, cached, or obsolete data. But there are clear signals that you need more storage:
| Signal | Action |
|---|---|
| Cleanup frees 20%+ space, takes months to fill again | Keep cleaning on schedule — your current storage is fine |
| Cleanup frees space but it fills again within weeks | Your data is genuinely growing — scale storage |
| Most space is actual data (databases, uploads, application files) | Scale storage — there is nothing to clean |
| You are storing backups on the same disk as production data | Move backups off-server (S3-compatible storage, remote server) |
| Database files dominate and the database cannot be significantly shrunk | Scale storage or move the database to a dedicated server |
On a MassiveGRID Cloud VPS, if cleanup gives temporary relief but the trend is upward, add storage independently — you do not have to change your CPU or RAM allocation. Storage scales separately from compute, so you only pay for what you need.
If I/O-heavy cleanup operations like Docker pruning, database maintenance, or large find scans slow down your production workload, that is a sign you need isolated resources. A Dedicated VPS ensures that maintenance operations do not impact application performance because your CPU and I/O are not shared with other tenants.
Prefer Managed Storage?
If you would rather not think about disk space management at all — no cleanup scripts, no monitoring alerts, no manual intervention — MassiveGRID Managed Dedicated Servers include proactive storage monitoring and management. The operations team handles log rotation, automated cleanup, and storage scaling before you ever notice a problem.
For most self-managed VPS users, though, the combination of monthly cleanup (run the script above), proper log rotation, Docker log limits, and a simple disk alert is enough to keep storage under control indefinitely. The key is making it automatic — the disk space problem only becomes a crisis when nobody is watching.