Every developer eventually learns the hard way: containers are ephemeral. You spend hours configuring a database, importing data, tuning settings — then the container stops and everything vanishes. Docker volumes solve this problem by providing persistent storage that lives outside the container filesystem, surviving restarts, recreation, and even host reboots. This guide covers everything you need to know about Docker volumes on an Ubuntu VPS — from basic creation to backup strategies, migration workflows, and storage management.


Why Containers Lose Data

To understand volumes, you first need to understand why containers lose data. Every Docker container starts from an image — a read-only template containing the filesystem. When a container runs, Docker adds a thin writable layer on top of the image using a union filesystem (overlay2 on modern Ubuntu). Every file you create, modify, or delete inside the container happens in this writable layer.

The critical detail: the writable layer is tied to the container lifecycle. When you run docker rm, the writable layer is deleted. When you run docker run with the same image, you get a fresh writable layer with no trace of the previous container's changes.
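You can watch the writable layer in action with docker diff, which lists files added (A), changed (C), or deleted (D) relative to the image:

```shell
# Start a throwaway container and modify its filesystem
docker run -d --name layer-demo alpine sleep 300
docker exec layer-demo touch /newfile

# List changes in the writable layer relative to the image
docker diff layer-demo
# A /newfile

docker rm -f layer-demo
```

Everything docker diff reports lives only in that layer — and is exactly what disappears when the container is removed.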

This design is intentional. Containers are meant to be disposable — you should be able to destroy and recreate them freely. But it creates a problem for anything that needs to persist: databases, uploaded files, configuration, logs, and application state.

# Start a PostgreSQL container and create data
docker run -d --name test-db -e POSTGRES_PASSWORD=demopass postgres:16
docker exec test-db psql -U postgres -c "CREATE DATABASE myapp;"
docker exec test-db psql -U postgres -c "SELECT datname FROM pg_database;"

# Now remove and recreate
docker rm -f test-db
docker run -d --name test-db -e POSTGRES_PASSWORD=demopass postgres:16
docker exec test-db psql -U postgres -c "SELECT datname FROM pg_database;"
# myapp database is gone — the container started fresh

Docker provides three mechanisms to persist data beyond the container lifecycle: volumes, bind mounts, and tmpfs mounts. Each has different characteristics and use cases.

Volumes vs Bind Mounts vs tmpfs

Docker supports three types of mounts. The differences matter for performance, portability, and management.

Feature Named Volume Bind Mount tmpfs Mount
Storage location /var/lib/docker/volumes/ Any host path Host memory (RAM)
Managed by Docker Yes No No
Pre-populated with image data Yes (on first use) No (host path overwrites) No
Survives container removal Yes Yes (host files remain) No
Backup with Docker CLI Yes Standard file tools N/A
Performance Native filesystem speed Native filesystem speed RAM speed
Portable across hosts Yes (with migration) Depends on host paths No
Best for Databases, app data Dev code, config files Secrets, temp caches

Named volumes are the recommended approach for production data. Docker manages the storage location, handles permissions, and provides CLI tools for inspection and backup. When you mount a named volume to a container path that contains data in the image (like /var/lib/postgresql/data), Docker copies the image data into the volume on first use.
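You can observe this copy-on-first-use behavior with a throwaway container — mounting an empty named volume over a directory that already exists in the image leaves the image's files inside the volume:

```shell
# A fresh volume mounted over /etc gets the image's /etc copied in
docker volume create prepop-demo
docker run --rm -v prepop-demo:/etc alpine ls /etc | head -5

# The copied files now live in the volume itself
docker run --rm -v prepop-demo:/data alpine ls /data | head -5

docker volume rm prepop-demo
```

Note this copy only happens the first time the volume is used; a volume that already contains data is mounted as-is.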

Bind mounts map a specific host directory into the container. They are ideal for development — mount your source code directory so changes on the host appear instantly in the container. However, they depend on the host filesystem structure, making them less portable.

tmpfs mounts store data in host memory. Data is fast but disappears when the container stops. Use tmpfs for sensitive data you don't want written to disk (session tokens, temporary encryption keys) or for high-speed temporary caches.
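A minimal tmpfs example — the --tmpfs flag takes the container path plus optional standard tmpfs options such as size and mode:

```shell
# Mount a 64 MB RAM-backed directory at /app/tmp
docker run -d --name tmpfs-demo \
  --tmpfs /app/tmp:rw,size=64m,mode=1777 \
  alpine sleep 300

# Data written here never touches disk and vanishes with the container
docker exec tmpfs-demo sh -c 'echo secret > /app/tmp/token && cat /app/tmp/token'

docker rm -f tmpfs-demo
```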

Creating and Managing Named Volumes

Named volumes are the primary tool for persistent container data. Here is the complete lifecycle.

Creating Volumes

# Create a named volume
docker volume create pgdata

# Create with specific driver options
docker volume create --driver local \
  --opt type=none \
  --opt device=/mnt/fast-storage/pgdata \
  --opt o=bind \
  pgdata-custom

# List all volumes
docker volume ls

Output from docker volume ls shows the driver and volume name:

DRIVER    VOLUME NAME
local     pgdata
local     pgdata-custom
local     redis-data
local     app-uploads

Inspecting Volumes

# Show volume details
docker volume inspect pgdata

This returns JSON with the mount point and creation time:

{
    "CreatedAt": "2026-02-28T10:15:30Z",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
    "Name": "pgdata",
    "Options": {},
    "Scope": "local"
}

The Mountpoint is the actual directory on the host filesystem. You can access files there directly as root, but using Docker commands is the recommended approach.
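For example, instead of poking around /var/lib/docker as root, you can browse a volume's contents through a short-lived container:

```shell
# List a volume's files without touching /var/lib/docker directly
docker run --rm -v pgdata:/vol:ro alpine ls -la /vol
```

The :ro mount guarantees the inspection can't accidentally modify anything.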

Using Volumes with Containers

# Mount a named volume using -v flag
docker run -d --name postgres \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secretpass \
  postgres:16

# Mount using the --mount flag (more explicit, recommended)
docker run -d --name postgres \
  --mount type=volume,source=pgdata,target=/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secretpass \
  postgres:16

# Mount as read-only (useful for config)
docker run -d --name app \
  -v app-config:/etc/app/config:ro \
  myapp:latest

Removing Volumes

# Remove a specific volume (must not be in use)
docker volume rm pgdata

# Remove all unused volumes (CAREFUL — see warnings below)
docker volume prune

# Remove with force (skip confirmation)
docker volume prune -f

Warning: docker volume prune deletes unused volumes. On Docker Engine 23.0 and later it removes only anonymous unused volumes by default — add -a/--all to include named volumes. On older versions it deletes every volume not attached to a container. Either way, if you stop a database container for maintenance, its volume counts as "unused" and can be swept up by an older Docker or by prune -a. Always verify with docker volume ls before pruning.
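Before pruning, you can list exactly which volumes are currently unreferenced, and cross-check which containers use a specific volume:

```shell
# Volumes not referenced by any container (prune candidates)
docker volume ls -f dangling=true

# Containers (running or stopped) that use a given volume
docker ps -a --filter volume=pgdata
```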

Bind Mounts for Development and Configuration

Bind mounts map a host directory directly into the container. They are essential for development workflows where you need live code reloading, and for injecting configuration files into containers.

Development Bind Mounts

# Mount source code for development
docker run -d --name dev-app \
  -v /home/deploy/myapp/src:/app/src \
  -v /home/deploy/myapp/package.json:/app/package.json \
  -p 3000:3000 \
  node:20 npm run dev

# Mount using the --mount syntax (explicit key=value form; read-write by default)
docker run -d --name dev-app \
  --mount type=bind,source=/home/deploy/myapp,target=/app \
  -p 3000:3000 \
  node:20 npm run dev

Configuration Bind Mounts

# Inject Nginx config
docker run -d --name nginx \
  -v /home/deploy/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /home/deploy/nginx/sites:/etc/nginx/conf.d:ro \
  -v /home/deploy/certs:/etc/ssl/certs:ro \
  -p 80:80 -p 443:443 \
  nginx:alpine

# Inject application environment
docker run -d --name app \
  -v /home/deploy/app/.env:/app/.env:ro \
  myapp:latest

Notice the :ro suffix — this mounts the files as read-only inside the container. Use this for configuration files to prevent the container from accidentally modifying host files.

Bind Mount Permissions

A common issue: the container process runs as a different user than the host file owner, causing permission denied errors. Fix this by matching UIDs:

# Check the UID of the postgres user inside the container
docker run --rm postgres:16 id postgres
# uid=999(postgres) gid=999(postgres)

# Set host directory ownership to match
sudo chown -R 999:999 /home/deploy/pgdata

# Or run the container with a specific user
docker run -d --name app \
  --user "$(id -u):$(id -g)" \
  -v /home/deploy/app-data:/data \
  myapp:latest

Volume Drivers and Storage Locations

By default, Docker stores named volumes under /var/lib/docker/volumes/ on the host. Each volume gets its own directory with a _data subdirectory containing the actual files.

# View the volume storage hierarchy
sudo ls -la /var/lib/docker/volumes/

# Output:
# drwx-----x  3 root root 4096 Feb 28 10:15 pgdata
# drwx-----x  3 root root 4096 Feb 28 10:20 redis-data
# drwx-----x  3 root root 4096 Feb 28 10:25 app-uploads
# -rw-------  1 root root 32768 Feb 28 10:30 metadata.db

# View actual data inside a volume
sudo ls -la /var/lib/docker/volumes/pgdata/_data/

The local driver is the default and stores data on the host filesystem. Docker also supports third-party volume drivers for network storage, cloud storage, and distributed filesystems. On an Ubuntu VPS, the local driver is almost always the right choice — your volume data resides on the same high-performance storage as the rest of your system.

Changing the Docker Data Root

If your root partition is small, you may want to move Docker's storage to a different mount point. If you have followed our guide on managing disk space on Ubuntu VPS, you understand how disk partitions work.

# Stop Docker
sudo systemctl stop docker

# Edit or create Docker daemon config
sudo nano /etc/docker/daemon.json

Add the data-root setting to /etc/docker/daemon.json:

{
  "data-root": "/mnt/storage/docker"
}

# Copy existing data to new location (trailing slashes matter to rsync)
sudo rsync -aP /var/lib/docker/ /mnt/storage/docker/

# Start Docker
sudo systemctl start docker

# Verify
docker info | grep "Docker Root Dir"

Volumes in Docker Compose

Docker Compose makes volume management declarative. You define volumes in docker-compose.yml and Compose handles creation, naming, and attachment. If you haven't set up Docker yet, start with our Docker installation guide.

Named Volumes in Compose

version: "3.8"

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    restart: unless-stopped

  app:
    image: myapp:latest
    volumes:
      - app-uploads:/app/uploads
      - ./config/app.conf:/app/config/app.conf:ro
    depends_on:
      - postgres
      - redis
    restart: unless-stopped

volumes:
  pgdata:
  redis-data:
  app-uploads:

Compose prefixes volume names with the project name (usually the directory name). A volume defined as pgdata in a project directory called myapp becomes myapp_pgdata. Check with docker volume ls.
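If you want a stable volume name regardless of the project directory, the Compose specification supports an explicit name key on the volume definition:

```yaml
volumes:
  pgdata:
    name: pgdata    # used as-is, no project-name prefix
```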

External Volumes

External volumes are created outside of Compose and referenced by name. This is useful when multiple Compose projects share data, or when you want volumes to survive docker compose down -v.

# Create the volume manually first
docker volume create shared-assets

# Reference it in docker-compose.yml
volumes:
  shared-assets:
    external: true

Shared Volumes Between Services

Multiple services can mount the same volume. A common pattern is an app container that writes uploaded files and an Nginx container that serves them:

version: "3.8"

services:
  app:
    image: myapp:latest
    volumes:
      - uploads:/app/public/uploads
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    volumes:
      - uploads:/usr/share/nginx/html/uploads:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    restart: unless-stopped

volumes:
  uploads:

The :ro on the Nginx mount ensures Nginx can read uploaded files but cannot modify them — only the app service writes to the volume.
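Assuming the services are up under this Compose file, you can verify the sharing end to end:

```shell
# Write a file through the app service, read it back through nginx
docker compose exec app sh -c 'echo hello > /app/public/uploads/test.txt'
docker compose exec nginx cat /usr/share/nginx/html/uploads/test.txt

# The read-only mount means nginx cannot write
docker compose exec nginx sh -c 'touch /usr/share/nginx/html/uploads/x' \
  && echo "unexpected write" || echo "read-only, as intended"
```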

Volume Configuration Options

volumes:
  pgdata:
    driver: local
    driver_opts:
      type: none
      device: /mnt/ssd/postgres
      o: bind
    labels:
      com.myapp.description: "PostgreSQL data"
      com.myapp.backup: "daily"

Backing Up Docker Volumes

This is the most important section in this guide. Volumes contain your persistent data — databases, uploads, configuration. Losing volume data means losing your application state. Our automated backup guide covers system-level backups, but Docker volumes need specific handling.

Method 1: Tar Archive with a Temporary Container

The standard approach: run a temporary container that mounts the volume and creates a tar archive.

# Backup a volume to a tar file
docker run --rm \
  -v pgdata:/source:ro \
  -v /home/deploy/backups:/backup \
  alpine tar czf /backup/pgdata-$(date +%Y%m%d-%H%M%S).tar.gz -C /source .

# Restore from tar
docker run --rm \
  -v pgdata:/target \
  -v /home/deploy/backups:/backup \
  alpine sh -c "cd /target && tar xzf /backup/pgdata-20260228-101500.tar.gz"

This method works for any volume. The :ro flag on the source mount prevents accidental modification during backup.
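It's worth verifying an archive before you rely on it — check gzip integrity and preview the contents without extracting:

```shell
# Check archive integrity and list contents (using the archive from above)
BACKUP=/home/deploy/backups/pgdata-20260228-101500.tar.gz
gzip -t "$BACKUP" && echo "archive OK"
tar tzf "$BACKUP" | head -20
```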

Method 2: Database-Specific Dumps

For databases, logical dumps are often better than raw file copies because they produce portable, version-independent backups. See our guides on PostgreSQL setup and MySQL/MariaDB tuning for more on database management.

# PostgreSQL dump
docker exec postgres pg_dumpall -U postgres > /home/deploy/backups/pg-all-$(date +%Y%m%d).sql

# MySQL/MariaDB dump
docker exec mariadb mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases > /home/deploy/backups/mysql-all-$(date +%Y%m%d).sql

# Redis backup (trigger save, then copy)
docker exec redis redis-cli BGSAVE
docker cp redis:/data/dump.rdb /home/deploy/backups/redis-$(date +%Y%m%d).rdb

Method 3: Volume Backup Script

A reusable script that backs up all named volumes:

#!/bin/bash
# backup-volumes.sh — backup all Docker named volumes

BACKUP_DIR="/home/deploy/backups/volumes"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=7

mkdir -p "$BACKUP_DIR"

# Get all volume names
VOLUMES=$(docker volume ls -q)

for VOLUME in $VOLUMES; do
    echo "Backing up volume: $VOLUME"
    docker run --rm \
        -v "$VOLUME":/source:ro \
        -v "$BACKUP_DIR":/backup \
        alpine tar czf "/backup/${VOLUME}-${TIMESTAMP}.tar.gz" -C /source .

    if [ $? -eq 0 ]; then
        echo "  ✓ Backed up: ${VOLUME}-${TIMESTAMP}.tar.gz"
    else
        echo "  ✗ Failed: $VOLUME"
    fi
done

# Remove backups older than retention period
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
echo "Cleaned up backups older than $RETENTION_DAYS days"

# Show backup sizes
echo ""
echo "Current backups:"
du -sh "$BACKUP_DIR"/*.tar.gz 2>/dev/null | sort -rh

Save the script, then make it executable and schedule it:

# Make executable and schedule
chmod +x /home/deploy/backup-volumes.sh

# Add to crontab — daily at 3 AM
crontab -e
# 0 3 * * * /home/deploy/backup-volumes.sh >> /home/deploy/backups/backup.log 2>&1

For more on scheduling automated tasks, see our cron jobs and task scheduling guide.

Method 4: docker cp

For quick, one-off copies of specific files from a running container:

# Copy a file from container to host
docker cp postgres:/var/lib/postgresql/data/postgresql.conf ./postgresql.conf

# Copy a directory
docker cp app:/app/uploads/ /home/deploy/backups/uploads/

# Copy from host to container
docker cp ./custom.conf postgres:/var/lib/postgresql/data/postgresql.conf

Migrating Volumes Between Servers

When you outgrow your current server or need to move to a different datacenter, volume migration is a critical step. If you are planning a broader migration strategy, our disaster recovery guide covers the full picture.

Method 1: tar + rsync (Recommended)

# On the source server: create tar archives
docker run --rm \
  -v pgdata:/source:ro \
  -v /tmp/migration:/backup \
  alpine tar czf /backup/pgdata.tar.gz -C /source .

docker run --rm \
  -v redis-data:/source:ro \
  -v /tmp/migration:/backup \
  alpine tar czf /backup/redis-data.tar.gz -C /source .

# Transfer to destination server
rsync -avz --progress /tmp/migration/ deploy@new-server:/tmp/migration/

# On the destination server: create volumes and restore
docker volume create pgdata
docker volume create redis-data

docker run --rm \
  -v pgdata:/target \
  -v /tmp/migration:/backup \
  alpine sh -c "cd /target && tar xzf /backup/pgdata.tar.gz"

docker run --rm \
  -v redis-data:/target \
  -v /tmp/migration:/backup \
  alpine sh -c "cd /target && tar xzf /backup/redis-data.tar.gz"

Method 2: Direct rsync Over SSH

For large volumes where you want incremental transfers:

# Sync volume data directly between servers (requires root or matching permissions)
sudo rsync -avz --progress \
  /var/lib/docker/volumes/pgdata/_data/ \
  root@new-server:/var/lib/docker/volumes/pgdata/_data/

Important: Stop the containers using the volumes before migrating to ensure data consistency. Running database containers may have data in memory buffers that hasn't been flushed to disk.
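A safe migration simply brackets the transfer with stop/start (container names here match the earlier examples):

```shell
# Quiesce writers so buffers are flushed, then migrate
docker stop postgres redis

# ... run the tar/rsync transfer here ...

# Bring services back up (on the source if rolling back, or skip if retiring it)
docker start postgres redis
```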

Method 3: Database-Native Replication

For databases, setting up replication between old and new servers enables zero-downtime migration:

# On the new server: take a base backup from the source primary.
# pg_basebackup -R writes primary_conninfo and standby.signal, so the
# data directory comes up as a streaming replica that stays in sync
# until you're ready to switch. Assumes a 'replicator' role with
# REPLICATION privilege already exists on the source.
docker run --rm --user postgres \
  -e PGPASSWORD=replpass \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16 \
  pg_basebackup -h old-server -p 5432 -U replicator \
    -D /var/lib/postgresql/data -R

# Start the replica from the prepared data directory
docker run -d --name postgres-replica \
  -e POSTGRES_PASSWORD=secretpass \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

Volume Size Management

Volumes grow over time. Database write-ahead logs, uploaded files, and application logs accumulate. Monitoring volume sizes prevents surprise disk full errors.

Finding Volume Sizes

# Size of all volumes
sudo du -sh /var/lib/docker/volumes/*/

# Output:
# 2.4G    /var/lib/docker/volumes/pgdata/
# 156M    /var/lib/docker/volumes/redis-data/
# 890M    /var/lib/docker/volumes/app-uploads/
# 45M     /var/lib/docker/volumes/nginx-logs/

# Size of a specific volume
sudo du -sh /var/lib/docker/volumes/pgdata/_data/

# Total Docker disk usage (images, containers, volumes, build cache)
docker system df

# Detailed breakdown
docker system df -v

The docker system df command provides a high-level overview:

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          12        5         3.4GB     1.8GB (52%)
Containers      5         5         120MB     0B (0%)
Local Volumes   4         4         3.5GB     0B (0%)
Build Cache     23        0         890MB     890MB

Cleaning Up Large Volumes

# Find large files inside a volume
sudo find /var/lib/docker/volumes/pgdata/_data/ -type f -size +100M -exec ls -lh {} \;

# For PostgreSQL: clean up WAL files (inside the container)
docker exec postgres psql -U postgres -c "SELECT pg_size_pretty(pg_database_size('myapp'));"

# Clean old WAL segments
docker exec postgres pg_archivecleanup /var/lib/postgresql/data/pg_wal 000000010000000000000010

# For application volumes: find old files
sudo find /var/lib/docker/volumes/app-uploads/_data/ -type f -mtime +90 -ls

Docker System Prune — What Gets Deleted

Docker provides several prune commands to reclaim disk space. Understanding exactly what each command deletes is critical — accidentally pruning a volume means losing data permanently.

Command What Gets Deleted What Stays Safe
docker container prune All stopped containers Running containers, images, volumes
docker image prune Dangling images (untagged) Tagged images, containers, volumes
docker image prune -a All unused images Images used by running containers
docker volume prune Unused anonymous volumes (all unused volumes with -a, or on Docker older than 23.0) Volumes attached to running containers; named volumes (Docker 23.0+ default)
docker network prune All unused networks Default networks, networks in use
docker system prune Stopped containers + dangling images + unused networks Volumes (not included by default)
docker system prune -a --volumes Everything unused: containers, images, networks, AND volumes Only resources attached to running containers

Critical warning: Never run docker system prune --volumes without verifying what will be deleted. If you stopped a database container for maintenance, its volume is "unused" and will be deleted. Always run docker volume ls and docker ps -a first to verify what's running.

The safe cleanup approach:

# Step 1: See what's running and stopped
docker ps -a

# Step 2: See all volumes and which are in use
docker volume ls
docker ps --format '{{.Mounts}}'

# Step 3: Remove only what you're sure about
docker container prune  # Remove stopped containers
docker image prune      # Remove dangling images only
# DO NOT prune volumes unless you've verified they're expendable

# Step 4: Clean build cache (always safe)
docker builder prune

For a thorough approach to disk management on your VPS, see our guide on managing disk space.

Volume Performance Considerations

Volume performance depends on the underlying storage. I/O-intensive workloads — databases, search indexes, log aggregation — benefit from fast storage and dedicated resources.

Monitoring Volume I/O

# Monitor container I/O in real time
docker stats --format "table {{.Name}}\t{{.BlockIO}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Check disk I/O at the system level
iostat -xz 1 5

# Monitor specific volume path I/O
sudo iotop -o -d 2

If you're running Prometheus and Grafana, you can set up dashboards to track volume I/O over time and alert on throughput degradation.

Volume Mount Options for Performance

# Use delegated consistency for macOS development (not needed on Linux VPS)
# On Linux, volumes use native filesystem performance by default

# For databases: ensure data directory has proper filesystem options
# Check filesystem type
df -T /var/lib/docker/volumes/

# PostgreSQL performance: set noatime on the volume mount
# In /etc/fstab for a dedicated partition:
# /dev/vdb1 /mnt/docker-data ext4 defaults,noatime,nodiratime 0 2

Docker Volumes on Ceph NVMe Storage

Docker volumes on a MassiveGRID Cloud VPS are stored on Ceph 3x replicated NVMe. Data survives container restarts and hardware events. The Ceph storage cluster replicates every write to three independent NVMe drives across different physical nodes — if a drive or node fails, your volume data remains intact and accessible without intervention.

This infrastructure-level replication works transparently beneath Docker. Your volumes operate as standard local volumes, but with enterprise-grade data protection. There is no additional configuration required — every docker volume create command automatically benefits from the underlying Ceph replication.

For most containerized applications — web applications, content management systems, lightweight databases, Redis caches — a Cloud VPS provides excellent volume performance on shared NVMe storage. The Ceph cluster distributes I/O across the storage network, providing consistent performance for typical workloads.

I/O-Heavy Volume Workloads

Database containers with heavy write workloads need consistent I/O. When your PostgreSQL container handles hundreds of transactions per second, or your Elasticsearch instance indexes thousands of documents per minute, I/O consistency becomes critical.

Symptoms of I/O contention on shared resources include rising write latency, query times that vary widely under identical load, and high iowait while CPU usage stays low.

For these workloads, Cloud VDS with dedicated resources provides isolated I/O bandwidth. Your volume writes don't compete with other tenants, delivering the consistent latency that databases require.

You can benchmark your current volume I/O to determine if you need dedicated resources:

# Test write performance inside a container volume
docker run --rm -v pgdata:/data alpine sh -c \
  "dd if=/dev/zero of=/data/testfile bs=1M count=1024 conv=fdatasync 2>&1 | tail -1"

# Test random I/O with fio
docker run --rm -v pgdata:/data ljishen/fio \
  --name=randwrite --ioengine=libaio --rw=randwrite \
  --bs=4k --numjobs=4 --size=256M --runtime=30 \
  --directory=/data --group_reporting

If your random 4K write latency exceeds 1ms consistently, or your sequential throughput varies by more than 30% between runs, dedicated resources will improve your application's consistency.

Putting It All Together

Here is a complete Docker Compose setup demonstrating all volume concepts — named volumes for databases, bind mounts for configuration, shared volumes between services, and a backup sidecar container:

version: "3.8"

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./config/postgresql.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    secrets:
      - db_password
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --maxmemory 256mb
    volumes:
      - redis-data:/data
    restart: unless-stopped

  app:
    image: myapp:latest
    volumes:
      - uploads:/app/public/uploads
      - ./config/app.env:/app/.env:ro
    depends_on:
      - postgres
      - redis
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    volumes:
      - uploads:/usr/share/nginx/html/uploads:ro
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      - nginx-logs:/var/log/nginx
    ports:
      - "80:80"
      - "443:443"
    restart: unless-stopped

  backup:
    image: alpine
    volumes:
      - pgdata:/source/pgdata:ro
      - redis-data:/source/redis-data:ro
      - uploads:/source/uploads:ro
      - /home/deploy/backups:/backups
    entrypoint: /bin/sh
    command: -c "while true; do
      echo 'Starting backup at '$$(date);
      tar czf /backups/pgdata-$$(date +%Y%m%d).tar.gz -C /source/pgdata .;
      tar czf /backups/redis-$$(date +%Y%m%d).tar.gz -C /source/redis-data .;
      tar czf /backups/uploads-$$(date +%Y%m%d).tar.gz -C /source/uploads .;
      find /backups -name '*.tar.gz' -mtime +7 -delete;
      echo 'Backup complete, sleeping 24h';
      sleep 86400;
      done"
    restart: unless-stopped

secrets:
  db_password:
    file: ./secrets/db_password.txt

volumes:
  pgdata:
  redis-data:
  uploads:
  nginx-logs:

This setup provides persistent data for PostgreSQL and Redis, shared upload storage between the app and Nginx, bind-mounted configuration, and automated daily backups with 7-day retention. For monitoring this entire stack, see our guides on Uptime Kuma and Prometheus and Grafana.

Volumes are the foundation of persistent Docker infrastructure. Master them, and your containers become truly reliable — data survives restarts, updates, migrations, and hardware failures. Neglect them, and you're one docker rm away from disaster.