Docker has transformed how applications are deployed on Ubuntu VPS servers, bringing reproducibility, isolation, and speed to every stage of the development lifecycle. But containers are not inherently secure. A misconfigured Docker environment can expose your host system, leak sensitive data, or give attackers a direct path to root access. The convenience of pulling images and spinning up containers in seconds masks real security risks that demand deliberate attention.
This guide covers the full spectrum of Docker container security on Ubuntu VPS: scanning images for vulnerabilities before they reach production, hardening containers at runtime, locking down the Docker daemon, and building sustainable practices that keep your environment secure over time. Whether you run a single application container or orchestrate dozens of services, these practices apply.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Understanding Docker's Attack Surface
Before hardening anything, you need to understand why Docker requires special security attention. The Docker daemon runs as root. Every container launched through the daemon inherits access to kernel features, and by default, containers share the host's kernel. This architecture means a container escape — where a process breaks out of its namespace isolation — grants root-level access to the host system.
The primary attack surfaces include:
- The Docker daemon socket — Anyone with access to /var/run/docker.sock effectively has root on the host. Mounting this socket into a container (a common but dangerous pattern) gives that container full control over Docker, including the ability to launch privileged containers.
- Container images — Images pulled from registries may contain known CVEs, embedded malware, outdated packages, or hardcoded credentials. Supply chain attacks targeting popular base images have occurred multiple times.
- Runtime configuration — Default container settings are more permissive than necessary. Containers run as root by default, retain Linux capabilities they do not need, and can write to their own filesystem.
- Network exposure — Docker manipulates iptables directly, which can bypass your UFW firewall rules and expose container ports to the internet without your knowledge.
- Resource exhaustion — Without limits, a single container can consume all host memory, CPU, or process IDs, creating denial-of-service conditions for the entire VPS.
Each of these surfaces requires specific countermeasures. We will address them systematically.
Image Security: Scanning, Selecting, and Verifying
Container security starts before a single container runs. The images you use form the foundation, and a compromised or vulnerable image undermines every runtime protection you apply on top of it.
Scanning Images with Trivy
Trivy is the industry-standard open-source vulnerability scanner for container images. It checks OS packages and application dependencies against multiple vulnerability databases and returns results categorized by severity. Install it on your Ubuntu VPS:
sudo apt-get install -y wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo gpg --dearmor -o /usr/share/keyrings/trivy.gpg
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install -y trivy
Scan any image before deploying it:
# Scan an image for HIGH and CRITICAL vulnerabilities
trivy image --severity HIGH,CRITICAL nginx:1.27
# Scan and fail if critical vulnerabilities are found (useful in CI)
trivy image --exit-code 1 --severity CRITICAL your-app:latest
# Scan a local Dockerfile's resulting image
docker build -t myapp:test .
trivy image myapp:test
Trivy also scans filesystem paths, making it useful for checking application dependencies before they are even containerized:
trivy fs --severity HIGH,CRITICAL /path/to/your/project
Make Trivy scanning a non-negotiable step in your workflow. No image should reach production without passing a scan.
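To enforce that rule automatically rather than by discipline, wire the scan into your CI pipeline. The fragment below is a sketch in GitHub Actions step syntax; the image name myapp and the job layout are illustrative placeholders, not part of any real pipeline:

```yaml
# Illustrative CI steps: build, then fail the pipeline on HIGH/CRITICAL findings
- name: Build image
  run: docker build -t myapp:${{ github.sha }} .
- name: Scan image with Trivy
  run: trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:${{ github.sha }}
```

With --exit-code 1, any HIGH or CRITICAL finding fails the step and blocks the deploy.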
Choosing and Pinning Base Images
Always use official images from Docker Hub or verified publishers. Official images receive regular security updates and follow documented build processes. Beyond that:
- Prefer minimal base images. Use -slim or -alpine variants. A smaller image has fewer packages, which means fewer potential vulnerabilities. The node:22-slim image contains a fraction of the attack surface found in node:22.
- Pin image versions explicitly. Never use :latest in production. Instead of FROM python:3.12, use FROM python:3.12.8-slim-bookworm. Better yet, pin by digest: FROM python@sha256:abc123.... This guarantees reproducibility and prevents supply chain attacks where a tag is overwritten with a compromised image.
- Use multi-stage builds. Compile your application in a build stage with full tooling, then copy only the binary or artifacts into a minimal runtime image. This eliminates compilers, package managers, and build tools from the final image.
# Multi-stage build example
FROM golang:1.23-bookworm AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o server .
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
USER nonroot:nonroot
ENTRYPOINT ["/server"]
Image Signing and Verification
Docker Content Trust (DCT) uses digital signatures to verify image integrity. When enabled, Docker only pulls signed images:
export DOCKER_CONTENT_TRUST=1
docker pull nginx:1.27 # Will fail if the image is not signed
For your own images, sign them when pushing to a registry. This creates a chain of trust from build to deployment. We will cover enabling DCT permanently in the daemon configuration section below.
Runtime Security: Locking Down Running Containers
Even with clean images, how you run containers determines your actual security posture. Default Docker settings are designed for developer convenience, not production security. Every container in production should have explicit security constraints.
Run as a Non-Root User
By default, the process inside a container runs as root (UID 0). If an attacker exploits a vulnerability in your application, they have root inside the container, which is one step closer to root on the host. Always specify a non-root user:
# In your Dockerfile
RUN groupadd -r appuser && useradd -r -g appuser -d /home/appuser -s /sbin/nologin appuser
USER appuser
Or enforce it at runtime:
docker run --user 1000:1000 your-image:tag
Some official images already include a non-root user. The Node.js official image, for example, includes a node user. Check the image documentation and use it.
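As a minimal sketch, a Dockerfile that reuses the built-in node user from the official Node.js image could look like this (server.js is a placeholder entrypoint):

```dockerfile
FROM node:22-slim
WORKDIR /app
# --chown makes the non-root user the owner of the application files
COPY --chown=node:node . .
USER node
CMD ["node", "server.js"]
```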
Read-Only Root Filesystem
Most applications do not need to write to the container's filesystem at runtime. Making the root filesystem read-only prevents attackers from writing scripts, downloading tools, or modifying application code inside the container:
docker run --read-only --tmpfs /tmp --tmpfs /run your-image:tag
The --tmpfs flags mount writable temporary filesystems where applications commonly need write access. For specific data directories, use named volumes:
docker run --read-only --tmpfs /tmp -v app-data:/var/lib/app your-image:tag
Drop All Capabilities, Add Only What You Need
Linux capabilities divide root's powers into discrete units. By default, Docker grants containers a set of capabilities that most applications never use. Drop them all and add back only what is required:
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE your-web-app:tag
Common capabilities and when you might need them:
- NET_BIND_SERVICE — bind to ports below 1024
- CHOWN — change file ownership (some init processes need this)
- SETUID / SETGID — switch user/group (required if the entrypoint drops privileges)
- DAC_OVERRIDE — bypass file permission checks (avoid if possible)
Start with --cap-drop ALL and add capabilities one at a time only when your application fails without them. This minimal-privilege approach dramatically reduces what an attacker can do inside a compromised container.
Never Use Privileged Mode
The --privileged flag gives a container nearly unrestricted access to the host, including all devices, all capabilities, and the ability to modify the host's kernel parameters. There is almost never a legitimate reason to run a production container in privileged mode:
# NEVER do this in production
docker run --privileged some-image:tag
# If you need specific device access, use --device instead
docker run --device=/dev/snd some-audio-app:tag
Seccomp and AppArmor Profiles
Docker applies a default seccomp profile that blocks approximately 44 dangerous syscalls (including mount, reboot, and kexec_load). Make sure you are not disabling it:
# NEVER do this — it disables seccomp entirely
docker run --security-opt seccomp=unconfined some-image:tag
For tighter security, create a custom seccomp profile that only allows the specific syscalls your application needs. Docker's documentation provides guidance on generating these profiles.
AppArmor provides mandatory access control on Ubuntu. Docker loads a default AppArmor profile (docker-default) that restricts mount operations, write access to sensitive proc/sys paths, and more. Verify it is active:
# The default docker-default profile is applied implicitly, so it will not show up in SecurityOpt
sudo aa-status | grep docker-default
docker inspect --format='{{.AppArmorProfile}}' container_name
On an Ubuntu VPS running on MassiveGRID's Proxmox-based infrastructure, your containers benefit from an additional isolation boundary: the VPS itself runs inside a hardware-virtualized environment with dedicated kernel space, so a container escape is still contained within the VM.
Prevent Privilege Escalation
Even with a non-root user, processes inside a container can potentially escalate privileges through setuid binaries. Block this explicitly:
docker run --security-opt no-new-privileges:true your-image:tag
This flag prevents any process in the container from gaining additional privileges through setuid/setgid binaries or file capabilities. It should be applied to every production container.
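The runtime constraints from this section can be declared once in Docker Compose so that every deployment applies them consistently. A sketch, assuming a generic web service (the image name and UID are placeholders):

```yaml
services:
  webapp:
    image: your-app:tag
    user: "1000:1000"        # non-root user
    read_only: true          # read-only root filesystem
    tmpfs:
      - /tmp
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE     # only if the app binds a port below 1024
    security_opt:
      - no-new-privileges:true
```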
Network Security: Docker and Your Firewall
Docker's networking model creates one of the most common and dangerous misconfigurations on Ubuntu VPS servers. Understanding and fixing it is essential.
The Docker and UFW Bypass Problem
When you publish a port with -p 8080:80, Docker inserts iptables rules in the DOCKER chain that bypass UFW entirely. You may have UFW configured to deny all incoming traffic, yet your container port is wide open to the internet. This is not a bug — it is how Docker's networking operates, and it catches many administrators off guard.
The fix involves configuring Docker to bind only to localhost and using UFW or a reverse proxy for external access. For a complete walkthrough of advanced UFW rules that work correctly with Docker, see our guide on UFW firewall advanced rules for Ubuntu VPS.
The quick approach:
# Bind container ports to localhost only
docker run -p 127.0.0.1:8080:80 your-app:tag
# Then use a reverse proxy (Nginx, Caddy) to handle external traffic
# The reverse proxy runs on the host or in its own container with proper port binding
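As a sketch, a host-level Nginx server block that forwards external traffic to the localhost-bound container might look like this (the domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the container port bound to localhost
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```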
Alternatively, disable Docker's iptables manipulation and manage rules manually:
# In /etc/docker/daemon.json
{
"iptables": false
}
Warning: disabling Docker's iptables management means you must manually configure all container networking rules. This approach requires thorough understanding of iptables and is recommended only for experienced administrators.
Use Internal Networks
Containers that communicate with each other but do not need external access should be on an internal Docker network:
# Create an internal network (no outbound internet access)
docker network create --internal backend-net
# Attach containers that only need to talk to each other
docker run --network backend-net --name database postgres:16
docker run --network backend-net --name cache redis:7-alpine
Use a multi-network approach for containers that need both internal communication and limited external access:
docker network create frontend-net
docker network create --internal backend-net
# Web app connects to both networks
docker run --network frontend-net --name webapp your-app:tag
docker network connect backend-net webapp
# Database only on internal network
docker run --network backend-net --name db postgres:16
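The same topology is easier to maintain in Compose, where the internal flag lives in the network definition. A sketch with placeholder image names:

```yaml
services:
  webapp:
    image: your-app:tag
    networks: [frontend-net, backend-net]
  db:
    image: postgres:16
    networks: [backend-net]    # reachable only from webapp

networks:
  frontend-net:
  backend-net:
    internal: true             # no route to the outside world
```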
Limit Published Ports
Only publish ports that genuinely need external access. A common mistake is publishing database ports:
# WRONG — exposes PostgreSQL to the internet
docker run -p 5432:5432 postgres:16
# RIGHT — no published port, accessible only from linked containers
docker run --network backend-net postgres:16
Audit your running containers regularly to check for unnecessary port exposure:
docker ps --format "table {{.Names}}\t{{.Ports}}"
Resource Limits: Preventing Denial of Service
Without resource constraints, a single misbehaving container can starve the entire VPS of memory, CPU, or process IDs. On a MassiveGRID VDS with dedicated resources, you have precise resource allocation — but even dedicated resources need protection against runaway containers.
Memory Limits
# Hard memory limit — container is killed if it exceeds this
docker run --memory 512m your-app:tag
# Memory plus swap limit
docker run --memory 512m --memory-swap 768m your-app:tag
# Memory reservation (soft limit) — applied during contention
docker run --memory 512m --memory-reservation 256m your-app:tag
Always set memory limits. An application with a memory leak will eventually consume all available RAM and trigger the Linux OOM killer, which may terminate critical processes including other containers or system services.
CPU Limits
# Limit to 1.5 CPU cores
docker run --cpus 1.5 your-app:tag
# Relative CPU weight (default 1024, lower = less priority)
docker run --cpu-shares 512 your-app:tag
# Pin to specific CPU cores (useful for NUMA-aware workloads)
docker run --cpuset-cpus "0,1" your-app:tag
PID Limits
A fork bomb inside a container can create thousands of processes and freeze the host. Limit the number of processes a container can spawn:
docker run --pids-limit 100 your-app:tag
For most web applications, a PID limit of 100-200 is generous. Adjust based on your application's actual process model.
Docker Compose Resource Configuration
In Docker Compose, specify limits under the deploy key:
services:
webapp:
image: your-app:tag
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
pids: 100
reservations:
cpus: '0.25'
memory: 128M
Note: outside of Swarm mode, these deploy limits are honored by Compose V2 (docker compose up); the legacy docker-compose V1 ignored them unless run with --compatibility. If in doubt, use the equivalent docker run flags directly.
Daemon Hardening: Securing the Docker Engine
The Docker daemon itself requires configuration to operate securely. Create or edit /etc/docker/daemon.json:
{
"icc": false,
"no-new-privileges": true,
"userns-remap": "default",
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"live-restore": true,
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 65536,
"Soft": 32768
}
}
}
Key settings explained:
- icc: false — Disables inter-container communication on the default bridge network. Containers must be explicitly connected through user-defined networks to communicate. This prevents a compromised container from scanning and attacking other containers.
- no-new-privileges: true — Applies the no-new-privileges flag to all containers by default, so you do not need to specify it on every docker run command.
- userns-remap: "default" — Enables user namespace remapping, which maps the root user inside the container to a non-root user on the host. This significantly reduces the impact of container escapes.
- log-driver and log-opts — Limit log file sizes to prevent disk exhaustion from verbose containers.
- live-restore: true — Keeps containers running if the Docker daemon restarts, improving availability.
After modifying daemon.json, restart Docker:
sudo systemctl restart docker
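After the restart, user namespace remapping creates a dockremap user whose subordinate ID range is recorded in /etc/subuid and /etc/subgid; container UID 0 then maps to the start of that range on the host. The exact values vary per system, so the entry below is illustrative:

```
# /etc/subuid (a matching line appears in /etc/subgid)
dockremap:165536:65536
```

Be aware that remapping changes file ownership expectations for existing volumes, so test it before enabling it on a host that already has data in place.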
Protect the Docker Socket
The Docker socket (/var/run/docker.sock) is the gateway to full host control. Protect it:
# Verify socket permissions
ls -la /var/run/docker.sock
# Should be: srw-rw---- root docker
# Only add trusted users to the docker group
sudo usermod -aG docker trusted-user
# NEVER mount the socket into containers unless absolutely necessary
# If you must (e.g., for CI runners), use read-only access and a socket proxy
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro socket-proxy:tag
Consider using a Docker socket proxy like Tecnativa's docker-socket-proxy for services that need limited Docker API access. It allows you to whitelist specific API endpoints while blocking dangerous operations.
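A sketch of that pattern in Compose, assuming Tecnativa's image and its environment-variable switches (here CONTAINERS=1 grants read access to container listings and POST=0 denies state-changing requests; the agent service is a placeholder):

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1    # allow read-only /containers endpoints
      POST: 0          # deny all write operations
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  monitoring-agent:
    image: your-agent:tag                    # placeholder service
    environment:
      DOCKER_HOST: tcp://socket-proxy:2375   # talk to the proxy, not the socket
```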
Enable Docker Content Trust Permanently
Add DCT to your shell environment so it applies to all Docker operations:
# Add to /etc/environment or your shell profile
DOCKER_CONTENT_TRUST=1
With this enabled, docker pull and docker push operations will require signed images by default.
Ongoing Security: Monitoring and Updates
Security is not a one-time configuration. Container environments require continuous monitoring, regular scanning, and disciplined update practices.
Automated Image Updates with Watchtower
Watchtower monitors running containers and automatically updates them when new image versions are available:
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--schedule "0 0 4 * * *" \
--cleanup
Important caveats with Watchtower:
- Watchtower pulls the latest tag by default. If you pin image versions (as recommended above), Watchtower will not find updates unless the tag itself is updated. This creates a tension between version pinning for security and automatic updates for patching.
- Automatic updates can break applications. Use Watchtower selectively — perhaps only for infrastructure containers (reverse proxy, monitoring agents) that track stable release channels. For application containers, prefer a deliberate CI/CD pipeline with testing.
- Watchtower requires socket access. This is a necessary trade-off. Limit it by using the --label-enable flag and only labeling specific containers for automatic updates.
# Only update containers with the specific label
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--label-enable \
--schedule "0 0 4 * * *" \
--cleanup
# Label containers that should auto-update
docker run -d --label com.centurylinklabs.watchtower.enable=true nginx:1.27
Scanning Running Containers
Trivy can scan running containers, not just images. Schedule regular scans to catch vulnerabilities discovered after deployment:
# Scan a running container
trivy image $(docker inspect --format='{{.Image}}' container_name)
# Scan all running containers
for img in $(docker ps --format '{{.Image}}' | sort -u); do
echo "=== Scanning $img ==="
trivy image --severity HIGH,CRITICAL "$img"
done
Set up a cron job to run weekly scans and email the results, or integrate with your monitoring stack. For centralized log analysis of container events and scan results, our guide on building a Loki and Grafana log pipeline on Ubuntu VPS covers how to aggregate and visualize logs from all your Docker containers.
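One way to schedule that loop, as a sketch: save it as an executable script and reference it from a cron entry (the script path and schedule are placeholders):

```
# /etc/cron.d/container-scan: scan all running images every Monday at 04:00
0 4 * * 1 root /usr/local/bin/container-scan.sh >> /var/log/container-scan.log 2>&1
```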
Docker Bench Security
Docker Bench for Security is an official script that checks your Docker installation against CIS benchmarks:
docker run --rm --net host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /etc:/etc:ro \
-v /usr/bin/containerd:/usr/bin/containerd:ro \
-v /usr/bin/runc:/usr/bin/runc:ro \
-v /usr/lib/systemd:/usr/lib/systemd:ro \
-v /var/lib:/var/lib:ro \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
docker/docker-bench-security
Run this after making configuration changes to verify your hardening is effective. Address any WARN findings that are relevant to your environment.
Monitoring Container Activity
Monitor runtime behavior to detect anomalies:
# Watch real-time Docker events
docker events --filter type=container
# Check resource usage across all containers
docker stats --no-stream
# Inspect a container's processes
docker top container_name
For production environments, consider deploying Falco, an open-source runtime security tool that uses eBPF to detect anomalous syscalls, unexpected process executions, and suspicious file access patterns inside containers.
Docker Container Security Checklist
Use this checklist to audit your Docker security posture on every Ubuntu VPS deployment:
- Images: All images scanned with Trivy before deployment — no critical CVEs
- Images: Using official or verified base images only
- Images: Versions pinned by tag or digest — no
:latestin production - Images: Multi-stage builds used to minimize final image size
- Runtime: All containers run as non-root user (USER directive or --user flag)
- Runtime: Read-only root filesystem enabled where possible
- Runtime: --cap-drop ALL applied, only needed capabilities added back
- Runtime: No containers running in --privileged mode
- Runtime: --security-opt no-new-privileges:true applied globally
- Runtime: Default seccomp and AppArmor profiles active (not disabled)
- Network: Container ports bound to 127.0.0.1 unless external access is required
- Network: Internal Docker networks used for backend services
- Network: UFW/Docker interaction addressed (iptables bypass mitigated)
- Network: No database ports published to the internet
- Resources: Memory limits set on all containers
- Resources: CPU limits configured for production containers
- Resources: PID limits set to prevent fork bombs
- Daemon: icc: false in daemon.json
- Daemon: User namespace remapping enabled
- Daemon: Log rotation configured to prevent disk exhaustion
- Daemon: Docker socket not mounted into containers unnecessarily
- Ongoing: Regular Trivy scans scheduled for running containers
- Ongoing: Docker Bench for Security run after configuration changes
- Ongoing: Container logs aggregated and monitored
Prefer Managed Container Security?
Securing Docker containers is an ongoing commitment. From scanning images and hardening runtime configurations to monitoring daemon activity and managing firewall interactions, every layer requires attention and expertise. If you would rather focus on building your applications than maintaining container security infrastructure, MassiveGRID offers options at every level.
A self-managed VPS gives you full control on a Proxmox-isolated platform where the hypervisor provides a hardware-level isolation boundary between your containers and other tenants — an additional security layer that pure container isolation cannot match. For workloads requiring guaranteed performance, a Dedicated VPS provides precise resource limits on dedicated CPU and RAM, ensuring your container resource constraints map directly to physical hardware without noisy-neighbor interference.
For teams that want container security managed by professionals, MassiveGRID's fully managed hosting includes ongoing container security management — image scanning, runtime hardening, firewall configuration, log monitoring, and incident response. Your containers run on hardened infrastructure with a 100% uptime SLA, and a team rated 9.5/10 by customers handles the security operations so you do not have to.