Every time you upload a file to AWS S3, you pay for the storage. When you download it, you pay egress fees. When your application makes API calls, you pay per request. For small projects, these costs are trivial. But once you're storing terabytes of backups, serving media assets, or running data pipelines, S3 costs compound into hundreds or thousands of dollars monthly — with unpredictable spikes from egress charges that are notoriously difficult to forecast. MinIO is an S3-compatible object storage server that runs on your own infrastructure, giving you the same API, the same client libraries, and the same ecosystem compatibility — with zero egress fees and predictable costs.
Self-hosting your object storage makes sense when you need data sovereignty (your objects stay on infrastructure you control), cost predictability (flat monthly VPS cost regardless of transfer), or integration testing (a local S3-compatible endpoint for development). This guide covers deploying MinIO on Ubuntu 24.04, configuring it for production use, and integrating it with real applications.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
MinIO vs AWS S3 — When Self-Hosting Makes Sense
MinIO implements the S3 API specification, which means any application or tool that works with AWS S3 works with MinIO. The AWS CLI, Boto3, the JavaScript S3 SDK, rclone, Terraform — they all work with MinIO by changing a single endpoint URL.
| Consideration | AWS S3 | Self-Hosted MinIO |
|---|---|---|
| Storage cost | $0.023/GB/month (Standard) | VPS storage cost (included in plan) |
| Egress fees | $0.09/GB after first 100GB | None (included in VPS transfer) |
| API request cost | $0.005 per 1,000 PUT, $0.0004 per 1,000 GET | None |
| Data location | AWS regions (you choose region, AWS controls infra) | Your server, your datacenter, your control |
| Durability | 99.999999999% (11 nines) | Depends on your storage setup (RAID, backups) |
| Availability | 99.99% | Depends on your infrastructure |
| Maintenance | None (fully managed) | You manage updates, backups, monitoring |
| Ecosystem | Deep AWS integration (Lambda, CloudFront, etc.) | S3 API compatible (most tools work) |
| Compliance | SOC2, HIPAA, GDPR (AWS manages) | You control the full compliance story |
Self-hosting MinIO makes sense when:
- Egress costs are significant — serving files frequently, CDN origin pulls, or data transfer between services
- Data sovereignty is required — regulations require data stays on specific infrastructure or in specific jurisdictions
- Cost predictability matters — a flat VPS cost is easier to budget than variable S3 billing
- Development and testing — a local S3 endpoint eliminates AWS costs during development
- Backup storage — storing server backups in S3-compatible storage without paying per-GB
Stick with AWS S3 when:
- You need 11-nines durability without managing replication yourself
- You need tight integration with other AWS services (Lambda triggers, CloudFront, etc.)
- Storage volumes are small and egress is minimal (S3 is cheaper at small scale)
- You don't want to manage storage infrastructure at all
Prerequisites
For this guide, you need:
- An Ubuntu 24.04 VPS — see our Ubuntu VPS setup guide
- Docker and Docker Compose installed — follow our Docker installation guide
- A domain or subdomain for the MinIO API (e.g.,
s3.example.com) - A domain or subdomain for the MinIO Console (e.g.,
minio.example.com) - Nginx installed — see our Nginx reverse proxy guide
MinIO has two network endpoints: the S3 API (port 9000) used by applications and tools, and the web console (port 9001) used for browser-based administration. Using separate subdomains for each is the cleanest approach.
Resource requirements: MinIO itself is lightweight — a Cloud VPS with 2 vCPU / 2GB RAM is plenty for the MinIO process. Storage is the variable. Scale storage independently as your object store grows — MassiveGRID lets you add NVMe storage without changing CPU or RAM.
Docker Compose Setup
Create a directory for the MinIO deployment:
sudo mkdir -p /opt/minio
sudo nano /opt/minio/docker-compose.yml
Add the following configuration:
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    restart: always
    command: server /data --console-address ":9001"
    ports:
      - "127.0.0.1:9000:9000"
      - "127.0.0.1:9001:9001"
    volumes:
      - minio_data:/data
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
      MINIO_BROWSER_REDIRECT_URL: https://minio.example.com
      MINIO_SERVER_URL: https://s3.example.com
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  minio_data:
    driver: local
Create an environment file to store credentials securely:
sudo nano /opt/minio/.env
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=your-strong-password-minimum-8-chars
Set restrictive permissions on the environment file:
sudo chmod 600 /opt/minio/.env
Key configuration details:
- `server /data` — tells MinIO to use `/data` as the storage directory
- `--console-address ":9001"` — explicitly sets the console port (otherwise it's random)
- `127.0.0.1:9000` and `127.0.0.1:9001` — bind to localhost only (Nginx handles external access)
- `MINIO_BROWSER_REDIRECT_URL` — tells MinIO the public URL of the console (for OAuth and redirects)
- `MINIO_SERVER_URL` — tells MinIO the public URL of the API endpoint
Deploy MinIO:
cd /opt/minio
sudo docker compose up -d
Verify it's running:
sudo docker compose logs -f minio
You should see output indicating the API and Console are listening. Press Ctrl+C to exit.
Test the API endpoint locally:
curl -s http://127.0.0.1:9000/minio/health/live
A 200 response confirms the S3 API is operational.
Nginx Reverse Proxy with SSL
MinIO requires two Nginx server blocks — one for the S3 API and one for the web console. Both need SSL and specific proxy settings for MinIO to function correctly.
S3 API Proxy
Create the API configuration:
sudo nano /etc/nginx/sites-available/s3.example.com
upstream minio_api {
    server 127.0.0.1:9000;
    keepalive 64;
}

server {
    listen 80;
    server_name s3.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name s3.example.com;

    ssl_certificate /etc/letsencrypt/live/s3.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/s3.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # Allow large file uploads (adjust based on your needs)
    client_max_body_size 0;

    # Disable buffering for streaming uploads
    proxy_buffering off;
    proxy_request_buffering off;

    location / {
        proxy_pass http://minio_api;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;

        # WebSocket support (for S3 Select and streaming)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts for large file operations
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;

        # Chunked transfer encoding support
        chunked_transfer_encoding on;
    }
}
Important settings explained:
- `client_max_body_size 0` — disables the upload size limit entirely. MinIO handles its own upload limits. Without this, Nginx rejects uploads over 1MB.
- `proxy_buffering off` and `proxy_request_buffering off` — prevent Nginx from buffering uploads to disk. This is critical for large file uploads — without these settings, Nginx writes the entire upload to a temp file before forwarding it to MinIO, doubling disk I/O and memory usage.
- `chunked_transfer_encoding on` — enables streaming uploads, which S3 multipart uploads rely on.
Console Proxy
Create the console configuration:
sudo nano /etc/nginx/sites-available/minio.example.com
upstream minio_console {
    server 127.0.0.1:9001;
    keepalive 64;
}

server {
    listen 80;
    server_name minio.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name minio.example.com;

    ssl_certificate /etc/letsencrypt/live/minio.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/minio.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    client_max_body_size 100M;

    location / {
        proxy_pass http://minio_console;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (console uses WebSocket for real-time updates)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_read_timeout 300;
    }
}
Enable both sites and obtain SSL certificates (see our Let's Encrypt guide):
sudo ln -s /etc/nginx/sites-available/s3.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/minio.example.com /etc/nginx/sites-enabled/
# Get certificates (run separately for each domain)
sudo certbot --nginx -d s3.example.com
sudo certbot --nginx -d minio.example.com
sudo nginx -t
sudo systemctl reload nginx
Now access:
- MinIO Console: https://minio.example.com
- S3 API endpoint: https://s3.example.com
Creating Buckets, Users, and Access Policies
Log into the MinIO Console at https://minio.example.com using the root credentials from your .env file.
Creating Buckets
Navigate to Buckets → Create Bucket:
- Enter a bucket name (lowercase, no spaces — e.g., app-uploads, backups, media)
- Optionally enable versioning (keeps previous versions of overwritten objects)
- Optionally configure object locking (immutable objects for compliance)
- Click "Create Bucket"
Common bucket structure for a web application:
app-uploads/ # User-uploaded files
media/ # Images, videos, audio
backups/ # Server and database backups
static-assets/ # CSS, JS, fonts (rarely changed)
logs/ # Archived log files
Creating Service Accounts (Access Keys)
Never use root credentials in your applications. Create dedicated access keys with limited permissions.
Navigate to Access Keys → Create Access Key:
- MinIO generates a random Access Key and Secret Key
- Optionally set an expiration date
- Optionally attach a policy to restrict what this key can access
- Click "Create"
- Save both keys immediately — the secret key is shown only once
Creating Access Policies
MinIO uses the same policy language as AWS IAM. Navigate to Policies → Create Policy.
Example — read/write access to a single bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::app-uploads",
        "arn:aws:s3:::app-uploads/*"
      ]
    }
  ]
}
Example — read-only access (for a CDN or public access):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::media/*"
      ]
    }
  ]
}
Example — backup-only access (write but no delete — for security):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::backups",
        "arn:aws:s3:::backups/*"
      ]
    }
  ]
}
Attach policies to access keys during creation or by editing the key afterward.
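If you manage several buckets, generating these policy documents programmatically keeps them consistent. A minimal Python sketch (the function name and the chosen action list are illustrative, not part of MinIO's API) that builds the read/write policy shown above for any bucket:

```python
import json

def bucket_rw_policy(bucket: str) -> dict:
    """Build an IAM-style read/write policy scoped to a single bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:ListBucket",
                ],
                # Both ARNs are needed: the bucket itself (for ListBucket)
                # and its objects (for Get/Put/Delete).
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

print(json.dumps(bucket_rw_policy("app-uploads"), indent=2))
```

The JSON output can be pasted into the console's policy editor exactly as in the examples above.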
Using the MinIO Client (mc) for Administration
The MinIO Client (mc) is a command-line tool that provides a familiar Unix-like interface for object storage operations. It's more powerful than the web console for bulk operations and scripting.
Installing mc
sudo curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  --create-dirs \
  -o /usr/local/bin/mc
sudo chmod +x /usr/local/bin/mc
mc --version
Configuring an Alias
Set up a connection alias to your MinIO server:
mc alias set myminio https://s3.example.com minioadmin your-strong-password
Test the connection:
mc admin info myminio
This displays server version, uptime, storage capacity, and network information.
Common mc Commands
The mc syntax mirrors standard Unix commands:
# List all buckets
mc ls myminio
# List objects in a bucket
mc ls myminio/app-uploads
# Upload a file
mc cp /path/to/file.jpg myminio/media/images/file.jpg
# Upload a directory recursively
mc cp --recursive /var/www/static/ myminio/static-assets/
# Download a file
mc cp myminio/media/images/file.jpg /tmp/file.jpg
# Remove a file
mc rm myminio/media/images/old-file.jpg
# Remove a bucket and all contents (destructive!)
mc rb --force myminio/temp-bucket
# Get bucket disk usage
mc du myminio/backups
# Find objects matching a pattern
mc find myminio/backups --name "*.sql.gz" --older-than 30d
# Mirror a local directory to MinIO (like rsync)
mc mirror /var/www/uploads/ myminio/app-uploads/
# Watch for real-time changes
mc watch myminio/app-uploads
Batch Operations with mc
Delete old backup files:
# Find and delete backups older than 90 days
mc find myminio/backups --name "*.tar.gz" --older-than 90d --exec "mc rm {}"
Sync a local directory to MinIO (upload only changed files):
mc mirror --overwrite --remove /var/www/assets/ myminio/static-assets/
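The same retention logic can live in application code instead of shell. A sketch of the age filter as a pure Python function; the commented Boto3 usage below it assumes the `s3_client` from the Python integration section and a `backups` bucket, both placeholders:

```python
from datetime import datetime, timedelta, timezone

def older_than(objects, days):
    """Filter S3 listing entries (dicts with 'Key' and 'LastModified')
    down to the keys last modified more than `days` days ago."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [o["Key"] for o in objects if o["LastModified"] < cutoff]

# With a real client this would feed from list_objects_v2 pages, e.g.:
# paginator = s3_client.get_paginator("list_objects_v2")
# for page in paginator.paginate(Bucket="backups"):
#     for key in older_than(page.get("Contents", []), 90):
#         s3_client.delete_object(Bucket="backups", Key=key)
```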
Integrating with Applications
Because MinIO speaks the S3 API, integrating it into applications is identical to using AWS S3 — just change the endpoint URL.
Node.js Integration (AWS SDK v3)
Install the AWS S3 SDK:
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
Create a MinIO client and perform operations:
import { S3Client, PutObjectCommand, GetObjectCommand, ListObjectsV2Command } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { readFileSync } from 'fs';

// Initialize the S3 client pointing to MinIO
const s3Client = new S3Client({
  endpoint: 'https://s3.example.com',
  region: 'us-east-1', // MinIO ignores this but the SDK requires it
  credentials: {
    accessKeyId: 'your-access-key',
    secretAccessKey: 'your-secret-key',
  },
  forcePathStyle: true, // Required for MinIO
});

// Upload a file
async function uploadFile(bucketName, objectKey, filePath) {
  const fileContent = readFileSync(filePath);
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: objectKey,
    Body: fileContent,
    ContentType: 'image/jpeg', // Set appropriate MIME type
  });
  const response = await s3Client.send(command);
  console.log('Upload successful:', response.ETag);
  return response;
}

// Generate a presigned URL (for temporary access to private objects)
async function getPresignedUrl(bucketName, objectKey, expiresIn = 3600) {
  const command = new GetObjectCommand({
    Bucket: bucketName,
    Key: objectKey,
  });
  const url = await getSignedUrl(s3Client, command, { expiresIn });
  console.log('Presigned URL:', url);
  return url;
}

// List objects in a bucket
async function listObjects(bucketName, prefix = '') {
  const command = new ListObjectsV2Command({
    Bucket: bucketName,
    Prefix: prefix,
  });
  const response = await s3Client.send(command);
  response.Contents?.forEach(obj => {
    console.log(`${obj.Key} - ${obj.Size} bytes - ${obj.LastModified}`);
  });
  return response.Contents;
}

// Usage
await uploadFile('media', 'images/photo.jpg', '/tmp/photo.jpg');
const url = await getPresignedUrl('media', 'images/photo.jpg', 7200); // 2 hours
await listObjects('media', 'images/');
The critical setting is forcePathStyle: true. AWS S3 uses virtual-hosted-style URLs (bucket.s3.amazonaws.com), but MinIO uses path-style URLs (s3.example.com/bucket). Without this setting, the SDK tries to resolve bucket.s3.example.com, which doesn't exist.
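To make the two addressing styles concrete, here is a small illustrative sketch — pure string construction, no SDK involved — showing the URL each style produces for the same endpoint, bucket, and key:

```python
from urllib.parse import urlsplit

def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style: the bucket appears in the path (what MinIO expects)."""
    return f"{endpoint.rstrip('/')}/{bucket}/{key}"

def virtual_hosted_url(endpoint: str, bucket: str, key: str) -> str:
    """Virtual-hosted-style: the bucket becomes a subdomain (AWS default)."""
    parts = urlsplit(endpoint)
    return f"{parts.scheme}://{bucket}.{parts.netloc}/{key}"

print(path_style_url("https://s3.example.com", "media", "images/photo.jpg"))
# → https://s3.example.com/media/images/photo.jpg
print(virtual_hosted_url("https://s3.example.com", "media", "images/photo.jpg"))
# → https://media.s3.example.com/images/photo.jpg
```

The second URL only works if a wildcard DNS record and certificate exist for `*.s3.example.com` — which is why `forcePathStyle: true` is the simpler choice for a single-domain MinIO deployment.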
Python Integration (Boto3)
Install Boto3:
pip install boto3
Python integration:
import boto3
from botocore.client import Config

# Initialize the S3 client for MinIO
s3_client = boto3.client(
    's3',
    endpoint_url='https://s3.example.com',
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key',
    config=Config(signature_version='s3v4'),
    region_name='us-east-1'  # Required but ignored by MinIO
)

# Upload a file
def upload_file(bucket, key, file_path):
    s3_client.upload_file(
        file_path, bucket, key,
        ExtraArgs={'ContentType': 'image/jpeg'}
    )
    print(f"Uploaded {file_path} to {bucket}/{key}")

# Download a file
def download_file(bucket, key, destination):
    s3_client.download_file(bucket, key, destination)
    print(f"Downloaded {bucket}/{key} to {destination}")

# Generate a presigned URL
def get_presigned_url(bucket, key, expires_in=3600):
    url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket, 'Key': key},
        ExpiresIn=expires_in
    )
    return url

# List objects with prefix filtering
def list_objects(bucket, prefix=''):
    response = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in response.get('Contents', []):
        print(f"  {obj['Key']} - {obj['Size']} bytes - {obj['LastModified']}")

# Upload with metadata
def upload_with_metadata(bucket, key, file_path, metadata):
    s3_client.upload_file(
        file_path, bucket, key,
        ExtraArgs={
            'Metadata': metadata,
            'ContentType': 'application/pdf'
        }
    )

# Usage
upload_file('media', 'images/photo.jpg', '/tmp/photo.jpg')
url = get_presigned_url('media', 'images/photo.jpg', 7200)
print(f"Presigned URL: {url}")
list_objects('media', 'images/')
Presigned URLs — Secure Temporary Access
Presigned URLs are one of the most useful S3/MinIO features. They generate a temporary URL that grants access to a specific object for a limited time — without exposing your access keys or making the bucket public.
Common use cases:
- File downloads — generate a 1-hour download link for a user, after verifying their permissions in your application
- Direct uploads — generate a presigned PUT URL so clients upload directly to MinIO without routing through your application server
- Image serving — generate presigned URLs for private images in your application
Presigned PUT URL for direct client uploads (Node.js):
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

async function getUploadUrl(bucketName, objectKey, contentType, expiresIn = 600) {
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: objectKey,
    ContentType: contentType,
  });
  const url = await getSignedUrl(s3Client, command, { expiresIn });
  return url;
}

// Your API endpoint returns this URL to the client
// Client then PUTs directly to MinIO — no file passes through your server
const uploadUrl = await getUploadUrl('app-uploads', 'user-123/avatar.png', 'image/png');
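Before handing out an upload URL, most applications validate the request and derive a server-controlled object key, so clients cannot choose arbitrary paths or content types. A Python sketch of that gatekeeping step (the allowed-types table and the `user-{id}/` key layout are illustrative choices, not anything MinIO requires):

```python
import re

# Hypothetical whitelist: MIME type -> enforced file extension
ALLOWED_TYPES = {"image/png": ".png", "image/jpeg": ".jpg"}

def build_upload_key(user_id: str, filename: str, content_type: str) -> str:
    """Derive a safe, namespaced object key for a user upload."""
    if content_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported content type: {content_type}")
    # Keep only a sanitized basename; the extension comes from the MIME type.
    stem = re.sub(r"[^a-zA-Z0-9_-]", "_", filename.rsplit(".", 1)[0])
    return f"user-{user_id}/{stem}{ALLOWED_TYPES[content_type]}"

# The result is what you would pass as Key when presigning the PUT, e.g.:
# s3_client.generate_presigned_url("put_object",
#     Params={"Bucket": "app-uploads", "Key": key, "ContentType": content_type},
#     ExpiresIn=600)
print(build_upload_key("123", "my avatar!.png", "image/png"))
# → user-123/my_avatar_.png
```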
Using MinIO as a Backup Destination
MinIO is an excellent destination for server backups. Combined with rclone, you get incremental, encrypted backups to S3-compatible storage. See our backup automation guide for comprehensive strategies.
Configuring rclone with MinIO
Install rclone:
sudo apt install rclone
Configure a MinIO remote:
rclone config
Select n for new remote, then:
- Name: minio
- Storage type: s3 (Amazon S3 Compliant)
- Provider: Minio
- Access Key: your MinIO access key
- Secret Key: your MinIO secret key
- Endpoint: https://s3.example.com
Or create the configuration directly:
mkdir -p ~/.config/rclone
nano ~/.config/rclone/rclone.conf
[minio]
type = s3
provider = Minio
access_key_id = your-access-key
secret_access_key = your-secret-key
endpoint = https://s3.example.com
Backup Script Using rclone and MinIO
#!/bin/bash
# Backup script using rclone to MinIO
TIMESTAMP=$(date +%Y%m%d-%H%M)
BACKUP_DIR="/tmp/backup-$TIMESTAMP"
MINIO_BUCKET="minio:backups/server-hostname"
mkdir -p "$BACKUP_DIR"
# Database backup
mysqldump -u root --all-databases --single-transaction | \
gzip > "$BACKUP_DIR/all-databases-$TIMESTAMP.sql.gz"
# Config files
tar czf "$BACKUP_DIR/etc-$TIMESTAMP.tar.gz" \
/etc/nginx/ /etc/letsencrypt/ /opt/*/docker-compose.yml
# Sync to MinIO
rclone copy "$BACKUP_DIR/" "$MINIO_BUCKET/$TIMESTAMP/" \
--progress \
--transfers 4
# Clean up local temp files
rm -rf "$BACKUP_DIR"
# Remove remote backups older than 30 days
rclone delete "$MINIO_BUCKET" \
--min-age 30d
echo "Backup completed: $MINIO_BUCKET/$TIMESTAMP/"
Schedule with cron:
sudo crontab -e
0 3 * * * /opt/scripts/minio-backup.sh >> /var/log/minio-backup.log 2>&1
Bucket Policies for Public Access
If you need to serve files publicly (e.g., a media bucket for a website), set a public-read bucket policy. Using the mc client:
# Make an entire bucket publicly readable
mc anonymous set download myminio/public-assets
Or set a more specific policy via the console:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["*"]},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::public-assets/*"]
    }
  ]
}
Objects in this bucket are now accessible without authentication at https://s3.example.com/public-assets/filename.jpg.
Security warning: Be extremely careful with public bucket policies. Only apply them to buckets that are genuinely intended for public access. Never make a backup bucket or application data bucket publicly accessible.
Lifecycle Rules and Object Expiration
MinIO supports S3 lifecycle rules to automatically delete or transition objects after a specified period. This is essential for log archives, temporary files, and backup rotation.
Using the mc client, set an expiration rule:
# Expire (delete) objects in the logs bucket after 90 days
mc ilm rule add myminio/logs --expiry-days 90
# View lifecycle rules
mc ilm rule ls myminio/logs
# Expire objects with a specific prefix after 30 days
mc ilm rule add myminio/backups --prefix "daily/" --expiry-days 30
This automates storage cleanup without manual intervention — logs and temporary backups are deleted after their retention period.
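The same rules can be applied from application code. Boto3's `put_bucket_lifecycle_configuration` accepts a rules document like the one this sketch builds; the rule IDs and prefixes below are illustrative:

```python
def expiry_rule(rule_id: str, prefix: str, days: int) -> dict:
    """Build one S3 lifecycle rule that expires objects under a prefix."""
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},      # "" matches the whole bucket
        "Expiration": {"Days": days},
    }

lifecycle = {"Rules": [
    expiry_rule("expire-logs", "", 90),
    expiry_rule("expire-daily-backups", "daily/", 30),
]}

# Applied with the Boto3 client configured earlier in this guide:
# s3_client.put_bucket_lifecycle_configuration(
#     Bucket="logs", LifecycleConfiguration=lifecycle)
```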
Monitoring MinIO
MinIO exposes a Prometheus-compatible metrics endpoint:
curl https://s3.example.com/minio/v2/metrics/cluster
Key metrics to monitor:
- `minio_node_disk_total_bytes` — total disk capacity
- `minio_node_disk_used_bytes` — current disk usage
- `minio_s3_requests_total` — total API requests by type
- `minio_s3_traffic_received_bytes` — inbound data transfer
- `minio_s3_traffic_sent_bytes` — outbound data transfer
- `minio_node_process_resident_memory_bytes` — MinIO process memory usage
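For a quick check without a full Prometheus stack, you can fetch the endpoint and pull out individual samples yourself. A simplified sketch that parses the Prometheus text format for the disk metrics named above (it ignores labels and assumes the metric names are present; the sample text is made up for illustration):

```python
def metric_value(exposition: str, name: str) -> float:
    """Return the first sample value for a metric in Prometheus text output."""
    for line in exposition.splitlines():
        if line.startswith(name):  # comment lines start with '#', never match
            return float(line.split()[-1])
    raise KeyError(name)

# Illustrative exposition text; in practice this would come from
# an HTTP GET of https://s3.example.com/minio/v2/metrics/cluster
sample = """\
# TYPE minio_node_disk_used_bytes gauge
minio_node_disk_used_bytes 52428800
minio_node_disk_total_bytes 1073741824
"""
used = metric_value(sample, "minio_node_disk_used_bytes")
total = metric_value(sample, "minio_node_disk_total_bytes")
print(f"disk usage: {used / total:.1%}")
# → disk usage: 4.9%
```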
If you've set up Prometheus and Grafana (see our monitoring guide), add MinIO as a scrape target in your Prometheus configuration:
scrape_configs:
  - job_name: 'minio'
    metrics_path: /minio/v2/metrics/cluster
    scheme: https
    bearer_token: 'your-minio-metrics-token'
    static_configs:
      - targets: ['s3.example.com']
MinIO provides a pre-built Grafana dashboard for visualizing these metrics — search for "MinIO Dashboard" in the Grafana dashboard library.
Storage Growth and Independent Scaling
Object storage grows organically. You start with a few gigabytes, and within months, you're storing hundreds of gigabytes of user uploads, backups, and media files. The key is being able to add storage without over-provisioning CPU and RAM.
Scale storage independently: MinIO itself is lightweight (2 vCPU / 2GB RAM is plenty) — storage is the variable. On a MassiveGRID Cloud VPS, scale NVMe storage independently as your object store grows, without paying for CPU or RAM you don't need.
Monitor your storage usage regularly:
# Check overall MinIO storage usage
mc admin info myminio
# Check individual bucket sizes
mc du myminio/app-uploads
mc du myminio/backups
mc du myminio/media
# Find the largest objects
mc find myminio/app-uploads --larger-than 100MB
Updating MinIO
MinIO releases updates frequently. Update via Docker Compose:
cd /opt/minio
sudo docker compose pull
sudo docker compose up -d
Verify the update:
sudo docker compose logs --tail 20 minio
mc admin info myminio
MinIO handles rolling updates gracefully — your data is untouched in the Docker volume.
Security Best Practices
1. Never Expose MinIO Ports Directly
Always use Nginx as a reverse proxy. The Docker Compose configuration above binds to 127.0.0.1 only, which prevents direct access. Verify with:
sudo ss -tlnp | grep -E '9000|9001'
Both ports should show 127.0.0.1, not 0.0.0.0.
2. Use Access Keys with Minimal Permissions
Create separate access keys for each application or service, each with a policy that grants only the permissions it needs. Never use root credentials in application code.
3. Enable Bucket Versioning for Critical Data
Versioning protects against accidental deletions and overwrites:
mc version enable myminio/app-uploads
With versioning enabled, deleted objects are soft-deleted (recoverable), and overwritten objects retain their previous versions.
4. Encrypt Data at Rest (Optional)
MinIO supports server-side encryption. For environments with compliance requirements, enable it in the MinIO configuration by adding encryption environment variables to your Docker Compose file.
5. Restrict Console Access
The MinIO Console provides full administrative access. Consider adding IP-based restrictions in the Nginx configuration for the console virtual host, similar to what we describe in our security hardening guide.
High I/O Workloads
Object storage operations — especially concurrent uploads and downloads — are I/O intensive. When multiple applications simultaneously read and write objects, the storage subsystem becomes the bottleneck.
Dedicated I/O performance: Frequent concurrent S3 API calls need dedicated I/O. MassiveGRID Dedicated VPS ensures object storage operations don't compete with other tenants for storage bandwidth. Dedicated NVMe resources provide consistent read/write performance even under heavy concurrent access.
Troubleshooting Common Issues
Upload Fails with "413 Request Entity Too Large"
This is Nginx rejecting the upload, not MinIO. Ensure your Nginx API proxy configuration includes:
client_max_body_size 0;
Restart Nginx after changing this.
"Access Denied" When Using AWS SDK
Common causes:
- Missing `forcePathStyle: true` (Node.js) or incorrect endpoint format
- Wrong credentials — verify the access key and secret key are correct
- Insufficient policy permissions — the access key's policy doesn't include the required actions for the bucket
- Clock skew — S3 signatures are time-sensitive. Ensure your server clock is synchronized:

timedatectl status
Console Login Works but Shows No Buckets
Check that MINIO_SERVER_URL and MINIO_BROWSER_REDIRECT_URL in the Docker Compose environment match your actual domains. Mismatched URLs cause the console to connect to the wrong API endpoint.
Slow Upload Performance
Check Nginx proxy buffering settings. If proxy_buffering is on (the default), Nginx buffers the entire upload before forwarding, which dramatically slows large file uploads. Set proxy_buffering off and proxy_request_buffering off in the API server block.
Data Sovereignty: Your Objects, Your Infrastructure
When you store data in AWS S3, your objects reside on AWS-controlled infrastructure in AWS-managed datacenters. You choose a region, but AWS manages the physical and logical access. For many use cases, this is fine. But for organizations subject to data residency regulations (GDPR, HIPAA, PDPA), or companies that simply want to know exactly where their data lives, self-hosted MinIO provides complete control.
Your objects are stored on your VPS, in the datacenter you chose, on infrastructure you control. No third-party has access to your data. No vendor can change terms of service or pricing structures. No surprise egress charges appear on your bill because a CDN pulled more data than expected.
This level of control comes with responsibility — you manage backups, availability, and security. But for organizations where data sovereignty is a requirement rather than a preference, self-hosted object storage is not optional; it's mandatory.
Summary
MinIO gives you a production-grade, S3-compatible object storage server that you fully control. Here's what we covered:
- Deploy MinIO with Docker Compose — a single container with persistent storage
- Configure Nginx reverse proxy with SSL for both the S3 API and web console
- Create buckets, access keys, and IAM-compatible policies for least-privilege access
- Use the MinIO Client (
mc) for command-line administration and automation - Integrate with applications using standard AWS SDKs (Node.js and Python examples)
- Set up presigned URLs for secure temporary file access
- Use MinIO as a backup destination with rclone
- Configure lifecycle rules for automatic object expiration
MinIO works seamlessly with the rest of your self-hosted infrastructure. If you're running Docker containers managed by Portainer, MinIO provides the object storage layer. If you're running a Ghost blog, MinIO can store image assets. If you're backing up PostgreSQL databases (see our PostgreSQL guide), MinIO is an ideal backup target. The S3 API is the universal interface — once you're running MinIO, every tool that speaks S3 works with your self-hosted storage.