Not every dataset fits neatly into rows and columns. When your data is deeply nested, varies in structure between records, or changes shape as your application evolves, forcing it into a relational schema creates friction — migration scripts, ALTER TABLE statements, ORM gymnastics. MongoDB stores data as flexible JSON-like documents, and for the right workloads, that flexibility translates directly into development speed and operational simplicity.
This guide walks through deploying MongoDB 8.0 on an Ubuntu VPS for production use. We cover everything from installation to authentication, cache tuning for VPS environments, backup strategies, and the monitoring tools you need to keep your database healthy. If you're coming from PostgreSQL and wondering whether MongoDB is the right choice, we'll address that decision too.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
When MongoDB Is the Right Choice
MongoDB excels in specific scenarios where its document model aligns with how your application thinks about data:
- Variable-structure data: Content management systems where articles have different fields, e-commerce catalogs where product attributes vary by category, user profiles with optional and dynamic fields
- Rapid iteration: Early-stage applications where the schema changes frequently — MongoDB doesn't require migrations when you add or remove fields
- Real-time analytics: MongoDB's aggregation pipeline handles complex analytics queries on document collections without the JOIN overhead of relational databases
- IoT and event data: Time-series data with variable payloads, sensor readings with different attributes per device type
- Hierarchical data: Nested objects (comments with replies, organizational structures, product variants) stored naturally without recursive JOINs
MongoDB is not the right choice when your data is heavily relational (many-to-many relationships between entities), when you need complex multi-table transactions frequently, or when your access patterns require ad-hoc JOINs across different entity types.
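As a concrete illustration of the hierarchical case above, a blog post and its comment thread can live in a single document. This is a hypothetical sketch (field names are illustrative, not from any particular schema):

```python
# A self-contained document: the post, its comments, and nested replies
# travel together, so one read fetches the whole thread with no JOINs.
post = {
    "title": "Why documents?",
    "author": "ann",
    "comments": [
        {
            "author": "bob",
            "text": "Great post",
            "replies": [{"author": "carol", "text": "Agreed"}],
        }
    ],
}

# Reaching a nested reply is plain path traversal
print(post["comments"][0]["replies"][0]["author"])  # carol
```

In a relational schema the same thread would need a self-referencing comments table and a recursive query to reassemble.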
MongoDB vs PostgreSQL: Decision Framework
If you've already set up PostgreSQL on your Ubuntu VPS, you know how powerful a relational database is. Here's a practical decision framework:
Choose PostgreSQL when:
- Your data has clear relationships between entities (users, orders, products, payments)
- You need ACID transactions spanning multiple tables
- You're building a CRUD application with a well-defined schema
- You need complex reporting with JOINs across many tables
Choose MongoDB when:
- Your documents are self-contained (a blog post with its comments, a product with all its variants)
- Schema changes happen frequently during development
- You need horizontal scaling (sharding) for very large datasets
- Your read patterns align with document boundaries (fetch one document, get everything you need)
Many production applications use both — PostgreSQL for transactional data, MongoDB for content and analytics. The databases serve different purposes and are not interchangeable.
Prerequisites
A Cloud VPS with 2 vCPU and 4 GB RAM gives MongoDB's WiredTiger cache roughly 1.5 GB by default (50% of the RAM above 1 GB), leaving about 2.5 GB for the OS and your application. This comfortably handles databases up to 50-100 GB with proper indexing. You need:
- Ubuntu 24.04 LTS with root or sudo access
- At least 2 GB RAM (WiredTiger needs room to work)
- NVMe storage (MongoDB is I/O-intensive — SSD is mandatory for production)
- UFW or another firewall configured
Installing MongoDB 8.0
MongoDB provides official packages for Ubuntu. Do not use the mongodb package from Ubuntu's default repositories — it is outdated and unsupported. Use MongoDB's official APT repository:
# Import MongoDB's GPG key
curl -fsSL https://www.mongodb.org/static/pgp/server-8.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-8.0.gpg --dearmor
# Add the repository
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-8.0.gpg ] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/8.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-8.0.list
# Update and install
sudo apt update
sudo apt install -y mongodb-org
Start MongoDB and enable it on boot:
sudo systemctl start mongod
sudo systemctl enable mongod
sudo systemctl status mongod
Verify the installation:
mongosh --eval "db.runCommand({ connectionStatus: 1 })"
You should see a connection status response with "ok" : 1.
Security First: Enabling Authentication
This is the single most critical step in any MongoDB deployment. MongoDB ships with authentication disabled by default. Out of the box, anyone who can connect to port 27017 has full administrative access to every database. Countless MongoDB instances have been compromised because this step was skipped.
First, connect to MongoDB and create an admin user:
mongosh
use admin
db.createUser({
user: "admin",
pwd: passwordPrompt(), // prompts for password securely
roles: [
{ role: "userAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" },
{ role: "clusterAdmin", db: "admin" }
]
})
Now enable authentication in the MongoDB configuration:
sudo nano /etc/mongod.conf
Find or add the security section:
security:
authorization: enabled
Restart MongoDB:
sudo systemctl restart mongod
From this point forward, you must authenticate to access MongoDB:
mongosh -u admin -p --authenticationDatabase admin
Creating Application Users with Roles
Never use the admin account for application connections. Create dedicated users with the minimum required permissions for each application database:
mongosh -u admin -p --authenticationDatabase admin
// Create an application database and user
use myapp
db.createUser({
user: "myapp_user",
pwd: passwordPrompt(),
roles: [
{ role: "readWrite", db: "myapp" }
]
})
// Create a read-only user for reporting
db.createUser({
user: "myapp_readonly",
pwd: passwordPrompt(),
roles: [
{ role: "read", db: "myapp" }
]
})
Your application connection string becomes:
mongodb://myapp_user:PASSWORD@127.0.0.1:27017/myapp?authSource=myapp
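One caveat with connection strings: if the password contains characters like @, :, or /, it must be percent-encoded or the URI will not parse. A small sketch of the encoding step (the helper name is ours, not part of any driver):

```python
from urllib.parse import quote_plus

def build_mongo_uri(user: str, password: str, host: str, db: str) -> str:
    # Percent-encode credentials so reserved URI characters don't break parsing
    return (
        f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
        f"@{host}:27017/{db}?authSource={db}"
    )

print(build_mongo_uri("myapp_user", "p@ss:w0rd", "127.0.0.1", "myapp"))
# mongodb://myapp_user:p%40ss%3Aw0rd@127.0.0.1:27017/myapp?authSource=myapp
```

Official MongoDB drivers perform the same encoding when you pass credentials as separate options instead of embedding them in the URI.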
Binding to Localhost and Configuring Firewall
By default, MongoDB 8.0 binds to 127.0.0.1 — it only accepts connections from the local machine. This is the correct setting for most VPS deployments where your application runs on the same server. Verify this in /etc/mongod.conf:
net:
port: 27017
bindIp: 127.0.0.1
If you need to accept connections from other servers (for example, a separate application server), bind to the private network interface and use firewall rules as described in our security hardening guide and advanced UFW rules guide:
# Only if remote access is needed:
net:
port: 27017
bindIp: 127.0.0.1,10.0.0.5 # private IP only, never public
Lock down with UFW:
# Block MongoDB port from public access
sudo ufw deny 27017
# Allow only from specific application server (if remote access needed)
sudo ufw allow from 10.0.0.10 to any port 27017
Never bind MongoDB to 0.0.0.0 or your public IP address without authentication and firewall rules in place.
WiredTiger Cache Tuning for VPS Environments
MongoDB's WiredTiger storage engine uses an in-memory cache for frequently accessed data. By default, it claims the larger of 50% of (total RAM minus 1 GB) or 256 MB. On a 4 GB VPS, that works out to 1.5 GB for the cache.
The "50% default problem" on a VPS is that MongoDB's cache calculation doesn't account for other services running on the same machine. If your VPS also runs your application server, Nginx, and Redis, the default cache size may be too aggressive.
Tune the cache explicitly in /etc/mongod.conf:
storage:
engine: wiredTiger
wiredTiger:
engineConfig:
cacheSizeGB: 1.0 # Explicit 1 GB — adjust based on available RAM
Guidelines for VPS cache sizing:
- 2 GB RAM VPS: Set cache to 0.5 GB (leaves room for OS and application)
- 4 GB RAM VPS: Set cache to 1.0–1.5 GB
- 8 GB RAM VPS: Set cache to 2.0–3.0 GB
- 16 GB RAM VPS: Set cache to 6.0–8.0 GB
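The default formula and the guideline table above can be sanity-checked with a few lines of Python. This is a sketch of the documented sizing rule, not MongoDB's actual implementation:

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    # MongoDB's documented default: the larger of
    # 50% of (RAM - 1 GB) or 256 MB
    return max(0.5 * (ram_gb - 1), 0.25)

for ram in (2, 4, 8, 16):
    print(f"{ram} GB RAM -> default cache {default_wiredtiger_cache_gb(ram)} GB")
# 2 GB RAM -> default cache 0.5 GB
# 4 GB RAM -> default cache 1.5 GB
# 8 GB RAM -> default cache 3.5 GB
# 16 GB RAM -> default cache 7.5 GB
```

Note that the guideline numbers in the table sit deliberately below the defaults on larger instances: the explicit cacheSizeGB setting exists precisely to reclaim RAM for co-located services that the default formula knows nothing about.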
If your working set (the portion of data actively queried) exceeds the cache, WiredTiger reads from disk. On MassiveGRID's Ceph NVMe storage, disk reads are fast — but cache hits are always faster. Monitor cache usage to find the right balance:
mongosh -u admin -p --authenticationDatabase admin --eval "
db.serverStatus().wiredTiger.cache
"
Key metrics to watch: bytes currently in the cache vs maximum bytes configured, and pages read into cache vs pages written from cache. High read-into-cache rates indicate your working set exceeds the cache. When WiredTiger needs more RAM, MassiveGRID's independent resource scaling lets you add memory without changing your CPU or storage allocation.
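To turn those raw counters into a quick fill ratio, a small helper can post-process the serverStatus output. The dictionary keys are the real wiredTiger.cache field names; the sample numbers are made up for illustration:

```python
def cache_fill_pct(cache_stats: dict) -> float:
    # Percentage of the configured WiredTiger cache currently occupied
    used = cache_stats["bytes currently in the cache"]
    cap = cache_stats["maximum bytes configured"]
    return 100.0 * used / cap

sample = {
    "bytes currently in the cache": 800 * 1024**2,  # illustrative: 800 MB used
    "maximum bytes configured": 1024**3,            # illustrative: 1 GB cache
}
print(round(cache_fill_pct(sample), 1))  # 78.1
```

As a rule of thumb, WiredTiger tries to hold usage near 80% of the configured maximum; a fill ratio pinned close to 100% suggests eviction is falling behind the workload.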
Connection Pool Configuration
Application drivers maintain connection pools to MongoDB. On a VPS, each connection consumes approximately 1 MB of RAM. The default pool size in most drivers (100 connections) may be excessive for a single-server deployment.
Configure your application's connection pool based on your workload:
// Node.js (MongoDB driver)
const client = new MongoClient('mongodb://myapp_user:PASSWORD@127.0.0.1:27017/myapp', {
maxPoolSize: 20, // Max concurrent connections
minPoolSize: 5, // Keep 5 connections warm
maxIdleTimeMS: 30000, // Close idle connections after 30 seconds
waitQueueTimeoutMS: 5000
});
# Python (PyMongo)
client = MongoClient(
'mongodb://myapp_user:PASSWORD@127.0.0.1:27017/myapp',
maxPoolSize=20,
minPoolSize=5,
maxIdleTimeMS=30000
)
Monitor active connections:
mongosh -u admin -p --authenticationDatabase admin --eval "
db.serverStatus().connections
"
If the current count regularly approaches the available limit, your pool may be too large or your application is leaking connections. Most VPS workloads function well with 10-30 connections.
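That check is easy to automate. This sketch flags connection pressure from the two serverStatus fields; the 80% warning threshold is our choice, not a MongoDB default:

```python
def connection_pressure(current: int, available: int,
                        warn_ratio: float = 0.8) -> bool:
    # serverStatus().connections: current = in use, available = remaining slots
    capacity = current + available
    return current / capacity >= warn_ratio

print(connection_pressure(current=25, available=975))   # False: healthy
print(connection_pressure(current=820, available=180))  # True: investigate
```

Wired into a cron job, a True result is a good trigger for alerting before the server starts refusing connections.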
Index Management Basics
Indexes are the difference between a 50ms query and a 5-second collection scan. MongoDB creates an index on _id automatically, but every other query pattern needs explicit indexes.
// Connect to your application database
use myapp
// Create a single-field index
db.users.createIndex({ email: 1 }, { unique: true })
// Create a compound index (field order matters: put equality-match fields before sort fields)
db.orders.createIndex({ userId: 1, createdAt: -1 })
// Create a text index for search
db.articles.createIndex({ title: "text", body: "text" })
// List all indexes on a collection
db.users.getIndexes()
// Analyze a query's execution plan
db.users.find({ email: "user@example.com" }).explain("executionStats")
The explain() output tells you whether a query uses an index (IXSCAN) or performs a full collection scan (COLLSCAN). Any production query returning COLLSCAN on a large collection needs an index.
Index guidelines for VPS environments:
- Index the fields you query and sort by — but not every field. Each index consumes RAM and slows writes.
- Use compound indexes that match your query patterns instead of multiple single-field indexes.
- Monitor index size with db.collection.stats().indexSizes. If total index size approaches your WiredTiger cache, you need more RAM or fewer indexes.
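One property worth internalizing when consolidating indexes: a compound index also serves queries on any leading prefix of its keys, so { userId: 1, createdAt: -1 } makes a separate index on userId alone redundant. A minimal sketch of the prefix rule (the function is illustrative, not a MongoDB API):

```python
def index_supports(query_fields: list, index_fields: list) -> bool:
    # A compound index supports equality queries on any
    # leading prefix of its key list
    return index_fields[: len(query_fields)] == query_fields

compound = ["userId", "createdAt"]
print(index_supports(["userId"], compound))               # True: prefix
print(index_supports(["userId", "createdAt"], compound))  # True: full match
print(index_supports(["createdAt"], compound))            # False: not a prefix
```

The False case is why a query filtering only on createdAt still shows COLLSCAN in explain() despite the compound index existing.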
Backup Strategy with mongodump
MongoDB backups are non-negotiable for production. The mongodump tool creates binary exports that can be restored with mongorestore. Integrate this into your automated backup strategy.
Create a backup script:
#!/bin/bash
# /usr/local/bin/mongodb-backup.sh
BACKUP_DIR="/var/backups/mongodb"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7
mkdir -p "$BACKUP_DIR"
# Dump all databases
mongodump \
--username=admin \
--password="YOUR_ADMIN_PASSWORD" \
--authenticationDatabase=admin \
--out="$BACKUP_DIR/$TIMESTAMP"
# Compress the backup
tar -czf "$BACKUP_DIR/mongodb-$TIMESTAMP.tar.gz" -C "$BACKUP_DIR" "$TIMESTAMP"
rm -rf "$BACKUP_DIR/$TIMESTAMP"
# Remove backups older than retention period
find "$BACKUP_DIR" -name "mongodb-*.tar.gz" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: mongodb-$TIMESTAMP.tar.gz"
Make it executable and schedule with cron:
sudo chmod 700 /usr/local/bin/mongodb-backup.sh  # script contains credentials, keep it root-only
# Run daily at 3 AM
echo "0 3 * * * root /usr/local/bin/mongodb-backup.sh >> /var/log/mongodb-backup.log 2>&1" | sudo tee /etc/cron.d/mongodb-backup
To restore from a backup:
# Extract the backup
tar -xzf mongodb-20260228_030000.tar.gz
# Restore all databases
mongorestore --username=admin --password="PASSWORD" --authenticationDatabase=admin 20260228_030000/
# Restore a specific database
mongorestore --username=admin --password="PASSWORD" --authenticationDatabase=admin --db=myapp 20260228_030000/myapp/
For larger databases, consider using --oplog for point-in-time recovery with replica sets. For single-server VPS deployments, daily mongodump with a 7-day retention is a solid baseline.
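When scripting restores, you usually want the newest archive the backup job produced; the timestamp embedded in the filename makes that easy to automate. A sketch matching the naming scheme above:

```python
import re
from datetime import datetime

def latest_backup(filenames):
    # Parse mongodb-YYYYmmdd_HHMMSS.tar.gz names and return the newest one
    stamped = []
    for name in filenames:
        m = re.fullmatch(r"mongodb-(\d{8}_\d{6})\.tar\.gz", name)
        if m:
            stamped.append((datetime.strptime(m.group(1), "%Y%m%d_%H%M%S"), name))
    return max(stamped)[1] if stamped else None

archives = [
    "mongodb-20260227_030000.tar.gz",
    "mongodb-20260228_030000.tar.gz",
    "notes.txt",  # non-backup files are ignored
]
print(latest_backup(archives))
```

Sorting on the parsed datetime rather than the raw string keeps the logic correct even if the naming format ever gains variable-width fields.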
Monitoring with mongostat and mongotop
MongoDB includes two built-in monitoring tools that give you real-time visibility into database performance:
mongostat provides a high-level overview refreshed every second — similar to top for MongoDB:
mongostat --username=admin --password="PASSWORD" --authenticationDatabase=admin
Key columns to watch:
- insert/query/update/delete: Operations per second. Spikes indicate unusual activity.
- res: Resident memory usage. Should stay close to your configured cache size plus overhead.
- qrw: Queued read/write operations. Non-zero values indicate contention.
- dirty: Percentage of dirty pages in the cache. Consistently above 5% means writes are outpacing flushes.
mongotop shows which collections are consuming the most time:
mongotop --username=admin --password="PASSWORD" --authenticationDatabase=admin 5
This refreshes every 5 seconds and shows read/write time per collection. If one collection dominates read time, it likely needs better indexes or more cache. On shared cloud resources, I/O contention during cache misses can slow reads unpredictably. A dedicated VPS with guaranteed I/O ensures consistent read performance even when WiredTiger needs to fetch from disk.
For continuous monitoring, export metrics to Prometheus with the MongoDB exporter and visualize in Grafana. For quick health checks, a simple script works:
#!/bin/bash
# /usr/local/bin/mongodb-healthcheck.sh
mongosh -u admin -p "PASSWORD" --authenticationDatabase admin --quiet --eval "
const status = db.serverStatus();
print('Connections: ' + status.connections.current + '/' + status.connections.available);
print('Cache: ' + Math.round(status.wiredTiger.cache['bytes currently in the cache'] / 1024 / 1024) + ' MB');
print('Ops since startup: insert=' + status.opcounters.insert + ' query=' + status.opcounters.query);
print('Uptime: ' + Math.round(status.uptime / 3600) + ' hours');
"
Prefer Managed Database Administration?
Running MongoDB in production means staying on top of security patches, managing backup schedules, optimizing indexes as your data grows, monitoring cache hit ratios, and responding to performance degradation at 2 AM. For applications where the database is business-critical, MassiveGRID's fully managed hosting provides dedicated database administration — security updates, automated backups with tested restores, index optimization, performance tuning, and 24/7 monitoring — so your data stays fast, safe, and available without consuming your engineering time.