Every application has secrets — database passwords, API keys, encryption tokens, TLS certificates. In the early days of a project, developers stuff these into .env files, pass them through environment variables, or worse, hardcode them directly into source code. This works when you have one server and one developer. It falls apart the moment you scale to multiple services, multiple environments, and multiple team members who all need different levels of access to different credentials. The question stops being "where do I put my database password?" and becomes "how do I manage hundreds of secrets across dozens of services with auditability, rotation, and access control?" That question has a definitive answer: HashiCorp Vault.
Vault is a secrets management platform that centralizes, encrypts, and controls access to sensitive data. It provides a unified API for storing and retrieving secrets, dynamic credential generation, encryption as a service, and detailed audit logging of every access event. Running Vault on an Ubuntu VPS gives you full control over your secrets infrastructure without relying on third-party SaaS platforms that may not meet your compliance requirements or budget constraints.
This guide walks through a complete Vault deployment on Ubuntu — from installation and configuration through practical integration patterns that you can use in production today.
Why Centralized Secrets Management Matters
The .env file approach has a deceptive simplicity. You create a file, add your variables, load them at startup, and add .env to your .gitignore. Problem solved — until it isn't.
Consider what happens at scale. You have a staging environment, a production environment, and a development environment. Each has its own database credentials, API keys, and service tokens. You deploy across four servers with three microservices each. That's twelve .env files you need to keep synchronized. When the database password rotates (which it should, regularly), you need to update it in twelve places, then restart twelve services. Miss one, and you have an outage. Forget which file has the old credential, and you're grepping through SSH sessions at 2 AM.
The problems compound further. Who accessed the production database credentials last Tuesday? With .env files, you have no idea. A developer leaves the team — which secrets did they have access to? You'd need to rotate everything because there's no access control layer. A secret leaks in a log file — when did it happen and what was exposed? No audit trail exists to answer that question.
Centralized secrets management solves all of these problems by providing a single source of truth, granular access control, automatic rotation, complete audit logging, and encryption at rest and in transit.
Vault vs Infisical vs Doppler
The secrets management space has several contenders. Here's how they compare for self-hosted deployments on a VPS.
HashiCorp Vault is the industry standard. It's source-available (relicensed from MPL to BSL in 2023), supports dynamic secret generation, has dozens of auth methods and secrets engines, and can scale from a single node to a multi-datacenter cluster. The tradeoff is complexity — Vault has a significant learning curve and operational overhead. It's the right choice when you need dynamic credentials, fine-grained policies, or compliance-grade audit logging.
Infisical is an open-source alternative focused on developer experience. It has a cleaner UI, native integrations with CI/CD platforms, and simpler setup. However, it lacks Vault's dynamic secret generation, has fewer auth backends, and its self-hosted version requires PostgreSQL and Redis as dependencies. Good for teams that primarily need static secret storage and injection.
Doppler is SaaS-only with no self-hosted option. It excels at environment variable synchronization and has excellent CI/CD integrations, but you're trusting a third party with every secret in your infrastructure. For organizations with data sovereignty requirements, this is a non-starter.
For a self-hosted deployment on a VPS where you need maximum flexibility and control, Vault remains the strongest choice. The rest of this guide focuses exclusively on Vault.
Prerequisites
You need an Ubuntu VPS with at least 1 vCPU, 512 MB RAM, and 10 GB of storage. Vault's file storage backend uses under 256 MB of RAM for typical workloads, making it well-suited to a MassiveGRID VPS even at entry-level configurations. You should have root or sudo access, a domain name pointed at your server (for TLS), and basic familiarity with the Linux command line.
Update your system before proceeding:
sudo apt update && sudo apt upgrade -y
Installing Vault
You have two installation paths: the official binary or Docker. Both are production-viable.
Option A: Official Binary
HashiCorp provides an APT repository for Ubuntu. Add it and install Vault:
sudo apt install -y gpg wget
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install -y vault
Verify the installation:
vault --version
The APT package creates a vault system user, sets up the configuration directory at /etc/vault.d/, and installs a systemd service unit.
Option B: Docker
If you prefer containerized deployments, pull the official image:
docker pull hashicorp/vault:latest
Create a directory structure for persistent data:
sudo mkdir -p /opt/vault/{config,data,logs}
sudo chown -R 100:1000 /opt/vault   # uid:gid of the vault user inside the official image
We'll create the Docker Compose file after writing the configuration in the next section.
Configuring Vault with File Storage Backend
Many Vault guides assume you'll run Consul as a storage backend. For a single-server deployment on a VPS, this adds unnecessary complexity. The integrated file storage backend is simpler, performant, and perfectly adequate for small to medium workloads.
Create (or edit) the Vault configuration file:
sudo nano /etc/vault.d/vault.hcl
Add the following configuration:
ui = true

storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault.d/tls/vault-cert.pem"
  tls_key_file  = "/etc/vault.d/tls/vault-key.pem"
}

api_addr      = "https://vault.yourdomain.com:8200"
disable_mlock = false
log_level     = "info"
Key configuration choices here: ui = true enables the web interface at https://vault.yourdomain.com:8200/ui. The file storage backend writes encrypted data to /opt/vault/data. TLS is mandatory — never run Vault without encryption in transit. Set disable_mlock = false to prevent secrets from being swapped to disk (the default and recommended setting).
For TLS certificates, use Let's Encrypt with Certbot. The standalone authenticator needs port 80 reachable, and on each renewal you'll need to repeat the copy steps below (a Certbot deploy hook is a good place for them):
sudo apt install -y certbot
sudo certbot certonly --standalone -d vault.yourdomain.com
sudo mkdir -p /etc/vault.d/tls
sudo cp /etc/letsencrypt/live/vault.yourdomain.com/fullchain.pem /etc/vault.d/tls/vault-cert.pem
sudo cp /etc/letsencrypt/live/vault.yourdomain.com/privkey.pem /etc/vault.d/tls/vault-key.pem
sudo chown vault:vault /etc/vault.d/tls/*
If you chose the Docker path, create a docker-compose.yml:
version: "3.8"
services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    restart: unless-stopped
    cap_add:
      - IPC_LOCK
    ports:
      - "8200:8200"
    volumes:
      - /opt/vault/config:/vault/config
      - /opt/vault/data:/vault/data
      - /opt/vault/logs:/vault/logs
      - /etc/vault.d/tls:/vault/tls:ro
    environment:
      VAULT_LOCAL_CONFIG: |
        ui = true
        storage "file" {
          path = "/vault/data"
        }
        listener "tcp" {
          address = "0.0.0.0:8200"
          tls_cert_file = "/vault/tls/vault-cert.pem"
          tls_key_file = "/vault/tls/vault-key.pem"
        }
        api_addr = "https://vault.yourdomain.com:8200"
    command: vault server -config=/vault/config
Start Vault with docker compose up -d or sudo systemctl start vault depending on your installation method.
Initializing and Unsealing Vault
This is the most critical step in your entire Vault deployment. When Vault starts for the first time, it's in an uninitialized state. Initialization generates the master encryption key and splits it into unseal keys using Shamir's Secret Sharing algorithm.
Set the Vault address in your environment:
export VAULT_ADDR="https://vault.yourdomain.com:8200"
Initialize Vault with five key shares and a threshold of three (meaning any three of the five keys can unseal Vault):
vault operator init -key-shares=5 -key-threshold=3
This command outputs five unseal keys and an initial root token. This output is shown exactly once and never again. Store each unseal key in a separate, secure location. Distribute them to different trusted individuals. If you lose enough keys that you can't meet the threshold, your Vault data is permanently unrecoverable. There is no backdoor, no recovery mechanism, no support ticket that can help.
Store the keys securely — consider using encrypted USB drives, hardware security modules, or a password manager with separate accounts for each key holder. Never store all keys in the same location.
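To build intuition for why any three of the five keys suffice, here is a toy Shamir split over a prime field. This is an illustration of the threshold property only, not Vault's implementation (Vault splits the key byte-wise over GF(2^8)):

```python
import random

P = 2**127 - 1  # a Mersenne prime; a toy field, not what Vault uses internally

def split(secret: int, shares: int, threshold: int):
    """Evaluate a random degree-(threshold-1) polynomial at x = 1..shares."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    """Lagrange-interpolate at x = 0; needs `threshold` distinct points."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = split(424242, shares=5, threshold=3)
assert combine(shares[:3]) == 424242   # any three keys recover the secret
assert combine(shares[2:]) == 424242   # ...any three at all
assert combine(shares[:2]) != 424242   # two keys reveal nothing useful
```

With fewer than three points, every candidate secret is equally consistent with the shares, which is why losing keys below the threshold is unrecoverable.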
Unseal Vault by providing three of the five keys:
vault operator unseal # Enter key 1
vault operator unseal # Enter key 2
vault operator unseal # Enter key 3
After the third key, Vault transitions to an unsealed state and begins serving requests. Authenticate with the root token:
vault login <initial-root-token>
Important: the root token should be used only for initial setup. Create administrative policies and tokens, then revoke the root token. You can generate a new root token later using the unseal keys if needed.
KV v2 Secrets Engine
Vault's Key-Value (KV) secrets engine is the most commonly used feature. Version 2 adds versioning, which lets you retrieve previous versions of a secret and set automatic deletion policies.
Enable the KV v2 engine:
vault secrets enable -path=secret kv-v2
Store a secret:
vault kv put secret/myapp/database \
username="app_user" \
password="s3cure-p@ssw0rd" \
host="db.internal:5432" \
dbname="production"
Retrieve the secret:
vault kv get secret/myapp/database
Retrieve a specific field:
vault kv get -field=password secret/myapp/database
View all versions of a secret:
vault kv metadata get secret/myapp/database
Roll back to a previous version:
vault kv rollback -version=1 secret/myapp/database
Set a maximum number of versions to retain:
vault kv metadata put -max-versions=10 secret/myapp/database
The KV engine stores all data encrypted at rest. Combined with TLS in transit and access policies (covered next), you have encryption at every layer.
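When you read the same secret over the HTTP API (as the integration examples later do), note the response nesting: KV v2 wraps your key-value pairs under data.data, with version info under data.metadata. A minimal sketch of unwrapping such a response (payload abridged from the documented shape):

```python
import json

# Abridged shape of a KV v2 read response: the secret sits under data.data,
# version info under data.metadata
raw = """
{"data": {"data": {"username": "app_user", "password": "s3cure-p@ssw0rd"},
          "metadata": {"version": 2}}}
"""

payload = json.loads(raw)
secret = payload["data"]["data"]                   # your key-value pairs
version = payload["data"]["metadata"]["version"]   # KV v2 version number
print(secret["username"], version)                 # prints: app_user 2
```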
Policies and Access Control
Vault's policy system uses HCL (HashiCorp Configuration Language) to define what actions an authenticated identity can perform on which paths. Policies follow a deny-by-default model — if a policy doesn't explicitly grant access, the access is denied.
Create a policy file for your application:
cat <<EOF > /tmp/myapp-policy.hcl
# Read-only access to myapp secrets (KV v2 reads go through secret/data/)
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
# Listing secret names happens via the metadata path in KV v2
path "secret/metadata/myapp/*" {
  capabilities = ["list"]
}
# Explicitly deny other apps' secrets (deny overrides any other grant)
path "secret/data/otherapp/*" {
  capabilities = ["deny"]
}
# Allow token self-renewal
path "auth/token/renew-self" {
  capabilities = ["update"]
}
EOF
Write the policy to Vault:
vault policy write myapp /tmp/myapp-policy.hcl
Create an admin policy with broader permissions:
cat <<EOF > /tmp/admin-policy.hcl
# Full access to all KV secrets
path "secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
# Manage policies
path "sys/policies/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
# Manage auth methods
path "auth/*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# View audit logs
path "sys/audit*" {
  capabilities = ["read", "list", "sudo"]
}
EOF
vault policy write admin /tmp/admin-policy.hcl
Note the path structure: KV v2 uses secret/data/ for read/write operations and secret/metadata/ for metadata operations. This is a common gotcha — if your policy says secret/myapp/* without the data/ segment, reads will fail.
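To keep that mapping straight when building API URLs, it can help to make the data/metadata split explicit with a tiny helper. This function is purely illustrative, not part of any Vault SDK:

```python
def kv2_api_path(mount: str, path: str, kind: str = "data") -> str:
    """Build the HTTP API path for a KV v2 secret.

    kind="data" for reads/writes, kind="metadata" for version operations;
    illustrative helper only, not part of any official client library.
    """
    if kind not in ("data", "metadata"):
        raise ValueError("kind must be 'data' or 'metadata'")
    return f"/v1/{mount}/{kind}/{path}"

print(kv2_api_path("secret", "myapp/database"))
# prints: /v1/secret/data/myapp/database -- and the policy path mirrors it
```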
Auth Methods
Vault supports multiple authentication methods. Here are the three most practical for VPS deployments.
Token Authentication
The simplest method. Generate a token attached to a policy:
vault token create -policy="myapp" -ttl=24h -renewable=true
This returns a token that can read secrets under secret/data/myapp/* for 24 hours. Applications use this token to authenticate. Simple but limited — you need to distribute and manage tokens manually.
AppRole Authentication
Designed for machine-to-machine authentication. AppRole uses a role ID (like a username) and a secret ID (like a password) to obtain a token.
# Enable AppRole
vault auth enable approle
# Create a role
vault write auth/approle/role/myapp \
token_policies="myapp" \
token_ttl=1h \
token_max_ttl=4h \
secret_id_ttl=720h \
secret_id_num_uses=0
# Get the role ID
vault read auth/approle/role/myapp/role-id
# Generate a secret ID
vault write -f auth/approle/role/myapp/secret-id
Applications authenticate by posting both IDs to the login endpoint:
vault write auth/approle/login \
role_id="<role-id>" \
secret_id="<secret-id>"
AppRole is the recommended method for services and automated systems. The role ID can be baked into the deployment, while the secret ID is delivered through a separate, secure channel.
Userpass Authentication
For human users who need CLI or UI access:
vault auth enable userpass
vault write auth/userpass/users/devlead \
password="strong-password-here" \
policies="admin"
vault write auth/userpass/users/developer \
password="another-password" \
policies="myapp"
Users log in with:
vault login -method=userpass username=developer
Integration #1: Docker Compose Credential Injection
A common pattern is fetching secrets from Vault at container startup and injecting them as environment variables. Create a small entrypoint script that retrieves credentials before launching the application.
Create vault-entrypoint.sh:
#!/bin/bash
set -euo pipefail
# Requires curl and jq in the container image

# Authenticate via AppRole
VAULT_TOKEN=$(curl -sf \
  --request POST \
  --data "{\"role_id\":\"${VAULT_ROLE_ID}\",\"secret_id\":\"${VAULT_SECRET_ID}\"}" \
  "${VAULT_ADDR}/v1/auth/approle/login" | jq -r '.auth.client_token')

# Fetch secrets (KV v2 nests the payload under .data.data)
SECRETS=$(curl -sf \
  --header "X-Vault-Token: ${VAULT_TOKEN}" \
  "${VAULT_ADDR}/v1/secret/data/myapp/database")

# Export as environment variables
export DB_HOST=$(echo "$SECRETS" | jq -r '.data.data.host')
export DB_USER=$(echo "$SECRETS" | jq -r '.data.data.username')
export DB_PASS=$(echo "$SECRETS" | jq -r '.data.data.password')
export DB_NAME=$(echo "$SECRETS" | jq -r '.data.data.dbname')

# Hand off to the main application process
exec "$@"
Reference it in your docker-compose.yml:
services:
  webapp:
    image: myapp:latest
    entrypoint: ["/vault-entrypoint.sh"]
    command: ["node", "server.js"]
    environment:
      VAULT_ADDR: "https://vault.yourdomain.com:8200"
      VAULT_ROLE_ID: "${VAULT_ROLE_ID}"
      VAULT_SECRET_ID: "${VAULT_SECRET_ID}"
    volumes:
      - ./vault-entrypoint.sh:/vault-entrypoint.sh:ro
The only credentials stored locally are the AppRole IDs, which are scoped to a specific policy and can be rotated independently. The actual database passwords, API keys, and other sensitive values never touch the filesystem.
Integration #2: Application-Level Retrieval
For tighter integration, fetch secrets directly in your application code. This approach gives you control over caching, renewal, and error handling.
Node.js Example
const https = require('https');

class VaultClient {
  constructor(addr, roleId, secretId) {
    this.addr = addr;
    this.roleId = roleId;
    this.secretId = secretId;
    this.token = null;
  }

  async authenticate() {
    const response = await this.request('POST', '/v1/auth/approle/login', {
      role_id: this.roleId,
      secret_id: this.secretId
    });
    this.token = response.auth.client_token;
    return this.token;
  }

  async getSecret(path) {
    if (!this.token) await this.authenticate();
    const response = await this.request('GET', `/v1/secret/data/${path}`);
    return response.data.data;
  }

  request(method, path, body) {
    return new Promise((resolve, reject) => {
      const url = new URL(path, this.addr);
      const options = {
        method,
        hostname: url.hostname,
        port: url.port,
        path: url.pathname,
        headers: { 'Content-Type': 'application/json' }
      };
      if (this.token) options.headers['X-Vault-Token'] = this.token;
      const req = https.request(options, (res) => {
        let data = '';
        res.on('data', chunk => data += chunk);
        res.on('end', () => resolve(JSON.parse(data)));
      });
      req.on('error', reject);
      if (body) req.write(JSON.stringify(body));
      req.end();
    });
  }
}

// Usage
const vault = new VaultClient(
  process.env.VAULT_ADDR,
  process.env.VAULT_ROLE_ID,
  process.env.VAULT_SECRET_ID
);

async function startApp() {
  const dbCreds = await vault.getSecret('myapp/database');
  console.log(`Connecting to ${dbCreds.host} as ${dbCreds.username}`);
  // Initialize database connection with fetched credentials
}

startApp().catch(console.error);
Python Example
import os
import requests

class VaultClient:
    def __init__(self, addr, role_id, secret_id):
        self.addr = addr
        self.role_id = role_id
        self.secret_id = secret_id
        self.token = None

    def authenticate(self):
        resp = requests.post(
            f"{self.addr}/v1/auth/approle/login",
            json={"role_id": self.role_id, "secret_id": self.secret_id},
        )
        resp.raise_for_status()
        self.token = resp.json()["auth"]["client_token"]
        return self.token

    def get_secret(self, path):
        if not self.token:
            self.authenticate()
        resp = requests.get(
            f"{self.addr}/v1/secret/data/{path}",
            headers={"X-Vault-Token": self.token},
        )
        resp.raise_for_status()
        return resp.json()["data"]["data"]

# Usage
vault = VaultClient(
    addr=os.environ["VAULT_ADDR"],
    role_id=os.environ["VAULT_ROLE_ID"],
    secret_id=os.environ["VAULT_SECRET_ID"],
)
db_creds = vault.get_secret("myapp/database")
print(f"Connecting to {db_creds['host']} as {db_creds['username']}")
Both examples use the AppRole auth method and retrieve secrets from the KV v2 engine. In production, add retry logic, token renewal handling, and cache secrets with a TTL to avoid hitting Vault on every request. For secret retrieval under load, consistent response times matter — a MassiveGRID VDS with dedicated resources ensures your Vault instance isn't competing with noisy neighbors for CPU cycles during peak authentication traffic.
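One way to add the caching mentioned above is a small TTL wrapper around whatever fetch function your client exposes. The class and parameter names here are illustrative, not part of Vault's API:

```python
import time

class TTLSecretCache:
    """Cache secret lookups for `ttl` seconds so Vault isn't hit per request."""

    def __init__(self, fetch, ttl=300, clock=time.monotonic):
        self.fetch = fetch      # any callable, e.g. a Vault client's get_secret
        self.ttl = ttl
        self.clock = clock      # injectable for testing
        self._store = {}        # path -> (expires_at, value)

    def get(self, path):
        expires_at, value = self._store.get(path, (0, None))
        if self.clock() >= expires_at:
            value = self.fetch(path)    # refresh from Vault on expiry
            self._store[path] = (self.clock() + self.ttl, value)
        return value
```

Wrapping the client is one line — cache = TTLSecretCache(vault.get_secret, ttl=300) — after which the first lookup of a path hits Vault and later lookups within five minutes are served locally.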
Integration #3: Dynamic PostgreSQL Credentials
This is where Vault truly shines. Instead of storing static database passwords, Vault can generate unique, short-lived credentials for each application instance. When the credentials expire, they're automatically revoked. A compromised credential is useless within minutes.
Enable the database secrets engine:
vault secrets enable database
Configure the PostgreSQL connection:
vault write database/config/mydb \
plugin_name=postgresql-database-plugin \
allowed_roles="myapp-role" \
connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/production?sslmode=require" \
username="vault_admin" \
password="vault-admin-password"
Create a role that defines the SQL statements for creating and revoking users:
vault write database/roles/myapp-role \
db_name=mydb \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; \
DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
Request dynamic credentials:
vault read database/creds/myapp-role
This returns a unique username and password valid for one hour. Each request generates a new credential pair. Your application requests fresh credentials at startup, and Vault handles revocation when the lease expires.
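When consuming these credentials programmatically, renew or replace them before the lease runs out. A common rule of thumb, sketched here with a hypothetical helper, is to act at around two-thirds of the lease_duration reported in the credential response, leaving headroom for retries:

```python
def renew_after(lease_duration_s: int) -> int:
    """Seconds to wait before renewing a lease, leaving headroom for retries."""
    return lease_duration_s * 2 // 3   # act at ~2/3 of the lease lifetime

print(renew_after(3600))  # 1h lease (default_ttl above) -> renew after 2400s
```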
Update your application's Vault policy to allow dynamic credential generation:
path "database/creds/myapp-role" {
  capabilities = ["read"]
}
Dynamic credentials eliminate the need for password rotation procedures, reduce blast radius from credential leaks, and provide per-instance accountability in your database audit logs.
Audit Logging
Vault can log every authenticated request and response, providing a complete audit trail of who accessed what, when, and from where. This is essential for compliance (SOC 2, HIPAA, PCI-DSS) and incident investigation.
Enable file-based audit logging:
vault audit enable file file_path=/opt/vault/logs/audit.log
Each log entry is a JSON object containing the request method, path, source IP, authentication details, and a hash of the data (actual secret values are HMAC-hashed, not logged in plaintext). Sample entry structure:
{
  "type": "response",
  "auth": {
    "token_type": "service",
    "policies": ["default", "myapp"],
    "metadata": { "role_name": "myapp" }
  },
  "request": {
    "id": "a1b2c3d4-...",
    "operation": "read",
    "path": "secret/data/myapp/database",
    "remote_address": "10.0.1.50"
  },
  "response": {
    "data": {
      "data": {
        "host": "hmac-sha256:abc123...",
        "password": "hmac-sha256:def456..."
      }
    }
  }
}
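Because values are HMAC-SHA256 hashes keyed with a salt private to each audit device, you can test whether a known plaintext matches a logged hash (Vault exposes this via the sys/audit-hash endpoint) without the log ever containing the secret. The mechanics, with a made-up salt:

```python
import hashlib
import hmac

# Vault keys the HMAC with a salt private to each audit device;
# this salt value is made up purely for illustration.
salt = b"per-audit-device-salt"
candidate = b"s3cure-p@ssw0rd"

digest = hmac.new(salt, candidate, hashlib.sha256).hexdigest()
print(f"hmac-sha256:{digest}")  # same shape as the hashed fields in audit.log
```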
Set up log rotation to prevent disk exhaustion:
cat <<EOF | sudo tee /etc/logrotate.d/vault
/opt/vault/logs/audit.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        /bin/systemctl kill --signal HUP vault.service
    endscript
}
EOF
Vault reopens its audit log files on SIGHUP, which avoids the small window in which logrotate's copytruncate mode could silently drop audit entries.
Important: if the audit device fails (disk full, permissions error), Vault stops responding to all requests rather than operating without an audit trail. This is a security feature — monitor your disk space and log rotation closely.
Backup Strategy for Encrypted Data
The file storage backend writes encrypted data to /opt/vault/data. Back up this directory regularly, but understand what you're backing up: the data is encrypted with the master key, which is itself protected by the unseal keys. Without the unseal keys, a backup is useless ciphertext.
Create an automated backup script:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/opt/vault/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "${BACKUP_DIR}"
# Create a snapshot of the data directory
tar czf "${BACKUP_DIR}/vault-data-${TIMESTAMP}.tar.gz" -C /opt/vault data/
# Remove backups older than 30 days
find "${BACKUP_DIR}" -name "vault-data-*.tar.gz" -mtime +30 -delete
echo "Backup completed: vault-data-${TIMESTAMP}.tar.gz"
Schedule it with cron:
crontab -e
# Add: 0 2 * * * /opt/vault/scripts/backup.sh >> /var/log/vault-backup.log 2>&1
Your backup strategy must include three things: the encrypted data directory, the Vault configuration file (not sensitive but necessary for restoration), and the unseal keys (stored separately and securely). Without all three, you cannot restore a Vault instance.
Auto-Unseal Options
Manual unsealing is secure but operationally painful. Every time Vault restarts — server reboot, process crash, upgrade — someone needs to provide unseal keys before it serves requests. For a VPS that might reboot for kernel updates, this creates availability gaps.
Vault supports auto-unseal using external key management services:
AWS KMS: Vault encrypts its master key with an AWS KMS key. On startup, it calls KMS to decrypt the master key automatically. Add to your Vault config:
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "your-kms-key-id"
}
GCP Cloud KMS: Same concept using Google's key management:
seal "gcpckms" {
  project    = "your-project"
  region     = "global"
  key_ring   = "vault-keyring"
  crypto_key = "vault-key"
}
Transit Auto-Unseal: Use another Vault instance's transit secrets engine to auto-unseal. This is useful when you have a highly available "root" Vault cluster that can unseal downstream instances.
Auto-unseal doesn't reduce security — it shifts the trust boundary from "humans with unseal keys" to "an external KMS with access controls." Choose based on your infrastructure and threat model.
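The seal stanzas above implement envelope encryption: the root key that protects Vault's storage keyring is persisted only in wrapped form, and the KMS unwraps it at startup. A toy sketch of the concept, with an XOR keystream standing in for real KMS cryptography (illustration only, not Vault's actual crypto):

```python
import hashlib
import os

def toy_wrap(kms_key: bytes, data: bytes) -> bytes:
    """XOR `data` with a keystream derived from kms_key (illustration only)."""
    stream = hashlib.sha256(kms_key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

kms_key = os.urandom(32)   # lives inside AWS KMS / Cloud KMS, never on the VPS
root_key = os.urandom(32)  # stands in for the key protecting Vault's keyring

wrapped = toy_wrap(kms_key, root_key)          # what lands on disk
assert toy_wrap(kms_key, wrapped) == root_key  # startup: the KMS unwraps it
```

The disk never holds the unwrapped root key, so compromising the VPS alone is not enough — the attacker would also need the KMS key, which is exactly the trust-boundary shift described above.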
For migrating from Shamir's unseal keys to auto-unseal, Vault provides a -migrate flag during the unseal process. Plan a maintenance window for this operation.
Production Hardening Checklist
Before considering your Vault deployment production-ready, verify these configurations:
- TLS everywhere: Never expose Vault over plain HTTP, even on a private network.
- Revoke the root token: Use vault token revoke <root-token> after creating admin users and policies.
- Firewall rules: Restrict port 8200 to known application IPs and administrator networks only.
- Audit logging enabled: At least one audit device must be active at all times.
- Backup automation: Scheduled backups with offsite copies and tested restoration procedures.
- Token TTLs: Set appropriate TTLs on all tokens. Avoid long-lived or non-expiring tokens.
- Secret versioning limits: Configure max-versions on KV metadata to prevent unbounded storage growth.
- Monitoring: Vault exposes Prometheus metrics at /v1/sys/metrics. Monitor seal status, token count, and storage utilization.
- Lease management: Monitor active leases and set reasonable default/max TTLs to avoid credential sprawl.
# Check Vault status at a glance
vault status
# List all active leases
vault list sys/leases/lookup/database/creds/myapp-role
# Check seal status specifically
vault status -format=json | jq '.sealed'
Your Most Critical Service Deserves Managed Infrastructure
Vault quickly becomes the most critical service in your infrastructure. Every application depends on it for credentials. If Vault goes down, nothing can authenticate, connect to databases, or access API keys. Availability isn't a nice-to-have — it's existential.
For security-critical infrastructure like Vault, where an outage cascades to every service in your stack, consider MassiveGRID's fully managed hosting. Managed infrastructure means automatic failover, proactive monitoring, security patching, and 24/7 support from engineers who understand high-availability deployments. You focus on configuring policies and integrations; the platform handles uptime, backups, and disaster recovery.
Secrets management is one of those rare infrastructure components where doing it right pays dividends across every service, every deployment, and every team in your organization. Start with a single Vault instance on a VPS, centralize your most sensitive credentials, and expand from there. The migration from scattered .env files to a centralized, auditable, policy-driven secrets platform is one of the highest-leverage infrastructure improvements you can make.