Pushing code changes directly to production is a recipe for outages, broken features, and sleepless nights. A proper development and staging environment gives you a safe place to write, test, and validate code before it ever reaches your users. While local development environments work for individual coding, a VPS-based staging environment provides a shared testing ground that mirrors your production setup, accessible to your entire team and integrated with your deployment pipeline.

This guide walks through setting up a complete development and staging workflow on a VPS, covering Git-based deployments, Docker containerization, CI/CD automation, environment variable management, database cloning, and Nginx reverse proxy configuration. By the end, you will have a robust pipeline that catches bugs before they become incidents.

Why You Need a Staging Environment

A staging environment is a near-identical copy of your production system used for final testing before deployment. It serves several critical purposes:

- Catching bugs that only surface with production-like data, configuration, and infrastructure
- Giving the whole team a shared place to validate features, rather than relying on each developer's local setup
- Rehearsing deployments and database migrations before they run against production
- Acting as the final automated checkpoint in your pipeline before code reaches users

Architecture Overview

A single VPS can host both development and staging environments, with Docker containers providing the isolation and an Nginx reverse proxy handling the routing. The architecture looks like this:

| Component | Dev Environment | Staging Environment |
|---|---|---|
| URL | dev.yourdomain.com | staging.yourdomain.com |
| Git branch | develop | main (or release branch) |
| Database | Separate database instance | Clone of production data |
| Config | Development environment variables | Production-like environment variables |
| Auto-deploy | On push to develop | On push to main / manual trigger |

A VPS with 4 vCPUs, 8 GB RAM, and 80 GB NVMe storage is sufficient for most projects hosting two environments. MassiveGRID's Cloud VPS plans let you scale resources independently as your project grows.

Step 1: Install Docker and Docker Compose

Docker provides the isolation layer that allows multiple environments to coexist on a single VPS without conflicting dependencies, port clashes, or shared state.

# Install Docker on Ubuntu
sudo apt update
sudo apt install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y

# Add your user to the docker group
sudo usermod -aG docker $USER
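# (log out and back in, or run 'newgrp docker', for the group change to take effect)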

# Verify installation
docker --version
docker compose version

Step 2: Create the Project Structure

Organize your deployment files in a clear directory structure on the VPS:

# Create directories for each environment
sudo mkdir -p /opt/apps/dev
sudo mkdir -p /opt/apps/staging
sudo mkdir -p /opt/apps/shared/secrets
sudo chown -R $USER:$USER /opt/apps
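
Once the remaining steps are complete, each environment directory will hold its own Compose file, .env file, deployment script, and application checkout, roughly like this:

/opt/apps/
├── dev/
│   ├── docker-compose.yml
│   ├── .env
│   ├── deploy.sh
│   └── src/                # Git checkout of the develop branch
├── staging/
│   ├── docker-compose.yml
│   ├── .env
│   ├── deploy.sh
│   ├── clone-db.sh
│   └── src/                # Git checkout of the main branch
└── shared/
    └── secrets/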

Step 3: Docker Compose for Each Environment

Create a Docker Compose file that defines your application stack. Here is an example for a Node.js application with a PostgreSQL database and Redis cache:

# /opt/apps/dev/docker-compose.yml
services:
  app:
    build:
      context: ./src
      dockerfile: Dockerfile
    container_name: dev-app
    restart: unless-stopped
    env_file: .env
    ports:
      - "3001:3000"
    depends_on:
      - db
      - redis
    volumes:
      - ./src:/app
      - /app/node_modules

  db:
    image: postgres:16
    container_name: dev-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - dev-pgdata:/var/lib/postgresql/data
    ports:
      - "5433:5432"

  redis:
    image: redis:7-alpine
    container_name: dev-redis
    restart: unless-stopped
    ports:
      - "6380:6379"

volumes:
  dev-pgdata:

Create a similar file for staging at /opt/apps/staging/docker-compose.yml, adjusting the container names, ports (e.g., 3002:3000, 5434:5432), and volume names to avoid conflicts:

# /opt/apps/staging/docker-compose.yml
services:
  app:
    build:
      context: ./src
      dockerfile: Dockerfile
    container_name: staging-app
    restart: unless-stopped
    env_file: .env
    ports:
      - "3002:3000"
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    container_name: staging-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - staging-pgdata:/var/lib/postgresql/data
    ports:
      - "5434:5432"

  redis:
    image: redis:7-alpine
    container_name: staging-redis
    restart: unless-stopped
    ports:
      - "6381:6379"

volumes:
  staging-pgdata:
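
With both Compose files in place, you can bring each stack up manually for a first smoke test (the deploy scripts in Step 5 will do this automatically later):

cd /opt/apps/dev && docker compose up -d
cd /opt/apps/staging && docker compose up -d

Note that the app service will fail to build until src/ contains your code; the deployment scripts in Step 5 handle the initial clone.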

Step 4: Environment Variable Management

Each environment needs its own set of configuration values. Create separate .env files for dev and staging, and never commit them to version control.

# /opt/apps/dev/.env
NODE_ENV=development
DB_NAME=myapp_dev
DB_USER=devuser
DB_PASSWORD=dev_secure_password_here
DB_HOST=db
DB_PORT=5432
REDIS_URL=redis://redis:6379
API_URL=https://dev.yourdomain.com/api
LOG_LEVEL=debug
# A local mail-catcher (e.g. MailHog) is assumed for dev email testing
SMTP_HOST=mailhog
SMTP_PORT=1025

# /opt/apps/staging/.env
NODE_ENV=staging
DB_NAME=myapp_staging
DB_USER=staginguser
DB_PASSWORD=staging_secure_password_here
DB_HOST=db
DB_PORT=5432
REDIS_URL=redis://redis:6379
API_URL=https://staging.yourdomain.com/api
LOG_LEVEL=warn
SMTP_HOST=smtp.yourdomain.com
SMTP_PORT=587

Security note: Use different credentials for dev, staging, and production. Never reuse production database passwords in lower environments. Store secrets securely and restrict file permissions: chmod 600 /opt/apps/*/.env
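
Rather than inventing placeholder passwords like those above, generate random credentials when creating each .env file:

# Generate a strong random password
openssl rand -base64 24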

Step 5: Set Up Git-Based Deployments

Automate deployments by pulling code from your Git repository when changes are pushed. Create a deployment script for each environment:

#!/bin/bash
# /opt/apps/dev/deploy.sh

set -e

REPO_URL="git@github.com:yourorg/yourapp.git"
BRANCH="develop"
APP_DIR="/opt/apps/dev/src"

echo "[$(date)] Starting dev deployment..."

# Clone or pull latest code
if [ -d "$APP_DIR/.git" ]; then
    cd "$APP_DIR"
    git fetch origin
    git reset --hard "origin/$BRANCH"
else
    git clone -b "$BRANCH" "$REPO_URL" "$APP_DIR"
fi

# Rebuild and restart containers
cd /opt/apps/dev
docker compose build --no-cache app
docker compose up -d

# Run database migrations
docker compose exec -T app npm run migrate

echo "[$(date)] Dev deployment complete."
chmod +x /opt/apps/dev/deploy.sh
chmod +x /opt/apps/staging/deploy.sh
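
You may also want the script to confirm the app actually came back up after a deploy. A minimal sketch, assuming your app exposes a health endpoint (the /health path here is hypothetical):

# Optionally append to deploy.sh: abort with an error if the app is down
sleep 5
if ! curl -fsS http://127.0.0.1:3001/health > /dev/null; then
    echo "[$(date)] Health check failed after deployment!" >&2
    exit 1
fi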

Step 6: Configure Nginx Reverse Proxy

Nginx acts as the front door, routing requests to the correct Docker container based on the subdomain. Install Nginx and configure virtual hosts:

sudo apt install nginx -y

Create Nginx server blocks for each environment:

# /etc/nginx/sites-available/dev.yourdomain.com
server {
    listen 80;
    server_name dev.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

# /etc/nginx/sites-available/staging.yourdomain.com
server {
    listen 80;
    server_name staging.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

# Enable the sites
sudo ln -s /etc/nginx/sites-available/dev.yourdomain.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/staging.yourdomain.com /etc/nginx/sites-enabled/

# Test and reload
sudo nginx -t
sudo systemctl reload nginx

# Add SSL with Certbot
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d dev.yourdomain.com -d staging.yourdomain.com
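# Note: DNS A records for both subdomains must already point to this
# VPS, or Certbot's domain validation will fail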

Step 7: Database Cloning for Staging

The staging environment should use a recent copy of production data (with sensitive fields anonymized) to catch data-related bugs. Create a script that clones and sanitizes the production database:

#!/bin/bash
# /opt/apps/staging/clone-db.sh

set -e

PROD_DB_HOST="your-production-db-host"
PROD_DB_NAME="myapp_production"
PROD_DB_USER="readonly_user"
DUMP_FILE="/tmp/prod_dump.sql"

# PROD_DB_PASSWORD is expected in the environment; export it from a
# permission-restricted file rather than hardcoding it here.
# pg_dump itself requires the postgresql-client package on the VPS.

echo "[$(date)] Dumping production database..."
PGPASSWORD="$PROD_DB_PASSWORD" pg_dump \
    -h "$PROD_DB_HOST" \
    -U "$PROD_DB_USER" \
    -d "$PROD_DB_NAME" \
    --no-owner \
    --no-acl \
    -F c \
    -f "$DUMP_FILE"

echo "[$(date)] Restoring to staging..."
docker compose exec -T db pg_restore \
    -U staginguser \
    -d myapp_staging \
    --clean \
    --no-owner \
    "$DUMP_FILE"

echo "[$(date)] Sanitizing sensitive data..."
docker compose exec -T db psql -U staginguser -d myapp_staging -c "
    UPDATE users SET
        email = 'user' || id || '@staging.test',
        password_hash = 'invalidated',
        phone = NULL;
    UPDATE payment_methods SET
        card_number = '****',
        token = NULL;
    DELETE FROM sessions;
"

rm -f "$DUMP_FILE"
echo "[$(date)] Database clone complete."

Schedule this script to run weekly or before major releases using a cron job.
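
For example, a crontab entry (crontab -e) like this refreshes staging every Monday at 03:00 and keeps a log of each run:

# Refresh the staging database weekly
0 3 * * 1 /opt/apps/staging/clone-db.sh >> /opt/apps/staging/clone-db.log 2>&1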

Step 8: CI/CD Pipeline Integration

Connect your Git repository to automatically trigger deployments. Here is an example using GitHub Actions that deploys to your VPS via SSH:

# .github/workflows/deploy-dev.yml
name: Deploy to Dev
on:
  push:
    branches: [develop]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: /opt/apps/dev/deploy.sh

# .github/workflows/deploy-staging.yml
name: Deploy to Staging
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test

  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: /opt/apps/staging/deploy.sh

Note that the staging pipeline includes a test step that must pass before deployment proceeds. This ensures only tested code reaches the staging environment.
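
The architecture table above also lists a manual trigger for staging. GitHub Actions supports this through workflow_dispatch; adding it to the on: block lets you redeploy staging from the Actions tab without pushing a new commit:

# In .github/workflows/deploy-staging.yml
on:
  push:
    branches: [main]
  workflow_dispatch:    # enables manual runs from the GitHub Actions UI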

Step 9: Access Control and Security

Staging environments often contain production-like data. Protect them accordingly:

HTTP Basic Authentication

Add password protection to your staging environment to prevent unauthorized access:

# Create password file
sudo apt install apache2-utils -y
sudo htpasswd -c /etc/nginx/.htpasswd staging_user

# Add to Nginx staging server block
location / {
    auth_basic "Staging Environment";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:3002;
    # ... other proxy settings
}
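
To add more team members later, run htpasswd without the -c flag; -c creates a new file and would overwrite the existing one:

sudo htpasswd /etc/nginx/.htpasswd another_user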

IP Whitelisting

Alternatively, restrict access to specific IP addresses (your office, VPN, or team members' IPs):

# In the staging Nginx server block
allow 203.0.113.0/24;    # Office IP range
allow 198.51.100.10;      # Developer home IP
deny all;

Firewall Rules

# Ensure only necessary ports are open
sudo ufw allow 22/tcp     # SSH
sudo ufw allow 80/tcp     # HTTP
sudo ufw allow 443/tcp    # HTTPS
sudo ufw enable

MassiveGRID provides DDoS protection and comprehensive security features at the infrastructure level, adding another layer of defense to your staging environment.

Step 10: Monitoring and Logs

Even in non-production environments, monitoring helps you catch issues early:

# View logs for all containers in an environment
cd /opt/apps/staging
docker compose logs -f

# View logs for a specific service
docker compose logs -f app

# Check container resource usage
docker stats

For persistent log management, consider adding a logging stack to your Docker Compose:

# Add to docker-compose.yml for log aggregation
  loki:
    image: grafana/loki:latest
    container_name: staging-loki
    ports:
      - "3100:3100"
    volumes:
      - loki-data:/loki

# ...and add loki-data under the existing top-level volumes key:
# volumes:
#   staging-pgdata:
#   loki-data:
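
Loki only stores and queries logs; something still has to ship container output to it. One option, sketched here, is Grafana's Docker logging driver plugin (installed with docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions), after which a service can send its logs straight to Loki:

# Per-service logging configuration in docker-compose.yml
  app:
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"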

Best Practices for Dev/Staging Environments

A few principles from this guide are worth restating:

- Keep staging as close to production as possible: same images, same service versions, production-like environment variables.
- Never reuse production credentials in lower environments, and always sanitize cloned production data (Step 7).
- Automate everything: deployments, migrations, and database refreshes should run from scripts, not from memory.
- Protect staging with authentication or IP restrictions, since it holds production-like data (Step 9).
- Keep an eye on docker stats so the two environments do not starve each other on a shared VPS.

VPS Sizing for Dev/Staging

| Team Size | Environments | Recommended VPS |
|---|---|---|
| 1-3 developers | Dev + Staging | 2 vCPU, 4 GB RAM, 50 GB NVMe |
| 4-8 developers | Dev + Staging + QA | 4 vCPU, 8 GB RAM, 80 GB NVMe |
| 8+ developers | Multiple feature environments | 8 vCPU, 16 GB RAM, 160 GB NVMe |

MassiveGRID's Cloud VPS plans allow you to scale resources on demand, so you can start small and increase capacity as your team and project grow. For teams using container-heavy workflows, the Docker Hosting or PaaS options provide managed container orchestration with built-in CI/CD support.

Conclusion

A VPS-based development and staging environment transforms your deployment workflow from a stressful manual process into a reliable automated pipeline. Docker provides isolation between environments, Nginx routes traffic to the right containers, your CI/CD pipeline triggers deployments automatically, and environment variables keep configuration separate from code.

The investment in setting up this infrastructure pays for itself with the first bug caught in staging that would have otherwise reached production. Combined with MassiveGRID's NVMe-backed VPS performance, premium network connectivity, and multiple datacenter locations, your development workflow gets the same reliable infrastructure foundation that your production environment deserves.

Deploy a MassiveGRID VPS and build your development pipeline on infrastructure designed for speed and reliability.