If you followed our Ubuntu VPS security hardening guide, you've set up SSH key authentication, disabled password login, and secured the basics. But SSH is far more than a remote shell — it's a secure transport layer that can tunnel any TCP connection, forward ports in both directions, chain through jump hosts, and proxy your entire web browsing. These advanced features let you access databases, admin panels, and internal services securely without opening additional ports on your firewall.

This guide covers every practical SSH technique beyond basic login: config files for managing multiple servers, local and remote port forwarding, SOCKS proxies, jump hosts, agent forwarding, connection multiplexing, and persistent tunnels with autossh.

MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10


The SSH Config File

Before diving into tunnels and forwarding, set up an SSH config file. If you manage multiple VPS instances, typing ssh -i ~/.ssh/key_prod -p 2222 deploy@192.168.1.50 every time is tedious and error-prone. The SSH config file at ~/.ssh/config lets you define named connections with all their parameters.

# Create or edit the SSH config file on your LOCAL machine
mkdir -p ~/.ssh
chmod 700 ~/.ssh
nano ~/.ssh/config

Add entries for each server:

# Production web server
Host prod-web
    HostName 203.0.113.10
    User deploy
    Port 2222
    IdentityFile ~/.ssh/id_prod
    ServerAliveInterval 60
    ServerAliveCountMax 3

# Staging server
Host staging
    HostName 203.0.113.20
    User deploy
    Port 22
    IdentityFile ~/.ssh/id_staging

# Database server (only accessible via prod-web)
Host prod-db
    HostName 10.0.1.50
    User dbadmin
    Port 22
    IdentityFile ~/.ssh/id_prod
    ProxyJump prod-web

# Development VPS
Host dev
    HostName 198.51.100.5
    User developer
    IdentityFile ~/.ssh/id_dev
    LocalForward 5432 127.0.0.1:5432
    LocalForward 6379 127.0.0.1:6379

# Defaults for all connections
Host *
    AddKeysToAgent yes
    IdentitiesOnly yes
    ServerAliveInterval 60
    Compression yes

Now you can connect with just:

ssh prod-web
ssh staging
ssh prod-db    # Automatically jumps through prod-web
ssh dev        # Automatically sets up port forwarding
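
If a connection doesn't behave as expected, you can ask the SSH client to print the fully resolved configuration for an alias without connecting — a quick way to confirm which Host block actually matched:

```shell
# Print the effective, merged configuration for an alias. No connection
# is made, so this works even if the host is unreachable.
ssh -G prod-web | grep -Ei '^(hostname|user|port|identityfile)'
```

Because `ssh -G` applies every matching Host block in order (including the `Host *` defaults), the output shows exactly what `ssh prod-web` would use.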

Set the correct permissions on the config file:

chmod 600 ~/.ssh/config
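
The same idea can be wrapped in a small script that normalizes permissions across the whole directory — handy after copying keys to a new machine, since wrong permissions are the usual cause of "Bad owner or permissions" errors. This is a sketch; the function name is ours, and the directory is a parameter so it's easy to test against a scratch directory:

```shell
#!/usr/bin/env bash
# Normalize SSH client file permissions: 700 on the directory,
# 600 on the config file and private keys.
fix_ssh_perms() {
    local dir="${1:-$HOME/.ssh}"
    mkdir -p "$dir"
    chmod 700 "$dir"
    local f
    for f in "$dir"/config "$dir"/id_*; do
        [ -f "$f" ] || continue              # glob may match nothing
        case "$f" in *.pub) continue ;; esac # public keys may stay readable
        chmod 600 "$f"
    done
    echo "Permissions normalized in $dir"
}

fix_ssh_perms "$HOME/.ssh"
```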

Key Config Options Explained

| Option | Purpose | Recommended value |
| --- | --- | --- |
| IdentityFile | Which SSH key to use | Separate key per server or environment |
| IdentitiesOnly yes | Only try the specified key, not all keys in the agent | Always use in the Host * block |
| ServerAliveInterval | Send a keepalive every N seconds | 60 (prevents idle disconnections) |
| ServerAliveCountMax | Missed keepalives before disconnecting | 3 (disconnects after 3 minutes of no response) |
| Compression yes | Compress SSH traffic | Useful on slow connections |
| AddKeysToAgent yes | Auto-add keys to the SSH agent on first use | Convenient for passphrase-protected keys |

Every MassiveGRID Cloud VPS includes SSH access. SSH tunnels transform that single connection into secure access for databases, admin panels, and any private service running on your server — without opening additional firewall ports.

Local Port Forwarding

Local port forwarding is the most commonly used SSH tunnel. It makes a remote service accessible on your local machine. The traffic flows: your laptop → SSH tunnel → VPS → remote service.

Syntax

ssh -L [local_bind_address:]local_port:remote_host:remote_port user@vps

Example: Access Remote PostgreSQL

Your PostgreSQL database on the VPS listens only on 127.0.0.1:5432 (not exposed to the internet — as it should be). You need to connect to it from your local machine with pgAdmin or DBeaver. See our PostgreSQL installation guide for the initial database setup.

# Forward local port 5432 to the VPS's PostgreSQL
ssh -L 5432:127.0.0.1:5432 prod-web

Now on your local machine, connect your database client to localhost:5432 — the connection is encrypted through the SSH tunnel and arrives at PostgreSQL on the VPS as a local connection.

If you already have PostgreSQL running locally on port 5432, use a different local port:

# Use local port 15432 instead
ssh -L 15432:127.0.0.1:5432 prod-web

# Connect your client to localhost:15432

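Rather than guessing which local ports are free, a short bash helper can probe for the first available port at or above your preferred one. This uses bash's built-in /dev/tcp, so no extra tools are needed; the function name is our own convention:

```shell
#!/usr/bin/env bash
# Return the first free TCP port on localhost, starting from a
# preferred port and counting upward.
find_free_port() {
    local port="${1:-15432}"
    # A successful connect means the port is taken — try the next one.
    while (echo -n > "/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
        port=$((port + 1))
    done
    echo "$port"
}

LOCAL_PORT=$(find_free_port 15432)
echo "Forwarding via local port $LOCAL_PORT"
# ssh -L "$LOCAL_PORT":127.0.0.1:5432 prod-web
```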
Example: Access Admin Panel

You're running Portainer (see our Portainer installation guide) on your VPS, bound to 127.0.0.1:9443. Access it through an SSH tunnel:

ssh -L 9443:127.0.0.1:9443 prod-web

# Open in your local browser: https://localhost:9443

Example: Access Multiple Services Simultaneously

You can forward multiple ports in a single SSH connection:

# Forward PostgreSQL, Redis, and Grafana in one connection
ssh -L 5432:127.0.0.1:5432 \
    -L 6379:127.0.0.1:6379 \
    -L 3000:127.0.0.1:3000 \
    prod-web

Background Tunnel (No Shell)

If you only need the tunnel and don't want an interactive shell session:

# -f: go to background after connecting
# -N: don't execute any remote command
ssh -f -N -L 5432:127.0.0.1:5432 prod-web

# The tunnel runs in the background. Find and kill it later:
ps aux | grep "[s]sh -f -N"
kill <PID>    # replace <PID> with the process ID from the ps output
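
A more robust pattern than grepping ps is a tiny wrapper that records the tunnel's PID when it starts. This is a sketch — the PID file path and the TUNNEL_CMD variable are our own conventions, and TUNNEL_CMD is overridable so the logic can be exercised without a real server:

```shell
#!/usr/bin/env bash
# Manage a background tunnel with a PID file instead of grepping ps.
PIDFILE="${PIDFILE:-/tmp/ssh-tunnel-db.pid}"
TUNNEL_CMD="${TUNNEL_CMD:-ssh -N -L 5432:127.0.0.1:5432 prod-web}"

start() {
    $TUNNEL_CMD &                 # word-splitting is intentional here
    echo $! > "$PIDFILE"
    echo "Tunnel started (pid $(cat "$PIDFILE"))"
}

stop() {
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
        echo "Tunnel stopped"
    else
        echo "No tunnel running"
    fi
}

case "${1:-}" in
    start) start ;;
    stop)  stop ;;
    *)     echo "usage: ${0##*/} start|stop" ;;
esac
```

Save it as, say, `db-tunnel`, then run `db-tunnel start` and `db-tunnel stop` instead of hunting for the process by hand.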

Remote Port Forwarding

Remote port forwarding is the reverse — it makes a service on your local machine accessible through the VPS. The traffic flows: internet → VPS:port → SSH tunnel → your laptop → local service.

Syntax

ssh -R [remote_bind_address:]remote_port:local_host:local_port user@vps

Example: Expose Local Development Server

You're developing a web application on your laptop running on localhost:3000. You need to share it with a client or test it from a mobile device. Instead of deploying to a staging server, expose it through your VPS:

# On your local machine:
ssh -R 8080:127.0.0.1:3000 prod-web

Now http://your-vps-ip:8080 serves your local development application. But by default, SSH only binds remote forwards to 127.0.0.1 on the remote side. To make it accessible on all interfaces, you need to enable GatewayPorts on the VPS:

# On the VPS, edit sshd_config:
sudo nano /etc/ssh/sshd_config

# Add or change:
GatewayPorts clientspecified

# Restart SSH (the service unit is named "ssh" on Ubuntu):
sudo systemctl restart ssh

Then use the bind address explicitly:

# Bind to all interfaces on the VPS
ssh -R 0.0.0.0:8080:127.0.0.1:3000 prod-web

Security warning: Remote port forwarding with GatewayPorts exposes your local service to the internet through your VPS. Make sure UFW allows the port (sudo ufw allow 8080) and that you only keep this tunnel open while actively sharing. Close the SSH connection when you're done.

Example: Webhook Testing

Many services (Stripe, GitHub, Slack) send webhooks to a public URL. During development, you need those webhooks to reach your local machine:

# Expose local port 4000 (your webhook handler) on VPS port 9000
ssh -R 0.0.0.0:9000:127.0.0.1:4000 prod-web

# Configure the webhook provider to send to:
# http://your-vps-ip:9000/webhook/handler

Dynamic Port Forwarding (SOCKS Proxy)

Dynamic port forwarding creates a SOCKS proxy on your local machine that routes all traffic through the VPS. Instead of forwarding specific ports, you forward everything — web browsing, API calls, DNS lookups — through the encrypted SSH tunnel.

Syntax

ssh -D [bind_address:]port user@vps

Setting Up a SOCKS Proxy

# Create a SOCKS5 proxy on local port 1080
ssh -D 1080 -f -N prod-web

Configure your applications to use the SOCKS proxy:

Firefox: Settings → Network Settings → Manual proxy → SOCKS Host: 127.0.0.1, Port: 1080, SOCKS v5. Also check "Proxy DNS when using SOCKS v5".

Chrome (command line):

google-chrome --proxy-server="socks5://127.0.0.1:1080"

curl:

curl --socks5-hostname 127.0.0.1:1080 https://ifconfig.me
# Shows your VPS IP address, not your local IP

Environment variable (affects many CLI tools):

export ALL_PROXY=socks5://127.0.0.1:1080
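
To avoid retyping the export (and forgetting to clear it), you can define a pair of helper functions in your shell profile. The function names are our own; the uppercase proxy variables are the conventions most CLI tools (curl, git, pip) honor, though exact support varies by tool:

```shell
# Toggle the SOCKS proxy for the current shell. Add to ~/.bashrc,
# then use: proxy_on [port] / proxy_off
proxy_on() {
    local port="${1:-1080}"
    export ALL_PROXY="socks5://127.0.0.1:$port"
    export HTTP_PROXY="$ALL_PROXY" HTTPS_PROXY="$ALL_PROXY"
    echo "Proxy enabled: $ALL_PROXY"
}

proxy_off() {
    unset ALL_PROXY HTTP_PROXY HTTPS_PROXY
    echo "Proxy disabled"
}
```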

Use Cases for SOCKS Proxy

- Browsing with your VPS's IP address — test geo-dependent behavior or see how your site renders from another region
- Encrypting all traffic on untrusted networks (public Wi-Fi, hotel networks)
- Reaching services whose firewalls allow only your VPS's IP address
- Quick access to many services on the VPS without defining a separate forward for each port

Jump Hosts (ProxyJump)

In many architectures, internal servers (databases, application servers) aren't directly reachable from the internet. They sit behind a "bastion" or "jump host" — a single server with both public and private network interfaces. SSH's ProxyJump feature lets you connect through the jump host transparently.

The Architecture

Your laptop  ──── SSH ────►  VPS (bastion)  ──── SSH ────►  Internal server
(internet)                   (public + private IP)           (private IP only)
                             203.0.113.10                    10.0.1.50

Command-Line Jump

# Jump through prod-web to reach the database server
ssh -J prod-web dbadmin@10.0.1.50

# Chain multiple jumps
ssh -J bastion1,bastion2 user@final-destination

Config File Jump

We already set this up in our SSH config earlier:

Host prod-db
    HostName 10.0.1.50
    User dbadmin
    ProxyJump prod-web

Now ssh prod-db automatically jumps through prod-web. You can even use SCP and SFTP through the jump host:

# Copy a file to the internal server via jump host
scp -J prod-web backup.sql dbadmin@10.0.1.50:/tmp/

# Or using config names
scp backup.sql prod-db:/tmp/

Port Forwarding Through Jump Hosts

Combine port forwarding with jump hosts to access internal services:

# Access PostgreSQL on an internal server through the jump host
ssh -J prod-web -L 5432:127.0.0.1:5432 dbadmin@10.0.1.50

# Now connect to localhost:5432 — the connection goes:
# Your laptop → prod-web (jump) → 10.0.1.50 (PostgreSQL)

If using your VPS as a jump host for multiple servers, dedicated resources ensure SSH connections remain responsive. Each jump connection consumes CPU for encryption/decryption — with shared resources, many concurrent SSH sessions could experience latency.

SSH Agent Forwarding

Agent forwarding lets you use your local SSH keys on a remote server without copying the keys to that server. This is useful when you need to clone Git repositories or SSH to a third server from your VPS.

How It Works

# Enable agent forwarding for a connection
ssh -A prod-web

# On the VPS, your local SSH agent is available
# You can clone private repos without deploying keys to the VPS:
git clone git@github.com:yourorg/private-repo.git

In your SSH config:

Host prod-web
    ForwardAgent yes

Security Risks of Agent Forwarding

Agent forwarding has a significant security implication: anyone with root access on the remote server can use your forwarded SSH agent to authenticate as you to any other server your keys have access to. While the forwarding is active, the remote server's root user can use your agent socket to:

- Authenticate as you to any server or service (GitHub, other VPS instances) that trusts your keys
- Sign requests silently — the agent gives no notification when it's used

The agent never transmits your private key itself, but signing requests on your behalf is just as powerful while the session is open.

When it's safe:

- Servers you fully control, where you're the only administrator
- Short-lived, attended sessions for a specific task (e.g., pulling a private Git repository)

When to avoid:

- Shared servers or any machine where other people have root access
- Long-running or unattended sessions
- Third-party or otherwise untrusted infrastructure

Safer alternative: Use ProxyJump instead of agent forwarding when possible. ProxyJump doesn't expose your agent on intermediate servers — it simply tunnels the SSH connection through them.

Connection Multiplexing

Every time you open an SSH connection, the client and server perform a TCP handshake, key exchange, and authentication. Connection multiplexing (ControlMaster) reuses an existing connection for subsequent sessions, making new connections near-instantaneous.

Configuration

Add this to your ~/.ssh/config:

Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 600

Create the socket directory:

mkdir -p ~/.ssh/sockets
chmod 700 ~/.ssh/sockets
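
With all sockets in one directory, a short loop shows which multiplexed connections are currently alive. The `-S` flag points `ssh -O check` at a specific control socket (a hostname argument is still required syntactically, but with an explicit `-S` it's only a placeholder); the directory path matches the ControlPath above:

```shell
#!/usr/bin/env bash
# List active ControlMaster sockets and query each one's status.
SOCKET_DIR="${SOCKET_DIR:-$HOME/.ssh/sockets}"
mkdir -p "$SOCKET_DIR"

found=0
for sock in "$SOCKET_DIR"/*; do
    [ -S "$sock" ] || continue          # skip non-sockets (and empty globs)
    found=1
    echo "Socket: $sock"
    ssh -O check -S "$sock" placeholder 2>&1 | sed 's/^/  /'
done
[ "$found" -eq 1 ] || echo "No active multiplexed connections"
```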

How It Works

The first connection to a host becomes the "master" and creates a control socket at the ControlPath location. Subsequent connections to the same host find that socket and reuse the master's TCP session and authentication, skipping the handshake entirely. ControlPersist 600 keeps the master alive in the background for 10 minutes after the last session closes, so short gaps between commands don't force a new handshake.

The speed difference:

# First connection (normal speed)
time ssh prod-web exit
# real    0m0.834s

# Second connection (multiplexed — near instant)
time ssh prod-web exit
# real    0m0.043s

That's a 20x speedup. This makes a noticeable difference when running scripts that SSH to a server multiple times (deployment scripts, backup scripts, monitoring checks).

Managing Multiplexed Connections

# Check the status of a master connection
ssh -O check prod-web
# Master running (pid=12345)

# Close the master connection and terminate all sessions using it
ssh -O exit prod-web

# Stop accepting new multiplexed sessions (existing sessions keep running)
ssh -O stop prod-web

Persistent Tunnels with autossh

SSH tunnels die when the connection drops — network interruptions, idle timeouts, server reboots. If you rely on SSH tunnels for database access or monitoring, you need them to automatically reconnect. That's what autossh does.

Install autossh

sudo apt install autossh -y

Basic Usage

# Persistent tunnel for PostgreSQL access
autossh -M 0 -f -N -L 5432:127.0.0.1:5432 prod-web

The -M 0 flag disables autossh's built-in monitoring port and relies on SSH's own ServerAliveInterval to detect dead connections (which we configured in the SSH config).
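
If autossh isn't available, the core behavior is just a retry loop. Here's a minimal sketch — the function name and the retry cap are our own, and the cap exists only so the example terminates (autossh or a systemd service would retry indefinitely):

```shell
#!/usr/bin/env bash
# Minimal autossh substitute: rerun the tunnel command whenever it exits.
keep_tunnel_up() {
    local cmd="$1" max="${2:-3}" tries=0
    while [ "$tries" -lt "$max" ]; do
        $cmd                           # word-splitting is intentional
        tries=$((tries + 1))
        echo "Tunnel exited, restart $tries/$max"
        sleep "${RETRY_DELAY:-1}"
    done
}

# Real use (a large cap effectively means retry forever):
# keep_tunnel_up "ssh -N -L 5432:127.0.0.1:5432 prod-web" 1000000
```

autossh and the systemd approach below are still preferable for anything long-lived, since they also handle logging and boot-time startup.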

Systemd Service for Persistent Tunnels

For tunnels that should survive reboots, create a systemd service:

sudo tee /etc/systemd/system/ssh-tunnel-db.service > /dev/null << 'EOF'
[Unit]
Description=Persistent SSH tunnel to production database
After=network-online.target ssh.service
Wants=network-online.target

[Service]
Type=simple
User=deploy
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval=30" \
    -o "ServerAliveCountMax=3" \
    -o "ExitOnForwardFailure=yes" \
    -o "StrictHostKeyChecking=accept-new" \
    -L 5432:127.0.0.1:5432 \
    prod-web
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable ssh-tunnel-db
sudo systemctl start ssh-tunnel-db

# Check status
sudo systemctl status ssh-tunnel-db

Key options in the service:

- ExitOnForwardFailure=yes — the SSH process exits if the port forward can't be established, so systemd (via Restart=always) retries instead of leaving a useless connection open
- ServerAliveInterval=30 with ServerAliveCountMax=3 — detects a dead connection within about 90 seconds
- Restart=always with RestartSec=10 — systemd restarts the tunnel 10 seconds after any exit
- StrictHostKeyChecking=accept-new — accepts the host key on first connection without prompting, since an unattended service can't answer an interactive prompt

Practical Pattern: Secure Database Access

Here's the complete pattern for accessing PostgreSQL on your VPS without ever opening port 5432 to the internet. This combines several techniques from this guide, building on the database setup from our PostgreSQL installation guide.

On the VPS: PostgreSQL listens only on 127.0.0.1:5432. No firewall rule needed — the port isn't exposed at all.

# Verify PostgreSQL is only listening locally
sudo ss -tlnp | grep 5432
# Expected: 127.0.0.1:5432
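
You can turn that check into a small guard that fails loudly if the port is bound to a public interface. This assumes `ss` from iproute2 (present on modern Ubuntu); the function name is our own:

```shell
#!/usr/bin/env bash
# Warn if a port that should be localhost-only is listening publicly.
check_local_only() {
    local port="$1"
    if ss -tln 2>/dev/null | awk '{print $4}' \
        | grep -Eq "^(0\.0\.0\.0|\[::\]|\*):$port$"; then
        echo "WARNING: port $port is bound to a public interface"
        return 1
    fi
    echo "OK: port $port is not publicly bound"
}

# "|| true" keeps the script's exit status clean even when the check fails
check_local_only 5432 || true
```

Running it periodically (or in a deploy pipeline) catches the common mistake of a service's listen address being changed to 0.0.0.0 during an upgrade.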

In your local SSH config (~/.ssh/config):

Host prod-web
    HostName 203.0.113.10
    User deploy
    IdentityFile ~/.ssh/id_prod
    LocalForward 5432 127.0.0.1:5432

On your local machine:

# Connect — tunnel is automatically established
ssh prod-web

# In another terminal, use any PostgreSQL client:
psql -h 127.0.0.1 -p 5432 -U myapp -d myapp_production

# Or use a GUI client (pgAdmin, DBeaver) pointed at localhost:5432

This pattern is vastly more secure than opening port 5432 in the firewall and relying on PostgreSQL's pg_hba.conf for authentication. The database is never exposed to port scanners, brute-force password attempts, or unauthenticated probes from the internet.

Practical Pattern: Secure Admin Panel Access

Same pattern for any web-based admin panel — Portainer, Grafana, phpMyAdmin, Adminer, or your application's admin interface. Building on our Portainer installation guide:

# SSH config entry with multiple admin panel forwards.
# (ssh_config doesn't allow trailing comments, so each forward is
# labeled on the line above it.)
Host prod-admin
    HostName 203.0.113.10
    User deploy
    IdentityFile ~/.ssh/id_prod
    # Portainer
    LocalForward 9443 127.0.0.1:9443
    # Grafana
    LocalForward 3000 127.0.0.1:3000
    # Application admin
    LocalForward 8080 127.0.0.1:8080

# Connect once — all admin panels become available locally
ssh prod-admin

# Access in your browser:
# https://localhost:9443  → Portainer
# http://localhost:3000   → Grafana
# http://localhost:8080   → Application admin

No Nginx reverse proxy configuration needed. No SSL certificates for admin subdomains. No additional firewall rules. Just SSH.

SSH Escape Sequences

SSH has built-in escape sequences that are useful for managing tunnels and connections. Press Enter first, then the tilde (~) followed by the command character:

| Sequence | Action |
| --- | --- |
| ~. | Disconnect (useful when the connection is frozen) |
| ~^Z | Suspend SSH and return to the local shell |
| ~# | List all forwarded connections |
| ~C | Open the SSH command line (add forwards to an active session) |
| ~? | Show all escape sequences |

The ~C escape is particularly powerful — it opens an ssh> prompt where you can add port forwards to an existing connection without disconnecting:

# In an active SSH session, press Enter, then ~C
ssh> -L 3306:127.0.0.1:3306
Forwarding port.

# Now local port 3306 is forwarded through the existing connection

Security Best Practices for SSH Tunnels

SSH tunnels are powerful, and with power comes responsibility. Follow these practices:

- Bind forwards to 127.0.0.1 (the default) unless other machines genuinely need to reach them
- Close tunnels when you're done — especially remote forwards with GatewayPorts, which expose local services to the internet
- Prefer ProxyJump over agent forwarding for multi-hop access
- Use a separate SSH key per environment with IdentitiesOnly yes, so a compromised key has a limited blast radius
- Keep services bound to localhost on the server and reach them only through tunnels, instead of opening firewall ports
- Periodically audit long-lived tunnels (systemd services, autossh) and remove the ones you no longer need

Troubleshooting SSH Tunnels

Tunnel Connects but Port Forward Doesn't Work

# Check if the local port is actually listening (replace <local_port>)
ss -tlnp | grep :<local_port>

# Test the forward directly
curl -v http://127.0.0.1:<local_port>/

# SSH with verbose output to see forwarding details
ssh -v -L 5432:127.0.0.1:5432 prod-web 2>&1 | grep -i forward

"Address Already in Use" Error

# Find what's using the port (replace <port>)
sudo ss -tlnp | grep :<port>

# Kill the process or use a different local port
ssh -L 15432:127.0.0.1:5432 prod-web  # Use 15432 instead

Tunnel Drops Frequently

# Increase keepalive frequency in SSH config
Host prod-web
    ServerAliveInterval 30   # Send keepalive every 30 seconds
    ServerAliveCountMax 5    # Allow 5 missed keepalives

# Or use autossh for automatic reconnection
autossh -M 0 -f -N -L 5432:127.0.0.1:5432 prod-web

"channel N: open failed: administratively prohibited"

This means the SSH server is rejecting the tunnel. Check the server's SSH configuration:

# On the VPS, verify tunneling is allowed
sudo sshd -T | grep -E "allowtcpforwarding|gatewayports|permittunnel"

# AllowTcpForwarding should be "yes" (default)
# If set to "no", edit /etc/ssh/sshd_config:
AllowTcpForwarding yes

Summary

SSH is much more than ssh user@server. Here's what you've learned to do:

| Technique | Use case | Command pattern |
| --- | --- | --- |
| SSH config | Manage multiple servers | ~/.ssh/config file |
| Local forward | Access remote databases and admin panels | ssh -L local:remote user@host |
| Remote forward | Expose local dev servers, webhook testing | ssh -R remote:local user@host |
| Dynamic forward | SOCKS proxy for secure browsing | ssh -D 1080 user@host |
| ProxyJump | Access internal servers through a bastion | ssh -J bastion user@internal |
| Agent forwarding | Use local keys on remote servers | ssh -A user@host (use cautiously) |
| Multiplexing | Speed up repeat connections | ControlMaster auto in config |
| autossh | Persistent tunnels that auto-reconnect | autossh -M 0 -f -N -L ... |

These patterns work on any MassiveGRID server — Cloud VPS, Dedicated VPS, or Managed Dedicated. Start with the SSH config file to organize your connections, then add tunnels as needed. Every tunnel you create is one fewer port you need to expose in your firewall — and that's always a security win.