If you've ever waited 10 minutes for a GitHub Actions workflow to spin up a runner, install dependencies, and finally start your build, you know the frustration. GitHub-hosted runners are convenient, but they're shared, ephemeral, and limited — 2,000 free minutes per month for private repositories, then $0.008/minute after that. Self-hosted runners on your own Ubuntu VPS eliminate all three problems. Your builds start instantly on hardware you control, with tools pre-installed and persistent caches that survive between runs.
In this guide, you'll set up a production-ready GitHub Actions self-hosted runner on an Ubuntu VPS. We'll cover registration, systemd service configuration, build tool installation, security hardening, Docker-in-Docker builds, runner labels, monitoring, and running multiple runners on a single machine.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Why Self-Hosted Runners
GitHub-hosted runners are virtual machines that GitHub spins up on demand. They work, but they come with real limitations that affect teams building serious software:
- Usage limits: Free tier gets 2,000 minutes/month for private repos. Teams routinely exceed this.
- Cold starts: Every job starts from a fresh VM. Dependencies are downloaded and installed from scratch every time.
- Limited hardware: Standard runners get 2 vCPUs and 7 GB RAM. Large runners cost significantly more.
- No private network access: GitHub-hosted runners can't reach services inside your private network without exposing them to the internet or setting up complex tunnels.
- No persistent caches: While GitHub provides caching actions, they're limited to 10 GB per repo and require upload/download time.
Self-hosted runners solve every one of these problems. Your runner stays online, pre-loaded with build tools, connected to your private network, with the full disk serving as a persistent cache.
Cost Comparison: GitHub-Hosted vs Self-Hosted
Let's look at real numbers for a team running 100 builds per month, averaging 8 minutes each (800 total minutes):
| Factor | GitHub-Hosted (Linux) | Self-Hosted on VPS |
|---|---|---|
| Monthly cost (800 min) | Free tier covers it (2,000 min) | $7.99/mo (2 vCPU / 4 GB VPS) |
| Monthly cost (5,000 min) | $24/mo overage | $7.99/mo (same VPS) |
| Monthly cost (20,000 min) | $144/mo overage | $7.99/mo (same VPS) |
| Cold start time | 15–45 seconds | Near zero |
| Dependency install | Every run (1–5 min) | Pre-installed (0 sec) |
| Docker layer cache | None (unless using cache action) | Persistent on disk |
| Private network access | No (requires tunnels) | Yes (VPS is on your network) |
| Custom hardware | Fixed (2 vCPU / 7 GB) | Scalable (your choice) |
The cost advantage is dramatic for teams with heavy CI/CD usage. At 20,000 minutes per month, a self-hosted runner saves over $130/month — and it actually builds faster because everything is pre-cached.
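The overage figures in the table follow directly from GitHub's posted Linux rate, and you can sanity-check them yourself. A minimal sketch, assuming the $0.008/minute rate after 2,000 free minutes quoted above:

```shell
# Monthly GitHub-hosted overage cost for a given number of build minutes,
# assuming 2,000 free minutes and $0.008/min beyond that.
overage_cost() {
  # usage: overage_cost TOTAL_MINUTES  -> monthly overage in dollars
  awk -v m="$1" 'BEGIN { o = m - 2000; if (o < 0) o = 0; printf "%.2f\n", o * 0.008 }'
}

overage_cost 800     # prints 0.00  (free tier covers it)
overage_cost 5000    # prints 24.00
overage_cost 20000   # prints 144.00
```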
Prerequisites
You'll need:
- An Ubuntu 24.04 VPS with at least 2 vCPU, 4 GB RAM, and 40 GB SSD — deploy a MassiveGRID Cloud VPS (a 2 vCPU / 4 GB RAM instance runs a self-hosted GitHub Actions runner with room for build tools, Docker, and test dependencies)
- SSH access to the VPS (initial setup guide)
- A GitHub account with admin access to a repository or organization
- Basic firewall configured (security hardening guide)
Creating and Registering the Runner
Step 1: Create a Dedicated User
Never run the GitHub Actions runner as root. Create a dedicated user with limited permissions:
sudo useradd -m -s /bin/bash github-runner
sudo passwd -l github-runner
sudo usermod -aG docker github-runner
The -l flag locks the password (no direct login). We add the user to the docker group so it can build container images without sudo. We'll install Docker shortly.
Step 2: Get the Registration Token
In your GitHub repository, go to Settings → Actions → Runners → New self-hosted runner. GitHub generates a time-limited registration token. Alternatively, use the GitHub CLI:
# Install GitHub CLI
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update && sudo apt install gh -y
# Authenticate
gh auth login
# Get registration token for a repository
gh api -X POST repos/YOUR_ORG/YOUR_REPO/actions/runners/registration-token --jq '.token'
For organization-level runners (shared across repos):
gh api -X POST orgs/YOUR_ORG/actions/runners/registration-token --jq '.token'
Step 3: Download and Configure the Runner
Switch to the runner user and download the latest runner package:
# Switch to runner user
sudo su - github-runner
# Create a directory for the runner
mkdir actions-runner && cd actions-runner
# Download the latest runner (check GitHub for current version)
curl -o actions-runner-linux-x64-2.321.0.tar.gz -L \
https://github.com/actions/runner/releases/download/v2.321.0/actions-runner-linux-x64-2.321.0.tar.gz
# Verify the hash (replace with actual hash from GitHub)
echo "HASH_FROM_GITHUB actions-runner-linux-x64-2.321.0.tar.gz" | shasum -a 256 -c
# Extract
tar xzf ./actions-runner-linux-x64-2.321.0.tar.gz
Step 4: Register the Runner
./config.sh \
--url https://github.com/YOUR_ORG/YOUR_REPO \
--token YOUR_REGISTRATION_TOKEN \
--name "ubuntu-vps-runner-01" \
--labels "self-hosted,linux,x64,ubuntu-24.04,vps" \
--work "_work" \
--replace
The --labels flag is important — it lets you target this specific runner from your workflows. The --work flag sets the working directory for job execution. The --replace flag allows re-registration if a runner with the same name already exists.
You'll see output confirming the runner is configured:
√ Runner successfully added
√ Runner connection is good
√ Settings Saved.
Installing as a Systemd Service
Running the runner manually with ./run.sh works for testing, but you need it to survive reboots and recover from crashes. The runner package includes a service installer:
# Exit back to your sudo user
exit
# Install the service (runs as github-runner user)
cd /home/github-runner/actions-runner
sudo ./svc.sh install github-runner
# Start the service
sudo ./svc.sh start
# Check status
sudo ./svc.sh status
This creates a systemd service at /etc/systemd/system/actions.runner.*.service. You can also manage it directly with systemctl:
# Check service status
sudo systemctl status actions.runner.*.service
# View logs
sudo journalctl -u actions.runner.*.service -f
# Enable on boot (the svc.sh install already does this)
sudo systemctl enable actions.runner.*.service
For more on managing systemd services, see our systemd services guide.
Installing Persistent Build Tools
This is where self-hosted runners shine. Unlike GitHub-hosted runners that install tools every run, you install them once and they persist forever.
Node.js via nvm
sudo su - github-runner
# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
# Install Node.js LTS
nvm install --lts
nvm alias default lts/*
# Install common global tools
npm install -g yarn pnpm typescript
Python with pip and venv
sudo apt install -y python3 python3-pip python3-venv python3-dev
# Install common tools
sudo su - github-runner
# Ubuntu 24.04 marks the system Python as externally managed (PEP 668),
# so a bare "pip3 install --user" is rejected. Use a venv per project,
# or pass --break-system-packages for standalone CLI tools:
pip3 install --user --break-system-packages pytest black flake8 mypy
Docker
If you haven't installed Docker yet, follow our Docker installation guide. The key step is ensuring the runner user is in the docker group:
# Install Docker
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add runner user to docker group
sudo usermod -aG docker github-runner
# Restart the runner service (from the runner directory) so it picks up the new group
cd /home/github-runner/actions-runner
sudo ./svc.sh stop
sudo ./svc.sh start
Additional Build Tools
# Build essentials
sudo apt install -y build-essential git curl wget unzip jq
# Go (for Go projects)
sudo snap install go --classic
# Rust (for Rust projects)
sudo su - github-runner
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
Security Considerations
Self-hosted runners execute code from your repository. If your repository accepts pull requests from forks, a malicious PR could run arbitrary code on your VPS. This is the single biggest security concern.
Rule 1: Never Use Self-Hosted Runners on Public Repos
For public repositories, anyone can fork the repo, modify the workflow, and submit a PR that runs malicious code on your runner. GitHub strongly advises against this, and so do we.
Rule 2: Restrict Workflow Permissions
In your repository settings, go to Settings → Actions → General and set:
- Fork pull request workflows: "Require approval for all outside collaborators"
- Workflow permissions: "Read repository contents and packages permissions"
Rule 3: Isolate the Runner User
The dedicated github-runner user should have minimal permissions:
# Verify the user has no sudo access
sudo -l -U github-runner
# Should show: "User github-runner is not allowed to run sudo"
# Set restrictive home directory permissions
sudo chmod 750 /home/github-runner
# Limit what the user can see
sudo setfacl -m u:github-runner:--- /etc/shadow
Rule 4: Network Segmentation
Use UFW to restrict what the runner can access. For a full deep dive on firewall rules, see our UFW advanced rules guide.
# UFW's default outbound policy is "allow", so allow rules alone don't
# restrict anything. Deny outbound by default first:
sudo ufw default deny outgoing
# Allow outbound to GitHub (required for runner communication)
# Current GitHub Actions IP ranges: https://api.github.com/meta
sudo ufw allow out to 140.82.112.0/20
sudo ufw allow out to 143.55.64.0/20
# Allow outbound HTTP/HTTPS (for package downloads during builds)
sudo ufw allow out 80/tcp
sudo ufw allow out 443/tcp
# Allow outbound DNS
sudo ufw allow out 53/udp
Rule 5: Clean Up After Jobs
By default, the runner cleans the workspace between jobs. You can enforce this in your workflow:
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
        with:
          clean: true
      # ... your build steps ...
      - name: Cleanup
        if: always()
        run: |
          docker system prune -f
          rm -rf $RUNNER_WORKSPACE/*
Running Alongside Production Workloads
If your VPS also hosts production services (web server, database, application), CI/CD builds can compete for resources. Compilation, testing, and Docker image building routinely spike CPU to 100% for minutes at a time.
Builds affecting production? CI/CD builds are inherently bursty — compilation, testing, and Docker image building spike CPU to 100%. Dedicated CPU delivers predictable build times — from $19.80/mo.
If you keep both on the same VPS, use systemd resource controls to limit the runner:
sudo systemctl edit actions.runner.*.service
Add these directives in the override file:
[Service]
CPUQuota=150%
MemoryMax=2G
IOWeight=50
Nice=10
This limits the runner to 1.5 CPU cores (on a 2-core VPS, that's 75% of total), 2 GB RAM, lower I/O priority, and nice value 10 (lower scheduling priority). Production services get priority.
sudo systemctl daemon-reload
sudo systemctl restart actions.runner.*.service
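The percent-of-one-core semantics of CPUQuota trip people up, so here is a tiny illustrative helper (not part of systemd) that converts a quota value into a share of total CPU for a given core count:

```shell
# CPUQuota is expressed in percent of a single core: 150% means 1.5 cores.
quota_share() {
  # usage: quota_share QUOTA_PERCENT CORE_COUNT -> share of total CPU
  awk -v q="$1" -v c="$2" 'BEGIN { printf "%.0f%%\n", q / c }'
}

quota_share 150 2   # prints 75%  (matches the 2-core example above)
quota_share 200 4   # prints 50%
```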
Using Labels to Target Specific Runners
Labels are how workflows select runners. When you registered the runner, you assigned labels like self-hosted,linux,x64,ubuntu-24.04,vps. Use these in your workflow files:
# .github/workflows/build.yml
name: Build and Test

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Build
        run: npm run build
You can add custom labels for different purposes. Note that config.sh refuses to run on a runner that's already registered. Either edit the labels in the GitHub UI (Settings → Actions → Runners → select the runner) or remove and re-register with an expanded label set:
# Re-register with additional labels (stop the service first)
cd /home/github-runner/actions-runner
./config.sh remove --token REMOVAL_TOKEN
./config.sh --url https://github.com/YOUR_ORG/YOUR_REPO \
--token NEW_REGISTRATION_TOKEN \
--name "ubuntu-vps-runner-01" \
--labels "self-hosted,linux,x64,ubuntu-24.04,vps,docker,node-20,high-memory" \
--replace
Then target them in workflows:
jobs:
  docker-build:
    runs-on: [self-hosted, docker]
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .

  unit-tests:
    runs-on: [self-hosted, node-20]
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
Docker-in-Docker for Containerized Builds
Many CI/CD pipelines build Docker images. Because the runner user is in the docker group, jobs talk to the host's Docker daemon directly; no nested Docker daemon is required:
jobs:
  build-and-push:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Build image
        run: |
          docker build \
            --cache-from myapp:latest \
            -t myapp:${{ github.sha }} \
            -t myapp:latest \
            .
      - name: Push image
        run: |
          docker push myapp:${{ github.sha }}
          docker push myapp:latest
The key advantage here is the --cache-from flag. On a self-hosted runner, Docker layer caches persist between builds. A build that takes 8 minutes on GitHub-hosted runners might complete in 30 seconds when only a few layers changed.
Multi-Stage Build Caching
For multi-stage Dockerfiles, cache every stage explicitly:
jobs:
  build:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - name: Build with BuildKit caching
        run: |
          export DOCKER_BUILDKIT=1
          docker build \
            --build-arg BUILDKIT_INLINE_CACHE=1 \
            --cache-from myapp:builder \
            --cache-from myapp:latest \
            --target builder \
            -t myapp:builder \
            .
          docker build \
            --build-arg BUILDKIT_INLINE_CACHE=1 \
            --cache-from myapp:builder \
            --cache-from myapp:latest \
            -t myapp:${{ github.sha }} \
            -t myapp:latest \
            .
Preventing Docker Disk Bloat
Docker images and build caches accumulate fast. Schedule cleanup:
# Add to crontab for github-runner user
sudo -u github-runner crontab -e
# Clean up Docker daily at 3 AM
0 3 * * * docker system prune -f --filter "until=72h"
0 3 * * 0 docker builder prune -f --keep-storage=10G
For more on scheduling automated tasks, see our cron jobs guide.
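Unconditional pruning throws away useful layer cache. A hedged sketch of a threshold-gated alternative (the prune command is commented out so the sketch is safe to run anywhere; wire in real df output on the runner):

```shell
# Run cleanup only when disk usage crosses a threshold.
prune_if_needed() {
  # usage: prune_if_needed USED_PERCENT THRESHOLD_PERCENT
  if [ "$1" -ge "$2" ]; then
    echo "pruning (disk at $1%)"
    # docker system prune -f --filter "until=72h"   # uncomment on the runner
  else
    echo "disk ok ($1% < $2%)"
  fi
}

prune_if_needed 85 80   # prints: pruning (disk at 85%)
prune_if_needed 60 80   # prints: disk ok (60% < 80%)
```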
Monitoring Runner Performance
You need visibility into whether your runner is keeping up with demand. A few key metrics:
Job Queue Time
If jobs sit in "Queued" state, your runner is overloaded. Check from the GitHub API:
# List queued workflow runs
gh api repos/YOUR_ORG/YOUR_REPO/actions/runs \
--jq '.workflow_runs[] | select(.status == "queued") | {id: .id, name: .name, created: .created_at}'
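If you just want a queue count for alerting, you can do it without jq against a saved API response. A sketch; the JSON below is a made-up sample shaped like the real payload:

```shell
# Count queued workflow runs in an API response read from stdin.
count_queued() {
  grep -o '"status": *"queued"' | wc -l | tr -d ' '
}

sample='{"workflow_runs":[{"id":1,"status":"queued"},{"id":2,"status":"completed"},{"id":3,"status":"queued"}]}'
printf '%s' "$sample" | count_queued   # prints 2
```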
System Resource Usage During Builds
Monitor CPU, RAM, and disk during builds. Create a simple monitoring script:
#!/bin/bash
# /home/github-runner/monitor-runner.sh
# Log somewhere the runner user can write — /var/log needs root.
LOG_FILE="/home/github-runner/runner-metrics.log"

while true; do
  TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
  CPU=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}')
  MEM=$(free -m | awk 'NR==2{printf "%.1f", $3*100/$2}')
  DISK=$(df -h /home/github-runner | awk 'NR==2{print $5}' | tr -d '%')
  DOCKER_DISK=$(docker system df --format '{{.Size}}' 2>/dev/null | head -1)
  echo "$TIMESTAMP cpu=$CPU% mem=$MEM% disk=$DISK% docker=$DOCKER_DISK" >> "$LOG_FILE"
  sleep 60
done
Run this as a systemd service alongside the runner. For comprehensive monitoring setup, see our VPS monitoring guide.
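To summarize the log offline, here is a sketch that assumes the field layout the script writes (timestamp, then cpu=..%, mem=..%, disk=..%, docker=..):

```shell
# Report peak memory usage from runner-metrics-style log lines on stdin.
peak_mem() {
  awk '{ gsub(/mem=|%/, "", $3); if ($3 + 0 > max) max = $3 }
       END { printf "peak mem: %.1f%%\n", max }'
}

printf '%s\n' \
  '2025-01-01T03:00:00Z cpu=12.0% mem=41.3% disk=55% docker=1.2GB' \
  '2025-01-01T03:01:00Z cpu=88.5% mem=79.9% disk=55% docker=1.2GB' \
  '2025-01-01T03:02:00Z cpu=35.2% mem=60.1% disk=56% docker=1.3GB' | peak_mem
# prints: peak mem: 79.9%
```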
GitHub Runner Status Check
# Check runner status via API
gh api repos/YOUR_ORG/YOUR_REPO/actions/runners \
--jq '.runners[] | {name: .name, status: .status, busy: .busy, labels: [.labels[].name]}'
Expected output:
{
  "name": "ubuntu-vps-runner-01",
  "status": "online",
  "busy": false,
  "labels": ["self-hosted", "linux", "x64", "ubuntu-24.04", "vps"]
}
Running Multiple Runners on One VPS
A single runner processes one job at a time. If you need parallelism — for example, running tests on a PR while building a Docker image for main — install multiple runners:
# Create directories for each runner
sudo su - github-runner
mkdir -p runner-01 runner-02
# Download and extract runner in each directory
for dir in runner-01 runner-02; do
cd /home/github-runner/$dir
curl -o actions-runner-linux-x64-2.321.0.tar.gz -L \
https://github.com/actions/runner/releases/download/v2.321.0/actions-runner-linux-x64-2.321.0.tar.gz
tar xzf actions-runner-linux-x64-2.321.0.tar.gz
done
# Configure each with a unique name
cd /home/github-runner/runner-01
./config.sh --url https://github.com/YOUR_ORG/YOUR_REPO \
--token TOKEN_1 \
--name "ubuntu-vps-runner-01" \
--labels "self-hosted,linux,x64,runner-01" \
--work "_work"
cd /home/github-runner/runner-02
./config.sh --url https://github.com/YOUR_ORG/YOUR_REPO \
--token TOKEN_2 \
--name "ubuntu-vps-runner-02" \
--labels "self-hosted,linux,x64,runner-02" \
--work "_work"
Install each as a separate systemd service:
# Exit to sudo user
exit
cd /home/github-runner/runner-01
sudo ./svc.sh install github-runner
sudo ./svc.sh start
cd /home/github-runner/runner-02
sudo ./svc.sh install github-runner
sudo ./svc.sh start
Each runner gets its own systemd service, its own work directory, and processes jobs independently. On a 4 vCPU / 8 GB RAM VPS, two runners can handle parallel builds comfortably.
Resource Limits for Multiple Runners
When running multiple runners, set resource limits to prevent one build from starving the other:
# For each runner service
sudo systemctl edit actions.runner.YOUR_ORG-YOUR_REPO.ubuntu-vps-runner-01.service
[Service]
CPUQuota=100%
MemoryMax=3G
sudo systemctl edit actions.runner.YOUR_ORG-YOUR_REPO.ubuntu-vps-runner-02.service
[Service]
CPUQuota=100%
MemoryMax=3G
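A quick budget check for those MemoryMax values: per-runner caps times runner count should leave headroom for the OS and the Docker daemon. The 8 GB figure matches the 4 vCPU / 8 GB example above; adjust for your VPS:

```shell
# Memory left over after all runners hit their MemoryMax caps.
headroom_gb() {
  # usage: headroom_gb TOTAL_GB PER_RUNNER_GB RUNNER_COUNT
  echo $(( $1 - $2 * $3 ))
}

headroom_gb 8 3 2   # prints 2  -> 2 GB left for the system
```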
Complete Workflow Example
Here's a production workflow that builds, tests, and deploys a Node.js application using a self-hosted runner:
# .github/workflows/deploy.yml
name: Build, Test, and Deploy

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js
        run: |
          export NVM_DIR="$HOME/.nvm"
          [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
          node --version
          npm --version
      - name: Install dependencies
        run: |
          export NVM_DIR="$HOME/.nvm"
          [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
          npm ci
      - name: Run linter
        run: |
          export NVM_DIR="$HOME/.nvm"
          [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
          npm run lint
      - name: Run tests
        run: |
          export NVM_DIR="$HOME/.nvm"
          [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
          npm test

  build-and-push:
    needs: test
    runs-on: [self-hosted, docker]
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: |
          docker build \
            --cache-from myapp:latest \
            -t myapp:${{ github.sha }} \
            -t myapp:latest \
            .
      - name: Push to registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u deploy --password-stdin
          docker tag myapp:${{ github.sha }} registry.example.com/myapp:${{ github.sha }}
          docker tag myapp:latest registry.example.com/myapp:latest
          docker push registry.example.com/myapp:${{ github.sha }}
          docker push registry.example.com/myapp:latest

  deploy:
    needs: build-and-push
    runs-on: [self-hosted, linux]
    steps:
      - name: Deploy to production
        run: |
          ssh deploy@production-server \
            "docker pull registry.example.com/myapp:${{ github.sha }} && \
             docker compose -f /opt/myapp/docker-compose.yml up -d"
Note: On self-hosted runners, nvm isn't automatically loaded in non-interactive shells. Source it explicitly in each step, or add it to the runner user's .bash_profile.
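A third option: the runner also reads optional .env and .path files from its application directory before each job, and the .path file's contents become the job's PATH. A hedged sketch that puts the nvm-managed Node on the job PATH (paths assume the layout from this guide):

```shell
# As github-runner, inside ~/actions-runner.
# Include the existing $PATH, since the .path file replaces PATH for jobs.
. "$HOME/.nvm/nvm.sh"
echo "$(dirname "$(nvm which default)"):$PATH" > .path
# Restart the runner service afterwards so the new PATH is picked up.
```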
Updating the Runner
GitHub periodically releases new runner versions. The runner auto-updates by default, but if you've disabled auto-update or need to update manually:
# Stop the service (from the runner directory)
cd /home/github-runner/actions-runner
sudo ./svc.sh stop
# As the runner user
sudo su - github-runner
cd actions-runner
# Download the new version
curl -o actions-runner-linux-x64-NEW_VERSION.tar.gz -L \
https://github.com/actions/runner/releases/download/vNEW_VERSION/actions-runner-linux-x64-NEW_VERSION.tar.gz
# Extract (overwrites existing files)
tar xzf actions-runner-linux-x64-NEW_VERSION.tar.gz
# Exit and restart
exit
sudo ./svc.sh start
Troubleshooting Common Issues
Runner Shows "Offline" in GitHub
# Check if the service is running
sudo systemctl status actions.runner.*.service
# Check runner logs
sudo journalctl -u actions.runner.*.service --since "10 minutes ago"
# Common fix: restart the service
sudo systemctl restart actions.runner.*.service
Permission Denied Errors in Builds
# Check workspace ownership
ls -la /home/github-runner/actions-runner/_work/
# Fix ownership if needed
sudo chown -R github-runner:github-runner /home/github-runner/actions-runner/_work/
Docker Socket Permission Denied
# Verify docker group membership
id github-runner
# If docker group is missing, add and restart
sudo usermod -aG docker github-runner
sudo systemctl restart actions.runner.*.service
Disk Space Running Low
# Check disk usage
df -h /home/github-runner
# Clean up old workspaces
sudo rm -rf /home/github-runner/actions-runner/_work/*/
# Clean Docker
docker system prune -af --volumes
Managed CI/CD Infrastructure
Running self-hosted runners gives you speed and control, but it also means you're responsible for updates, security patches, monitoring, and disk management. If you'd rather focus on your code and leave the infrastructure to experts:
Want fully managed CI/CD infrastructure? MassiveGRID Managed Dedicated Cloud Servers — we handle server provisioning, security hardening, monitoring, updates, and 24/7 incident response. You focus on shipping code.
Summary
Self-hosted GitHub Actions runners on an Ubuntu VPS deliver faster builds, unlimited minutes, persistent caches, and private network access — at a fraction of the cost of GitHub-hosted runners. Here's a quick reference for the setup:
| Step | Command / Action |
|---|---|
| Create runner user | sudo useradd -m -s /bin/bash github-runner |
| Get registration token | GitHub UI or gh api |
| Download runner | curl -o ... -L https://github.com/actions/runner/releases/... |
| Configure | ./config.sh --url ... --token ... --name ... --labels ... |
| Install service | sudo ./svc.sh install github-runner |
| Start service | sudo ./svc.sh start |
| Install build tools | Node.js, Python, Docker, Go, Rust — persistent |
| Target in workflow | runs-on: [self-hosted, linux, x64] |
Your CI/CD pipeline is only as reliable as the infrastructure it runs on. Start with a MassiveGRID Cloud VPS for single-runner setups, upgrade to a Dedicated VPS when build times become unpredictable, or go fully managed with a Managed Dedicated Cloud Server when you want someone else to handle the infrastructure entirely.