Varnish Cache is a specialized HTTP accelerator designed to sit in front of your web server and serve cached responses directly from RAM. Unlike general-purpose caching layers, Varnish was built from the ground up for one purpose: making websites extremely fast by eliminating redundant backend processing. When a visitor requests a page that Varnish has already cached, the response is served from memory in microseconds rather than milliseconds, bypassing your application server, database queries, and PHP processing entirely. For high-traffic sites running on an Ubuntu VPS, Varnish can transform a struggling server into one that handles thousands of concurrent requests without breaking a sweat.
The performance gains are dramatic. A typical WordPress site without caching might handle 50 to 100 concurrent users before response times degrade. Add Varnish with even 1 GB of allocated cache memory, and that same server can comfortably serve 500 or more concurrent users with sub-10ms response times for cached content. This guide walks through installing and configuring Varnish Cache on Ubuntu 24.04, including the complete port stack reconfiguration, VCL configuration language basics, WordPress-specific tuning, SSL termination architecture, and cache invalidation strategies.
What Varnish Cache Does: The HTTP Accelerator Concept
Varnish Cache operates as a reverse proxy that intercepts HTTP requests before they reach your origin server. When a request arrives, Varnish checks its in-memory cache. If a valid cached response exists (a cache hit), Varnish returns it directly without contacting the backend at all. If no cached version exists (a cache miss), Varnish forwards the request to the backend, stores the response in memory, and returns it to the client. Subsequent requests for the same resource are served from RAM.
The key distinction from other caching approaches is that Varnish stores complete HTTP responses in virtual memory, leveraging the operating system's memory management to keep frequently accessed objects in physical RAM. This means cached responses are served at memory speed rather than disk speed. Varnish uses its own domain-specific configuration language called VCL (Varnish Configuration Language), which compiles to native C code at startup, giving it extraordinary flexibility without sacrificing performance.
For a typical dynamic website, Varnish dramatically reduces the load on your application stack. Instead of executing PHP code, querying MySQL, and rendering templates for every request, those operations happen once per cache lifetime. Varnish handles everything else, freeing your backend resources for the requests that genuinely require dynamic processing — logged-in user pages, form submissions, API calls, and administrative functions.
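The hit/miss flow described above can be sketched as a tiny in-memory cache. This is illustrative Python, not how Varnish is actually implemented — real Varnish adds concurrency control, Vary handling, grace mode, and eviction — but it captures the core idea: on a hit the backend is never called.

```python
import time

class TinyHTTPCache:
    """Toy model of an HTTP accelerator's hit/miss decision."""

    def __init__(self, default_ttl: float = 600):
        self.store = {}               # url -> (response, expiry timestamp)
        self.default_ttl = default_ttl

    def fetch(self, url, backend):
        entry = self.store.get(url)
        if entry and entry[1] > time.monotonic():
            return entry[0], "HIT"    # served from memory, backend untouched
        response = backend(url)       # cache miss: ask the origin server
        self.store[url] = (response, time.monotonic() + self.default_ttl)
        return response, "MISS"
```

A second request for the same URL within the TTL returns instantly without invoking the backend at all — the same reason Varnish can answer in microseconds.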
Varnish vs Nginx Caching vs Redis Page Cache
Before committing to Varnish, it helps to understand how it compares to other caching approaches you might already be using or considering.
Nginx FastCGI Cache is built into Nginx and caches the output of PHP-FPM on disk or in a tmpfs mount. It is simpler to configure and works well for moderate traffic. However, it lacks Varnish's sophisticated cache invalidation mechanisms, its Grace mode (serving stale content while fetching fresh copies), and the granular control that VCL provides. Nginx caching is typically configured through directives in the Nginx configuration rather than a dedicated caching language.
Redis as a full-page cache (often via WordPress plugins like WP Redis or through application-level caching) stores rendered pages in a Redis key-value store. Redis is excellent for object caching and session storage, and it can be used for page caching as well. However, Redis page caching typically operates at the application level, meaning PHP still needs to initialize and check the cache. Varnish intercepts requests before they ever reach the application. For a detailed guide on setting up Redis, see our walkthrough on installing Redis caching on Ubuntu VPS.
Varnish Cache is the most performant option for full-page caching. It excels when you need to serve large volumes of anonymous traffic, when you need fine-grained cache control through VCL, when you need features like grace mode, health checks, and sophisticated purge mechanisms, or when you want to offload as much work as possible from your application stack. The tradeoff is increased architectural complexity — Varnish does not handle SSL natively, so you need an SSL termination layer in front of it.
The ideal stack for a high-traffic site often combines all three: Varnish as the front-line HTTP accelerator for full-page caching, Redis for application-level object caching and session storage, and Nginx as both the SSL terminator and the backend web server.
Architecture: Varnish in Front of Nginx
The standard production architecture places Varnish between the client and your Nginx web server. The request flow looks like this:
Client → Nginx (SSL on :443) → Varnish (:80) → Nginx (backend on :8080) → PHP-FPM
This is commonly called the "SSL sandwich" because Nginx wraps around Varnish on both sides. The front-facing Nginx instance handles SSL termination and forwards decrypted HTTP requests to Varnish on port 80. Varnish processes the request against its cache and, if needed, forwards it to the backend Nginx instance listening on port 8080, which serves the actual website content through PHP-FPM or static files.
For HTTP-only setups (development or sites behind a separate load balancer handling SSL), the architecture simplifies to:
Client → Varnish (:80) → Nginx (backend on :8080)
This guide covers both configurations, starting with the simpler HTTP setup and then adding the SSL termination layer.
Prerequisites
Before installing Varnish, you need a working Nginx setup. This guide assumes you have Nginx installed and serving your site on port 80. If you are starting from scratch, follow our guide on setting up Nginx as a reverse proxy on Ubuntu VPS first.
You should have root or sudo access to your Ubuntu 24.04 VPS, at least 4 GB of RAM (to allocate 1 GB to Varnish while leaving sufficient memory for Nginx, PHP-FPM, and your database), and a functioning website served by Nginx on port 80.
Verify your current Nginx setup is working before making any changes:
sudo systemctl status nginx
curl -I http://localhost
You should see a 200 OK response with Nginx listed as the server. Confirm your site loads correctly in a browser. We are about to reassign ports, so having a known-good starting point is essential.
Installing Varnish on Ubuntu 24.04
Ubuntu 24.04 includes Varnish in its default repositories, but it may not be the latest version. For production environments, the Varnish Cache project provides an official package repository with current releases. We will use the official repository to get Varnish 7.x:
sudo apt update
sudo apt install -y debian-archive-keyring curl gnupg
curl -s https://packagecloud.io/varnishcache/varnish75/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/varnish-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/varnish-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish75/ubuntu/ noble main" | sudo tee /etc/apt/sources.list.d/varnish.list
sudo apt update
sudo apt install -y varnish
Verify the installation:
varnishd -V
You should see the Varnish version output. Do not start Varnish yet — we need to reconfigure the port stack first.
Port Stack Reconfiguration: Varnish on :80, Nginx on :8080
The most critical step in deploying Varnish is reassigning ports. Currently, Nginx listens on port 80. We need Varnish to take over port 80 (receiving incoming HTTP requests) and move Nginx to port 8080 (serving as the backend).
Step 1: Reconfigure Nginx to Listen on Port 8080
Edit your Nginx server block configuration. If your site configuration is in /etc/nginx/sites-available/your-site:
sudo nano /etc/nginx/sites-available/your-site
Change the listen directives from port 80 to 8080:
server {
listen 8080;
listen [::]:8080;
server_name your-domain.com www.your-domain.com;
root /var/www/your-site;
index index.php index.html;
# ... rest of your configuration
}
If you have a default server block in /etc/nginx/sites-enabled/default that also listens on port 80, either disable it or change its port as well:
sudo rm /etc/nginx/sites-enabled/default
Test and reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
Verify Nginx is now on port 8080:
curl -I http://localhost:8080
Step 2: Configure Varnish to Listen on Port 80
On Ubuntu 24.04 with systemd, Varnish's listening port is configured via a systemd override. Create an override file:
sudo systemctl edit varnish
Add the following content between the comment markers:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd \
-a :80 \
-a localhost:8443,proxy \
-f /etc/varnish/default.vcl \
-s malloc,1G
The empty ExecStart= line clears the default command before defining the new one. The -a :80 flag tells Varnish to accept plain HTTP connections on port 80; the second -a keeps the package's default PROXY-protocol listener on localhost:8443, which is used when a TLS terminator such as Hitch sits in front and is harmless to leave in place. The -s malloc,1G flag allocates 1 GB of RAM for the cache storage. We will discuss memory allocation sizing later in this guide.
Step 3: Configure the Backend in VCL
Edit the default VCL configuration:
sudo nano /etc/varnish/default.vcl
Set the backend to point to Nginx on port 8080:
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "8080";
.connect_timeout = 5s;
.first_byte_timeout = 90s;
.between_bytes_timeout = 2s;
}
Step 4: Start Varnish
sudo systemctl daemon-reload
sudo systemctl enable varnish
sudo systemctl start varnish
Verify the port stack:
sudo ss -tlnp | grep -E ':(80|8080)\b'
You should see Varnish on port 80 and Nginx on port 8080. Test with curl:
curl -I http://localhost
Look for the Via header in the response — it should contain varnish, confirming requests are flowing through Varnish to Nginx.
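If you want to script this check, a small helper can parse raw header output from curl -I or an HTTP client. The sample headers in the test are hypothetical; the function simply looks for a Via header mentioning Varnish.

```python
def served_by_varnish(raw_headers: str) -> bool:
    """Return True if a Via header in raw `curl -I` output mentions Varnish."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "via" and "varnish" in value.lower():
            return True
    return False
```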
VCL Basics: vcl_recv, vcl_backend_response, vcl_deliver
VCL (Varnish Configuration Language) is what makes Varnish extraordinarily powerful. VCL defines how Varnish handles each request through a series of subroutines that execute at different stages of request processing. Understanding the three core subroutines is essential.
vcl_recv runs when a request is received from the client, before Varnish checks the cache. This is where you decide whether to look up the request in cache, pass it directly to the backend, or modify the request. Common tasks include normalizing URLs, stripping tracking query parameters, deciding which cookies matter, and routing requests.
vcl_backend_response runs after Varnish receives a response from the backend, before it stores the response in cache. This is where you set cache durations (TTL), decide whether a response should be cached at all, and modify response headers. You control how long objects live in cache and configure grace periods for serving stale content.
vcl_deliver runs just before Varnish sends a response to the client. This is where you add or remove response headers, set debugging information (like whether the response was a cache hit or miss), and clean up any internal headers you do not want exposed.
The request lifecycle flows through these subroutines in order: vcl_recv → cache lookup → (on miss) backend fetch → vcl_backend_response → cache store → vcl_deliver. On a cache hit, the flow skips directly from the lookup to vcl_deliver.
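As a rough sketch, the lifecycle can be modeled as a function returning the subroutines a request passes through. This is a simplification — the pass, pipe, and synth paths, plus vcl_hash and vcl_backend_fetch, are omitted for brevity:

```python
def request_flow(cached: bool) -> list:
    """Simplified VCL subroutine order for a cache hit vs. a miss."""
    steps = ["vcl_recv", "cache lookup"]
    if not cached:
        # On a miss, Varnish fetches from the backend and stores the result
        steps += ["backend fetch", "vcl_backend_response", "cache store"]
    steps.append("vcl_deliver")
    return steps
```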
WordPress VCL: Cookie Handling, Admin Bypass, and Static Files
WordPress is notoriously unfriendly to HTTP caching out of the box. It sets cookies on nearly every response, and Varnish's default behavior is to skip caching for any request or response that includes cookies. A well-tuned WordPress VCL handles this by stripping unnecessary cookies, bypassing cache for admin and logged-in users, and ensuring static assets are cached aggressively.
Here is a production-ready WordPress VCL configuration:
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "8080";
.connect_timeout = 5s;
.first_byte_timeout = 90s;
.between_bytes_timeout = 2s;
.probe = {
.url = "/";
.timeout = 3s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
sub vcl_recv {
# Pass WordPress admin and login pages
if (req.url ~ "^/wp-(admin|login|cron)" || req.url ~ "preview=true") {
return (pass);
}
# Pass WooCommerce dynamic pages
if (req.url ~ "^/(cart|my-account|checkout|addons|store)") {
return (pass);
}
# Pass POST requests
if (req.method == "POST") {
return (pass);
}
# Pass requests with authentication cookies
if (req.http.Cookie ~ "wordpress_logged_in_|wordpress_sec_|woocommerce_") {
return (pass);
}
# Strip all cookies for static files
if (req.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|webp|svg|woff|woff2|ttf|eot)(\?.*)?$") {
unset req.http.Cookie;
return (hash);
}
# Strip tracking and analytics cookies for cacheable pages
if (req.http.Cookie) {
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_ga[^=]*|_gid|_gat|__utm[^=]*|_fbp|_fbc|mp_[^=]*)=[^;]*", "");
set req.http.Cookie = regsuball(req.http.Cookie, "^;\s*", "");
if (req.http.Cookie ~ "^\s*$") {
unset req.http.Cookie;
}
}
# Strip WordPress test and comment cookies for anonymous visitors
if (req.http.Cookie) {
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(wp-settings-[^=]*|wp-settings-time-[^=]*|wordpress_test_cookie|comment_author[^=]*)=[^;]*", "");
set req.http.Cookie = regsuball(req.http.Cookie, "^;\s*", "");
if (req.http.Cookie ~ "^\s*$") {
unset req.http.Cookie;
}
}
return (hash);
}
sub vcl_backend_response {
# Never modify passed (uncacheable) responses — login and checkout
# replies must keep their Set-Cookie headers
if (bereq.uncacheable) {
return (deliver);
}
# Cache static files for 30 days
if (bereq.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|webp|svg|woff|woff2|ttf|eot)(\?.*)?$") {
set beresp.ttl = 30d;
unset beresp.http.Set-Cookie;
}
# Cache HTML pages for 10 minutes
if (beresp.http.Content-Type ~ "text/html") {
set beresp.ttl = 10m;
set beresp.grace = 24h;
unset beresp.http.Set-Cookie;
}
# Do not cache 5xx errors
if (beresp.status >= 500) {
set beresp.uncacheable = true;
set beresp.ttl = 30s;
}
return (deliver);
}
sub vcl_deliver {
# Add hit/miss header for debugging
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
set resp.http.X-Cache-Hits = obj.hits;
} else {
set resp.http.X-Cache = "MISS";
}
# Remove Varnish internal headers
unset resp.http.X-Varnish;
unset resp.http.Via;
return (deliver);
}
This configuration ensures that anonymous visitors get fully cached responses while logged-in administrators, WooCommerce customers with active sessions, and POST requests always reach the backend. The grace period of 24 hours means that if the backend goes down temporarily, Varnish will continue serving slightly stale content rather than returning errors.
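Before deploying a VCL change, you can sanity-check the cookie-stripping regexes outside Varnish by porting them to Python. Varnish uses PCRE while Python uses its own re module, but these particular patterns behave identically in both; strip_cookies is an illustrative helper, not part of any Varnish API.

```python
import re

TRACKING = r"(^|;\s*)(_ga[^=]*|_gid|_gat|__utm[^=]*|_fbp|_fbc|mp_[^=]*)=[^;]*"
WP_ANON = (r"(^|;\s*)(wp-settings-[^=]*|wp-settings-time-[^=]*"
           r"|wordpress_test_cookie|comment_author[^=]*)=[^;]*")

def strip_cookies(header):
    """Apply the VCL regsuball rules; None means Varnish would unset Cookie."""
    for pattern in (TRACKING, WP_ANON):
        header = re.sub(pattern, "", header)
        header = re.sub(r"^;\s*", "", header)   # clean leading separators
    if re.fullmatch(r"\s*", header):
        return None
    return header
```

Note that authentication cookies like wordpress_logged_in_ survive stripping by design — in the real VCL those requests are passed before this logic ever runs.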
Generic VCL for Non-WordPress Sites
For static sites, custom applications, or other CMS platforms, you need a simpler VCL that does not include WordPress-specific cookie handling. Here is a generic VCL suitable for most dynamic web applications:
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "8080";
.connect_timeout = 5s;
.first_byte_timeout = 60s;
.between_bytes_timeout = 2s;
}
sub vcl_recv {
# Pass non-GET/HEAD requests
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Pass requests with Authorization header
if (req.http.Authorization) {
return (pass);
}
# Strip cookies for static assets
if (req.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|webp|svg|woff|woff2|ttf|eot|pdf|zip)(\?.*)?$") {
unset req.http.Cookie;
return (hash);
}
# Strip common tracking cookies
if (req.http.Cookie) {
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_ga[^=]*|_gid|_gat|__utm[^=]*)=[^;]*", "");
set req.http.Cookie = regsuball(req.http.Cookie, "^;\s*", "");
if (req.http.Cookie ~ "^\s*$") {
unset req.http.Cookie;
}
}
# Pass if session cookies remain
if (req.http.Cookie) {
return (pass);
}
return (hash);
}
sub vcl_backend_response {
# Cache static assets for 7 days
if (bereq.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|webp|svg|woff|woff2)(\?.*)?$") {
set beresp.ttl = 7d;
unset beresp.http.Set-Cookie;
}
# Default cache time for HTML
if (beresp.http.Content-Type ~ "text/html") {
set beresp.ttl = 5m;
set beresp.grace = 1h;
}
# Respect Cache-Control from backend
if (beresp.http.Cache-Control ~ "no-cache|no-store|private") {
set beresp.uncacheable = true;
set beresp.ttl = 120s;
}
return (deliver);
}
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
return (deliver);
}
This generic configuration respects the backend's Cache-Control headers for private or uncacheable content, strips only tracking cookies (leaving session cookies intact to ensure authenticated users always hit the backend), and applies reasonable TTLs for static and dynamic content.
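To reason about what the backend-response rules above will do for a given URL, here is a rough Python mirror of their net effect. The 120-second fallback reflects Varnish's built-in default_ttl; this models the outcome, not the actual sequential evaluation order of the VCL, and backend_ttl is an illustrative helper.

```python
import re

STATIC = re.compile(r"\.(css|js|jpe?g|png|gif|ico|webp|svg|woff2?)(\?.*)?$")

def backend_ttl(url: str, content_type: str, cache_control: str):
    """Return (ttl_seconds, cacheable): the net effect of the generic VCL."""
    if re.search(r"no-cache|no-store|private", cache_control):
        return 120, False           # beresp.uncacheable = true, ttl = 120s
    if STATIC.search(url):
        return 7 * 24 * 3600, True  # static assets: 7 days
    if "text/html" in content_type:
        return 300, True            # HTML: 5 minutes
    return 120, True                # Varnish's built-in default_ttl
```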
SSL Sandwich: Nginx :443 → Varnish :80 → Nginx :8080
Varnish does not handle SSL/TLS natively. For HTTPS sites (which should be all production sites), you need an SSL termination layer in front of Varnish. The standard approach uses Nginx as an SSL-terminating reverse proxy that forwards decrypted requests to Varnish.
Add a new Nginx server block for SSL termination. Create or edit /etc/nginx/sites-available/your-site-ssl:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name your-domain.com www.your-domain.com;
ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://127.0.0.1:80;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
server {
listen 80;
listen [::]:80;
server_name your-domain.com www.your-domain.com;
return 301 https://$host$request_uri;
}
Wait — there is a port conflict: the HTTP-to-HTTPS redirect server block above and Varnish both want port 80. The fix is to move Varnish off port 80 entirely — bind it to localhost on an alternate port such as 6081 — so the SSL proxy forwards to 127.0.0.1:6081 and port 80 stays free for the Nginx redirect block. Binding to 127.0.0.1 also prevents clients from bypassing SSL by connecting to Varnish directly.
Update the Varnish systemd override:
sudo systemctl edit varnish
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd \
-a 127.0.0.1:6081 \
-f /etc/varnish/default.vcl \
-s malloc,1G
Then update the Nginx SSL proxy to forward to port 6081:
proxy_pass http://127.0.0.1:6081;
The port-80 redirect server block shown earlier now binds without conflict, handling HTTP-to-HTTPS redirects. The final flow becomes:
Client :443 → Nginx (SSL termination) → Varnish :6081 → Nginx :8080 → PHP-FPM
Reload both services:
sudo systemctl daemon-reload
sudo systemctl restart varnish
sudo nginx -t && sudo systemctl reload nginx
Add the X-Forwarded-Proto header awareness to your VCL so that Varnish can differentiate HTTPS requests if needed:
sub vcl_recv {
# Normalize the X-Forwarded-Proto header
if (req.http.X-Forwarded-Proto == "https") {
set req.http.X-Scheme = "https";
}
# ... rest of vcl_recv
}
Cache Invalidation: Purge, Ban, and Tag-Based Strategies
Caching is only useful if you can invalidate stale content reliably. Varnish provides three primary invalidation mechanisms.
Purge: Remove a Single URL
Purging removes a specific cached object. Add a purge handler to your VCL:
acl purge {
"localhost";
"127.0.0.1";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (!client.ip ~ purge) {
return (synth(405, "Purge not allowed from this IP"));
}
return (purge);
}
# ... rest of vcl_recv
}
Then purge specific URLs from the command line:
curl -X PURGE http://localhost/page-to-purge/
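From application code, a purge is just an HTTP request with the PURGE method. Here is a minimal client sketch, assuming the purge ACL above is in place; the purge helper name and signature are illustrative, not a Varnish API.

```python
import http.client

def purge(host: str, path: str, port: int = 80) -> int:
    """Send an HTTP PURGE for `path` and return the response status code.

    Must run from an address allowed by the purge ACL (localhost here)."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("PURGE", path)
        return conn.getresponse().status
    finally:
        conn.close()
```

WordPress purge plugins do essentially this on every post update or comment.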
Ban: Pattern-Based Invalidation
Bans invalidate cached objects matching a pattern without removing them immediately: each ban is evaluated lazily the next time a matching object is requested. Note that bans referencing req.* variables can only be tested at request time and are never cleaned up by the background ban lurker, so prefer obj.* expressions where possible. Bans are useful for invalidating entire sections of a site:
sub vcl_recv {
if (req.method == "BAN") {
if (!client.ip ~ purge) {
return (synth(405, "Ban not allowed from this IP"));
}
ban("req.url ~ " + req.http.X-Ban-URL);
return (synth(200, "Banned"));
}
}
Invalidate all blog posts:
curl -X BAN -H "X-Ban-URL: ^/blog/" http://localhost/
Invalidate all CSS and JS files after a deployment:
curl -X BAN -H "X-Ban-URL: \.(css|js)$" http://localhost/
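Since ban expressions like these match regular expressions against req.url, you can preview which URLs a ban pattern would invalidate before issuing it. This is an illustrative helper using Python's re as an approximation of Varnish's PCRE matching:

```python
import re

def matches_ban(pattern, urls):
    """Return the URLs a `req.url ~ pattern` ban expression would invalidate."""
    rx = re.compile(pattern)
    return [u for u in urls if rx.search(u)]
```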
Tag-Based Invalidation
For sophisticated setups, you can tag cached responses with identifiers and then ban by tag. Your application adds a custom header to responses (for example, X-Cache-Tags: post-123,category-news,author-5), and Varnish stores these tags. When a post is updated, you ban all objects tagged with that post's ID:
# In vcl_recv
if (req.method == "BAN") {
if (!client.ip ~ purge) {
return (synth(405, "Not allowed"));
}
ban("obj.http.X-Cache-Tags ~ " + req.http.X-Cache-Tags);
return (synth(200, "Banned by tags"));
}
# In vcl_deliver — hide internal tags from clients
unset resp.http.X-Cache-Tags;
For WordPress, plugins like Varnish HTTP Purge or Proxy Cache Purge automate purging when content is published, updated, or commented on.
Monitoring Varnish: varnishstat and Hit Ratio
Monitoring is essential for verifying that Varnish is working effectively and for identifying configuration problems. The primary tool is varnishstat, which displays real-time statistics.
varnishstat
This launches a live dashboard showing requests per second, cache hit rate, backend connections, memory usage, and dozens of other metrics. The most critical metric is your hit ratio — the percentage of requests served from cache versus those forwarded to the backend.
To calculate your hit ratio from the command line:
varnishstat -1 | grep -E "MAIN.cache_hit|MAIN.cache_miss"
A healthy Varnish installation should maintain a hit ratio above 80% for content-heavy sites. Well-tuned WordPress configurations with aggressive static asset caching routinely achieve 95% or higher. If your hit ratio is below 70%, investigate your VCL — you may be passing too many requests due to cookies, or your TTLs may be too short.
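If you want the ratio as a single number for a monitoring script, the two counters can be parsed from varnishstat -1 output. The field layout assumed here is the standard one: counter name, value, rate, description.

```python
def hit_ratio(varnishstat_output: str) -> float:
    """Compute the cache hit ratio from `varnishstat -1` text output."""
    counters = {}
    for line in varnishstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] in ("MAIN.cache_hit", "MAIN.cache_miss"):
            counters[fields[0]] = int(fields[1])
    hits = counters.get("MAIN.cache_hit", 0)
    total = hits + counters.get("MAIN.cache_miss", 0)
    return hits / total if total else 0.0
```

A cron job feeding this into your monitoring system makes hit-ratio regressions visible immediately after a VCL change.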
Additional useful monitoring commands:
# View real-time request log
varnishlog
# View specific request details
varnishlog -q 'ReqURL eq "/"'
# View backend health status
varnishadm backend.list
# View current ban list
varnishadm ban.list
The varnishncsa tool generates Apache/NCSA combined log format output, which you can pipe to log analysis tools or write to a log file for integration with monitoring systems.
Memory Allocation: Sizing Your Varnish Cache
The -s malloc,SIZE parameter controls how much RAM Varnish allocates for cached objects. Proper sizing depends on your working set — the total size of unique pages and assets that receive regular traffic.
Start by estimating your cacheable content. A typical WordPress site with 1,000 pages averaging 50 KB each has a working set of about 50 MB. Add static assets (CSS, JS, images) that Varnish also caches, and you might need 200 to 500 MB. For sites with tens of thousands of pages or large image-heavy content, you may need several gigabytes.
General guidelines for memory allocation:
- Small sites (under 1,000 pages): 256 MB to 512 MB
- Medium sites (1,000 to 10,000 pages): 512 MB to 1 GB
- Large sites (10,000+ pages or image-heavy): 1 GB to 4 GB
- High-traffic media sites: 4 GB to 8 GB or more
Monitor cache evictions with varnishstat — the n_lru_nuked counter shows how many objects were evicted to make room for new ones. If this counter climbs steadily, your cache is too small and you are losing potential hits. Increase the allocation to reduce evictions.
Remember that Varnish's memory allocation is in addition to your system's other memory needs. On a 4 GB VPS running Nginx, PHP-FPM, and MySQL, allocating 1 GB to Varnish is a reasonable starting point. The operating system, Nginx, PHP-FPM workers, and MySQL buffer pool share the remaining 3 GB. If you find that 1 GB is insufficient and you need more cache space, you may need a server with more RAM.
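The sizing arithmetic above can be wrapped in a small estimator. The 1.5x headroom factor is an assumption of this sketch, not a Varnish recommendation — tune it against your n_lru_nuked counter.

```python
import math

def cache_size_mb(pages: int, avg_page_kb: float, assets_mb: float,
                  headroom: float = 1.5) -> int:
    """Estimate the -s malloc size in MB for a given working set.

    headroom=1.5 is an assumed safety factor, not a Varnish default."""
    working_set_mb = pages * avg_page_kb / 1024 + assets_mb
    return math.ceil(working_set_mb * headroom)
```

For the 1,000-page example above with roughly 300 MB of static assets, this suggests a bit over 500 MB of cache — consistent with the guideline ranges.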
MassiveGRID VPS plans allow you to scale RAM independently of other resources, so you can increase memory allocation as your caching needs grow — starting from a 4 GB plan to give Varnish 1 GB while maintaining comfortable headroom for the rest of your stack. For sites that scale past 500 concurrent users or need to allocate 2 GB or more to Varnish, consider a dedicated VPS (VDS) where RAM is physically reserved for your instance rather than shared. Dedicated memory allocation eliminates the possibility of noisy neighbor effects impacting your cache performance, ensuring a consistent 95%+ hit ratio even under sustained high traffic.
Performance Tuning Tips
Beyond the core configuration, several optimizations can squeeze additional performance from your Varnish deployment.
Thread pool tuning: Varnish uses a pool of worker threads to handle requests. The defaults are conservative. For high-traffic servers, increase the minimums:
-p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_timeout=300
Add these to your systemd override's ExecStart line.
Grace mode: Configure generous grace periods. When an object expires, Varnish can serve the stale version while fetching a fresh copy from the backend. This prevents the "thundering herd" problem where hundreds of simultaneous requests for an expired object all hit the backend at once:
sub vcl_backend_response {
set beresp.grace = 24h;
}
Query string sorting: Normalize query strings so that ?a=1&b=2 and ?b=2&a=1 are treated as the same cache key. Import the std vmod and use std.querysort():
import std;
sub vcl_recv {
set req.url = std.querysort(req.url);
}
Strip marketing query parameters: Remove UTM and tracking parameters that create unnecessary cache variations:
if (req.url ~ "(\?|&)(utm_|gclid|fbclid|mc_)") {
set req.url = regsuball(req.url, "(utm_[a-z_]*|gclid|fbclid|mc_[a-z_]*)=[^&]*&?", "");
set req.url = regsub(req.url, "[\?&]+$", "");
}
Troubleshooting Common Issues
All requests show MISS: Check if cookies are preventing caching. Use varnishlog -g request -q 'ReqURL eq "/"' to trace a specific request and see why Varnish decided not to cache it. The most common cause is cookies not being stripped in vcl_recv.
Backend fetch failures: Verify Nginx is actually listening on the backend port with curl -I http://127.0.0.1:8080. Check Varnish's backend health with varnishadm backend.list. If the backend is marked as sick, review the health probe configuration.
Varnish won't start: Check for VCL syntax errors with varnishd -C -f /etc/varnish/default.vcl. This compiles the VCL without starting Varnish, showing any syntax errors. Also check journalctl -u varnish for systemd-level errors.
Mixed content or redirect loops with SSL: Ensure your backend application knows it is behind an HTTPS proxy. The X-Forwarded-Proto header must be set by the SSL-terminating Nginx, and your application should use this header to generate HTTPS URLs. In WordPress, add to wp-config.php:
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
$_SERVER['HTTPS'] = 'on';
}
Verifying Everything Works
After completing the setup, run through this verification checklist:
# Check Varnish is running and listening
sudo systemctl status varnish
sudo ss -tlnp | grep varnish
# Check backend connectivity
varnishadm backend.list
# Test a page and check for cache headers
curl -sI https://your-domain.com/ | grep -i "x-cache"
# First request should show MISS, second should show HIT
curl -sI https://your-domain.com/ | grep -i "x-cache"
# Verify static assets are cached
curl -sI https://your-domain.com/wp-content/themes/your-theme/style.css | grep -i "x-cache"
# Check overall hit ratio
varnishstat -1 | grep "cache_hit\|cache_miss"
Once you see consistent HITs on repeat requests and your hit ratio climbs above 80%, your Varnish installation is working correctly. Monitor the hit ratio over the first few days and adjust your VCL if you notice request patterns that should be cached but are not.
Prefer Managed Caching?
VCL tuning is a specialized skill. Writing effective cache rules requires understanding HTTP semantics, cookie behavior, and application-specific caching requirements. A misconfigured VCL can serve stale content to logged-in users, break form submissions, or cache error pages — all of which are worse than not caching at all. For production sites where caching mistakes have real business consequences, professional management provides both expertise and accountability.
MassiveGRID's fully managed hosting includes Varnish configuration and tuning as part of the service. The operations team configures the complete caching stack — Varnish VCL rules tailored to your application, SSL sandwich architecture, cache invalidation hooks, and ongoing monitoring of hit ratios and performance metrics. You get the full performance benefit of Varnish without needing to maintain VCL expertise on your team. Combined with the managed stack's Proxmox HA clustering and 24/7 support rated 9.5 out of 10, it is the fastest path from slow page loads to sub-second response times at scale.