Nginx and Apache are the two web servers that power the vast majority of the internet. Both are free, open-source, battle-tested, and fully supported on Ubuntu 24.04. But they have fundamentally different architectures that lead to different performance characteristics — and on a VPS with limited resources, those differences matter. Choosing the right web server can mean the difference between a site that handles traffic surges gracefully and one that runs out of memory under load.

This guide compares Nginx and Apache across every dimension that matters for VPS hosting: memory usage, static file performance, dynamic content handling, configuration style, module ecosystems, and real-world use cases. By the end, you'll know exactly which one to use for your specific workload.

MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10

Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything

Architecture: The Fundamental Difference

The architectural difference between Nginx and Apache explains nearly every performance difference you'll observe. Understanding it is essential before comparing anything else.

Apache: Process/Thread-Based

Apache uses a multi-processing module (MPM) to handle connections. On Ubuntu 24.04, the default MPM is mpm_event (a threaded model), though the older mpm_prefork is still used when mod_php is needed. With prefork, each incoming connection gets its own process. With event, Apache uses threads within a pool of processes.

# Check which MPM Apache is using
apachectl -M | grep mpm

# Typical output:
# mpm_event_module (shared)

# Apache's worker configuration (/etc/apache2/mods-enabled/mpm_event.conf)
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadLimit             64
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>

The key limitation: MaxRequestWorkers defines the maximum number of simultaneous connections Apache can handle. Each worker consumes memory whether it's actively processing a request or waiting for a slow client.
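As a rough sizing sketch, you can estimate a safe MaxRequestWorkers from the RAM you can spare. The 1.2 GB headroom and 8 MB per-worker figures below are illustrative assumptions, not measurements — substitute values from your own server:

```shell
# Illustrative sizing: how many Apache workers fit in the RAM left over
# after MySQL and PHP-FPM? Both numbers below are assumptions to adjust.
avail_mb=1200    # RAM spared for Apache on a 2 GB VPS (assumed)
per_worker_mb=8  # average memory cost per worker (measure on your server)
awk -v a="$avail_mb" -v w="$per_worker_mb" \
    'BEGIN { printf "MaxRequestWorkers ~ %d\n", a / w }'

# On a live server, measure the real per-process average instead:
# ps -C apache2 -o rss= | awk '{s+=$1; n++} END {print s/n/1024 " MB avg"}'
```

With these example inputs the estimate lands on 150 — the same as the stock configuration above, which is why the default is a sensible starting point for a 2 GB VPS.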

Nginx: Event-Driven

Nginx uses an asynchronous, event-driven architecture. A small number of worker processes (typically one per CPU core) handle thousands of connections simultaneously using an event loop. Instead of dedicating a process or thread to each connection, Nginx processes events as they become ready.

# Nginx worker configuration (/etc/nginx/nginx.conf)
worker_processes auto;  # One worker per CPU core
worker_connections 1024; # Each worker handles up to 1024 connections

# Total max connections = worker_processes * worker_connections
# With 4 CPU cores: 4 * 1024 = 4,096 simultaneous connections

This architecture means Nginx's memory usage stays relatively constant regardless of the number of connections, while Apache's grows linearly with connections.

What This Means in Practice

Imagine 500 users simultaneously downloading a large file from your server. With Apache's prefork MPM, that's 500 separate processes, each consuming memory. With Nginx, those 500 connections are handled by 4-8 worker processes using event notifications — the OS tells Nginx when data is ready to send, and Nginx handles it without blocking. The result is dramatically different resource consumption under identical loads.

This architectural difference also affects how each server handles slow clients. When a user on a poor mobile connection takes 30 seconds to receive a response, Apache's prefork model keeps an entire process occupied for those 30 seconds. Nginx registers the connection, sends data as the client can receive it, and uses the same worker process to handle other requests in between. On a VPS where you might have hundreds of mobile users on varying connection speeds, this difference compounds significantly.

Memory Usage: Critical for VPS

On a MassiveGRID Cloud VPS with 2GB RAM, Nginx typically consumes 30-50MB for static content, while Apache may use 200-400MB. This difference is not trivial when you're running a database, application runtime, and web server on the same machine.

| Scenario | Nginx Memory | Apache Memory (event MPM) | Apache Memory (prefork MPM) |
|---|---|---|---|
| Idle (no connections) | 5-15 MB | 40-80 MB | 60-120 MB |
| 100 concurrent connections | 15-30 MB | 100-200 MB | 200-400 MB |
| 500 concurrent connections | 25-50 MB | 200-400 MB | 500+ MB |
| 1000 concurrent connections | 30-60 MB | 400-600 MB | 1+ GB |

These numbers reflect the web server itself. Both servers still need PHP-FPM (or similar) for dynamic content, which adds its own memory overhead regardless of the web server choice.

For a VPS with 1-2 GB RAM running WordPress or a Laravel app, Nginx's lower memory footprint leaves more room for MySQL and PHP-FPM — which is usually where you want the memory.
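You can check where the memory actually goes on your own VPS. This loop sums resident memory per service; the process names are Ubuntu 24.04 defaults and may differ on your system:

```shell
# Sum resident set size (RSS) per service; prints nothing for services
# that aren't running. Process names assume Ubuntu 24.04 defaults.
for svc in nginx apache2 mysqld php-fpm8.3; do
    ps -C "$svc" -o rss= 2>/dev/null |
        awk -v s="$svc" '{ t += $1 } END { if (NR) printf "%-12s %6.1f MB\n", s, t/1024 }'
done
```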

Static File Serving Performance

For serving static files (HTML, CSS, JavaScript, images), Nginx is significantly faster than Apache. Its event-driven architecture was designed from the ground up for this use case.

# Nginx static file serving configuration
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/public;

    location ~* \.(css|js|jpg|jpeg|png|gif|ico|woff2|svg)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;  # Don't log static file requests
    }
}
# Apache static file serving configuration
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com/public

    # Requires: sudo a2enmod expires headers
    <LocationMatch "\.(css|js|jpg|jpeg|png|gif|ico|woff2|svg)$">
        ExpiresActive On
        ExpiresDefault "access plus 30 days"
        Header set Cache-Control "public, immutable"
        SetEnv nolog
    </LocationMatch>

    # SetEnv alone doesn't suppress logging -- exclude flagged requests here:
    CustomLog ${APACHE_LOG_DIR}/access.log combined env=!nolog
</VirtualHost>

Under high concurrency, Nginx can serve 2-3x more static file requests per second than Apache with the same hardware, while using a fraction of the memory. This advantage is most pronounced when many clients are requesting files simultaneously.

Benchmarking Static File Performance

You can measure the difference yourself using ab (Apache Bench) or wrk. Here's a simple benchmark you can run against either server:

# Install benchmarking tools
sudo apt install apache2-utils -y

# Test with 100 concurrent connections, 10,000 total requests
ab -n 10000 -c 100 http://localhost/style.css

# Key metrics to compare:
# - Requests per second (higher is better)
# - Time per request (lower is better)
# - Transfer rate (higher is better)

# For more realistic testing, use wrk:
# wrk -t4 -c100 -d30s http://localhost/style.css

On a 2-core VPS serving a 50KB CSS file, typical results look like this:

| Metric | Nginx | Apache (event MPM) |
|---|---|---|
| Requests/second (100 concurrent) | 12,000-15,000 | 5,000-8,000 |
| Memory during test | 25-35 MB | 150-250 MB |
| CPU usage during test | 40-60% | 70-90% |

For applications where most content is dynamic (WordPress, Laravel), these static file numbers matter less. But for sites with heavy static asset delivery — portfolios, documentation sites, SPAs — Nginx's advantage is substantial.

Dynamic Content: PHP-FPM Integration

For dynamic content (PHP, Python, Node.js), both web servers delegate processing to an external application server. The web server's role is to receive the HTTP request, pass it to the application, and return the response. Both support PHP-FPM equally well.

Nginx with PHP-FPM

# /etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}

Apache with PHP-FPM

# /etc/apache2/sites-available/example.com.conf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com/public

    <Directory /var/www/example.com/public>
        AllowOverride All
        Require all granted
    </Directory>

    <FilesMatch \.php$>
        SetHandler "proxy:unix:/run/php/php8.3-fpm.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>

When serving dynamic content through PHP-FPM, the performance difference between Nginx and Apache narrows significantly. PHP-FPM does the heavy lifting, and the web server is mostly passing data back and forth. The remaining difference is in how efficiently each server handles the HTTP connection to the client — Nginx still uses less memory per connection.

If you're setting up a LEMP stack (Linux, Nginx, MySQL, PHP), follow our complete LEMP stack guide.

.htaccess vs Centralized Configuration

This is one of the most practical differences between the two servers for day-to-day operations.

Apache: .htaccess (Per-Directory Configuration)

Apache supports .htaccess files — configuration files placed in any directory that modify the behavior for that directory and its subdirectories. This is incredibly convenient for shared hosting environments and applications like WordPress that ship with their own .htaccess rules.

# .htaccess example (placed in the site's root directory)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]

# Deny access to sensitive files
<FilesMatch "\.(env|log|yml)$">
    Require all denied
</FilesMatch>

# Set PHP values
php_value upload_max_filesize 64M
php_value post_max_size 64M

The trade-off: Apache checks for .htaccess files in every directory in the path for every request. On a deep directory structure, this means multiple disk reads per request. You can mitigate this with AllowOverride None in directories that don't need .htaccess.
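If you control the server, a common mitigation is to move the .htaccess rules into the virtual host itself and disable per-directory lookups entirely. A sketch, using the example vhost from earlier:

```apache
# In the vhost: disable .htaccess scanning and centralize the rules
<Directory /var/www/example.com/public>
    AllowOverride None      # Apache skips all .htaccess lookups
    Require all granted

    # The rewrite rules that lived in .htaccess move here
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php/$1 [L]
</Directory>
```

This gets you Nginx-style centralized configuration while keeping Apache — at the cost of the per-directory convenience that makes .htaccess attractive in the first place.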

Nginx: Centralized Configuration Only

Nginx does not support .htaccess. All configuration lives in the main config files under /etc/nginx/. Changing any configuration requires editing the config file and reloading Nginx.

# Equivalent Nginx configuration (in the server block)
location / {
    try_files $uri $uri/ /index.php?$query_string;
}

location ~ \.(env|log|yml)$ {
    deny all;
    return 404;
}

# PHP values must be set in php.ini or pool config, not in Nginx

The advantage of Nginx's approach: no per-request filesystem lookups for configuration, which means slightly better performance. The disadvantage: every configuration change requires SSH access and a reload, making it less convenient for applications that expect .htaccess support.
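In practice the edit-and-reload cycle is short. A typical change, using the Ubuntu paths from this guide, looks like:

```shell
# Edit, validate, then reload gracefully -- a reload finishes in-flight
# requests under the old config, so no connections are dropped
sudo nano /etc/nginx/sites-available/example.com
sudo nginx -t                  # refuses to proceed on syntax errors
sudo systemctl reload nginx    # apply without downtime
```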

| Aspect | Apache (.htaccess) | Nginx (centralized) |
|---|---|---|
| Configuration location | Any directory | /etc/nginx/ only |
| Change takes effect | Immediately (no reload) | After nginx reload |
| Performance impact | Slight overhead per request | None |
| Application compatibility | WordPress, Drupal, Laravel ship with .htaccess | Requires manual translation |
| Security risk | Users can override server config | Only admins can change config |

Module Ecosystem

Both servers support modules, but the mechanism is different.

Apache supports loading modules dynamically at runtime. You can enable and disable modules without recompiling:

# List available modules
apache2ctl -M

# Enable a module
sudo a2enmod rewrite
sudo a2enmod headers
sudo a2enmod proxy_fcgi
sudo systemctl restart apache2

# Disable a module
sudo a2dismod autoindex
sudo systemctl restart apache2

Nginx modules are typically compiled into the binary. Ubuntu's Nginx package includes the most common modules, but adding a new module often requires installing a different Nginx package or compiling from source:

# List compiled-in modules
nginx -V 2>&1 | tr ' ' '\n' | grep module

# Install additional modules via Ubuntu packages
sudo apt install libnginx-mod-http-geoip2
sudo apt install libnginx-mod-http-headers-more-filter
sudo systemctl restart nginx

Apache's dynamic module system is more flexible for experimentation. Nginx's compiled-in approach is more performant since there's no module loading overhead.

Commonly Needed Modules

| Functionality | Apache Module | Nginx Equivalent |
|---|---|---|
| URL rewriting | mod_rewrite | Built-in (try_files, rewrite) |
| Gzip compression | mod_deflate | ngx_http_gzip_module (built-in) |
| SSL/TLS | mod_ssl | ngx_http_ssl_module (built-in) |
| HTTP/2 | mod_http2 | ngx_http_v2_module (built-in) |
| Reverse proxy | mod_proxy + mod_proxy_http | ngx_http_proxy_module (built-in) |
| Rate limiting | mod_ratelimit (bandwidth only) | ngx_http_limit_req_module (built-in) |
| GeoIP | mod_geoip | libnginx-mod-http-geoip2 (package) |
| WebDAV | mod_dav | ngx_http_dav_module (built-in, limited) |

For most VPS use cases, Nginx's built-in modules cover everything you need. The scenarios where Apache's module ecosystem matters tend to involve legacy applications or niche features like mod_security (web application firewall), which has both Apache and Nginx versions but has historically been more mature on Apache.

SSL/TLS Performance

Both servers handle SSL/TLS well, but Nginx has a slight edge in TLS handshake performance due to its event-driven architecture. Under high connection rates (many new HTTPS connections per second), Nginx completes TLS handshakes faster because it doesn't need to allocate a new process or thread for each handshake.

The practical difference is small for most VPS workloads. Both servers support HTTP/2 and modern TLS configurations:

# Nginx SSL configuration
server {
    listen 443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
}
# Apache SSL configuration
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    Protocols h2 http/1.1
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
    SSLHonorCipherOrder off
    SSLSessionCache "shmcb:${APACHE_RUN_DIR}/ssl_scache(512000)"
    SSLSessionTickets off
</VirtualHost>

Use Case: WordPress

Recommendation: Nginx

WordPress works with both servers, but Nginx is the better choice for VPS hosting. WordPress relies heavily on PHP-FPM for dynamic content (both servers handle this equally), but WordPress sites also serve many static assets (theme CSS/JS, uploaded images, plugin files). Nginx's superior static file handling and lower memory footprint make it the stronger choice.

The main consideration: WordPress and many WordPress plugins ship with .htaccess rules for URL rewriting, caching, and security. With Nginx, you need to translate these rules to Nginx configuration format. For WordPress's permalink structure, this is straightforward:

# Nginx WordPress configuration
server {
    listen 80;
    server_name example.com;
    root /var/www/wordpress;
    index index.php;

    # WordPress permalink support (replaces .htaccess rewrite rules)
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }

    # Deny access to sensitive files
    location ~ /\.(ht|git|env) {
        deny all;
    }

    # Requires this zone definition in the http block of nginx.conf:
    #   limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
    location = /wp-login.php {
        limit_req zone=login burst=3 nodelay;
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}

For hosting multiple WordPress sites on a single VPS, see our multi-site WordPress guide.

Use Case: Laravel / Django

Recommendation: Nginx

Modern frameworks like Laravel (PHP) and Django (Python) are designed to work behind a reverse proxy. They don't use .htaccess for routing — routing is handled by the framework itself. Nginx's strengths (low memory, fast static files, efficient proxying) align perfectly with this architecture.

# Nginx configuration for Laravel
server {
    listen 80;
    server_name app.example.com;
    root /var/www/laravel/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
# Nginx configuration for Django (with Gunicorn)
server {
    listen 80;
    server_name app.example.com;

    location /static/ {
        alias /var/www/django/staticfiles/;
        expires 30d;
    }

    location /media/ {
        alias /var/www/django/media/;
        expires 30d;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

For deploying Python applications, see our Gunicorn + Nginx deployment guide.

Use Case: Multiple Sites on One VPS

Recommendation: Nginx

When hosting multiple sites on a single VPS, Nginx's lower per-connection memory usage becomes a significant advantage. Each additional site adds minimal memory overhead with Nginx, while Apache's memory usage scales with the total number of concurrent connections across all sites.

Nginx also excels as a reverse proxy, which is the natural architecture when hosting multiple sites with different backends — one site running PHP, another running Node.js, a third running a Python application. Nginx routes requests to the correct backend based on the domain name:

# Multiple sites, different backends, one Nginx instance
# /etc/nginx/sites-available/wordpress.example.com
server {
    listen 80;
    server_name wordpress.example.com;
    root /var/www/wordpress;

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}

# /etc/nginx/sites-available/api.example.com
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# /etc/nginx/sites-available/dashboard.example.com
server {
    listen 80;
    server_name dashboard.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;  # Gunicorn/Django
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

All three sites share one Nginx process, using a combined 30-50 MB of memory for the web server layer. For details on Nginx reverse proxy configurations for multiple sites, see our Nginx reverse proxy guide.
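One step the server blocks above leave implicit: on Ubuntu, files in sites-available are inactive until symlinked into sites-enabled. Assuming the three filenames shown above:

```shell
# Activate each site, then validate and reload once
sudo ln -s /etc/nginx/sites-available/wordpress.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/api.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/dashboard.example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```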

The Hybrid Approach: Nginx in Front of Apache

If you have a strong reason to use Apache (legacy application that requires .htaccess, specific Apache modules with no Nginx equivalent), you can run both: Nginx as a reverse proxy in front of Apache. This gives you Nginx's efficient connection handling and static file serving with Apache's .htaccess support and module ecosystem.

# Nginx as reverse proxy to Apache
# /etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com;

    # Serve static files directly with Nginx
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|woff2|svg)$ {
        root /var/www/example.com/public;
        expires 30d;
        access_log off;
    }

    # Proxy everything else to Apache on port 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# Apache listens on 8080 (internal only)
# /etc/apache2/ports.conf
Listen 127.0.0.1:8080

# /etc/apache2/sites-available/example.com.conf
<VirtualHost 127.0.0.1:8080>
    ServerName example.com
    DocumentRoot /var/www/example.com/public

    <Directory /var/www/example.com/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

This hybrid approach adds complexity and uses more resources than either server alone. Use it only when you have a specific need for Apache features that can't be replicated in Nginx.
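One wrinkle worth noting with the hybrid setup: Apache now sees every request as coming from 127.0.0.1, which breaks IP-based logging and access rules. Since the Nginx proxy config above already sends X-Real-IP, Apache's mod_remoteip can restore the real client address. A sketch (enable the module first with sudo a2enmod remoteip):

```apache
# /etc/apache2/conf-available/remoteip.conf (then: sudo a2enconf remoteip)
# Trust the X-Real-IP header, but only when the request arrives
# from the local Nginx proxy
RemoteIPHeader X-Real-IP
RemoteIPInternalProxy 127.0.0.1
```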

Both Work Identically on MassiveGRID

Both Nginx and Apache install and run identically on a MassiveGRID Cloud VPS. There are no restrictions on which web server you use, and both can be installed in under a minute:

# Install Nginx
sudo apt update && sudo apt install nginx -y

# OR install Apache
sudo apt update && sudo apt install apache2 -y

At high concurrency levels (1000+ simultaneous connections), both servers benefit from dedicated CPU resources to handle TLS handshakes and request processing without contention.

Decision Framework

Use this table to make your final decision:

| Factor | Choose Nginx | Choose Apache |
|---|---|---|
| VPS RAM | 1-4 GB (memory matters) | 8+ GB (memory is plentiful) |
| Primary workload | Static files, reverse proxy | .htaccess-dependent apps |
| Application type | Node.js, Python, Go, Laravel, modern PHP | Legacy PHP apps, cPanel sites |
| Configuration preference | Centralized, version-controlled | Per-directory, distributed |
| Traffic pattern | High concurrency, many connections | Low-medium concurrency |
| Team experience | Modern web deployment | Traditional LAMP stack |
| Module needs | Standard modules suffice | Need specific Apache modules |

The short answer: If you're starting a new project on a VPS, choose Nginx. It uses less memory, serves static files faster, and works as an excellent reverse proxy for any backend. Choose Apache only if you have a specific requirement that Nginx can't meet — typically .htaccess support for a legacy application or a specific Apache module with no Nginx equivalent.

Logging Differences

Both servers produce access logs and error logs, but the configuration and default behavior differ. Understanding these differences matters for troubleshooting. For comprehensive log analysis, see our server logs troubleshooting guide.

Nginx Logging

# Default log locations
/var/log/nginx/access.log
/var/log/nginx/error.log

# Custom log format (add to http block in nginx.conf)
log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

# Use custom format per site
access_log /var/log/nginx/example.com.access.log detailed;

# Disable access logging for static files (reduces I/O)
location ~* \.(css|js|jpg|png|gif|ico)$ {
    access_log off;
}

Apache Logging

# Default log locations
/var/log/apache2/access.log
/var/log/apache2/error.log

# Custom log format
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D" detailed
CustomLog /var/log/apache2/example.com.access.log detailed

# %D gives request processing time in microseconds
# Nginx uses $request_time in seconds with millisecond precision

Nginx's $upstream_response_time variable is particularly useful — it shows how long the backend (PHP-FPM, Gunicorn) took to process the request, separate from Nginx's own processing time. Apache can achieve similar results with mod_log_config but it requires more configuration.
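With $upstream_response_time as the last field of the 'detailed' format, finding slow backend requests is a one-liner. The demo below runs on two synthetic log lines; note that in a real log $time_local spans two fields, so the URL lands in a different column than $6 — adjust the field number accordingly:

```shell
# Rank requests by upstream (backend) time -- the last field in the
# 'detailed' format above. Two synthetic lines stand in for a real log.
printf '%s\n' \
  '203.0.113.5 - - [t] "GET /fast HTTP/1.1" 200 512 "-" "-" 0.012 0.010' \
  '203.0.113.9 - - [t] "GET /slow HTTP/1.1" 200 512 "-" "-" 0.480 0.475' \
  | awk '{ print $NF, $6 }' | sort -rn | head -5
# Slowest request prints first: 0.475 /slow
```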

Installation and Switching

If you're currently running one server and want to switch to the other, the process is straightforward on Ubuntu. The most important step is ensuring both aren't trying to listen on the same port at the same time.

# Switch from Apache to Nginx
sudo systemctl stop apache2
sudo systemctl disable apache2
sudo apt install nginx -y

# Convert your Apache virtual hosts to Nginx server blocks
# Then test, enable at boot, and start:
sudo nginx -t
sudo systemctl enable --now nginx

# Switch from Nginx to Apache
sudo systemctl stop nginx
sudo systemctl disable nginx
sudo apt install apache2 -y

# Convert your Nginx server blocks to Apache virtual hosts
# Then test, enable at boot, and start:
sudo apachectl configtest
sudo systemctl enable --now apache2

Before switching: Document your current configuration. Export your virtual host / server block files, note any custom modules or settings, and test the new configuration on a staging server before making changes to production. If you need a quick test environment, deploy a minimal Cloud VPS to validate your converted configuration.

Whichever server you choose, the fundamentals of VPS setup, security hardening, and performance optimization remain the same. The web server is one piece of your stack — get the foundation right first.