TL;DR — Quick Summary
Configure Nginx as reverse proxy and load balancer with SSL. Covers proxy_pass, Let's Encrypt, balancing algorithms, WebSocket, caching, and rate limiting.
Nginx powers over 34% of the world’s busiest websites — and a large share of that traffic flows through it not as a web server but as a reverse proxy and load balancer. Configuring Nginx as a reverse proxy with SSL termination gives you a single, hardened entry point that distributes load across backend servers, offloads TLS from your application, and adds caching, rate limiting, and security headers in one place. This guide covers every layer of that stack: reverse proxy fundamentals, SSL termination with Let’s Encrypt, all four load balancing algorithms, WebSocket proxying, response caching, rate limiting, security headers, HTTP/2 and HTTP/3, monitoring, and a real-world Node.js cluster scenario.
Prerequisites
Before you begin, make sure you have:
- Ubuntu 22.04 or 24.04 (commands apply to any Debian-based distro)
- Nginx 1.18 or later (sudo apt install nginx)
- A registered domain pointing to your server’s public IP
- Terminal access with sudo privileges
- At least two backend application servers or processes (for load balancing)
Reverse Proxy Basics
A reverse proxy sits between the internet and one or more backend servers. Clients connect to Nginx; Nginx forwards the request to the appropriate backend and returns the response. The backend never needs a public IP address.
The core directive is proxy_pass:
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://127.0.0.1:3000;
# Forward the real client IP to the backend
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Tell the backend which host the client requested
proxy_set_header Host $host;
# Tell the backend whether the client used HTTP or HTTPS
proxy_set_header X-Forwarded-Proto $scheme;
# Preserve the real IP at the outermost proxy level
proxy_set_header X-Real-IP $remote_addr;
}
}
The four proxy_set_header directives are non-negotiable for any production setup. Without X-Forwarded-For, your application logs show only the Nginx server’s IP. Without X-Forwarded-Proto, your app cannot distinguish HTTP from HTTPS and may fall into redirect loops.
Set reasonable timeouts so slow backends do not hold Nginx workers:
proxy_connect_timeout 10s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 16k;
SSL Termination with Let’s Encrypt
SSL termination means Nginx handles the TLS handshake and forwards plain HTTP to backends on the internal network — no TLS overhead on individual application servers.
Install Certbot with the Nginx plugin:
sudo apt install certbot python3-certbot-nginx -y
Obtain and install a certificate (Certbot modifies your server block automatically):
sudo certbot --nginx -d app.example.com -d www.app.example.com
Certbot creates a systemd timer that renews certificates automatically before expiry. Verify it:
sudo systemctl status certbot.timer
sudo certbot renew --dry-run
After Certbot runs, your server block looks similar to this:
server {
listen 443 ssl;
server_name app.example.com;
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
}
}
server {
listen 80;
server_name app.example.com;
return 301 https://$host$request_uri;
}
The include /etc/letsencrypt/options-ssl-nginx.conf file sets strong cipher suites and disables SSLv3 and TLS 1.0/1.1. Do not remove it.
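If that include is ever missing (for example, on a host where Certbot did not install it), a minimal hand-managed equivalent looks like the sketch below. The exact cipher configuration Certbot ships may differ; prefer the Certbot-maintained file when it is present.

```nginx
# Sketch of hand-managed TLS settings (prefer the Certbot include when available)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
```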
Load Balancing Algorithms
To load balance across multiple backends, replace the single proxy_pass target with an upstream block:
upstream app_cluster {
server 10.0.0.10:3000;
server 10.0.0.11:3000;
server 10.0.0.12:3000;
}
server {
listen 443 ssl;
server_name app.example.com;
location / {
proxy_pass http://app_cluster;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Round-Robin (Default)
No directive needed. Nginx distributes requests sequentially across all upstream servers. Add weights to direct more traffic to faster or more capable nodes:
upstream app_cluster {
server 10.0.0.10:3000 weight=3;
server 10.0.0.11:3000 weight=1;
server 10.0.0.12:3000 weight=1;
}
Server .10 receives three out of every five requests.
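The share each server receives is simply its weight divided by the sum of all weights. A quick shell sketch of that arithmetic, using the weights from the block above (no live Nginx required):

```shell
# Expected traffic share per server under weighted round-robin
weights="10.0.0.10:3 10.0.0.11:1 10.0.0.12:1"
total=0
for entry in $weights; do
  total=$((total + ${entry#*:}))   # sum the weights
done
for entry in $weights; do
  echo "${entry%:*} receives ${entry#*:}/$total of requests"
done
```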
Least Connections (least_conn)
Nginx routes each new request to the upstream server with the fewest active connections. Best for workloads with variable response times (API servers, database-backed apps):
upstream app_cluster {
least_conn;
server 10.0.0.10:3000;
server 10.0.0.11:3000;
server 10.0.0.12:3000;
}
IP Hash (ip_hash)
The client IP determines which backend serves all requests from that client. Useful for session persistence without a shared session store. Note that it can create uneven distribution if many clients share a NAT address:
upstream app_cluster {
ip_hash;
server 10.0.0.10:3000;
server 10.0.0.11:3000;
server 10.0.0.12:3000;
}
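When client IPs are unreliable (NAT, CDN), one alternative is to hash on something the application controls instead. The sketch below assumes a hypothetical sessionid cookie set by your app; the consistent parameter enables ketama-style consistent hashing, so adding or removing a server remaps only a fraction of clients:

```nginx
upstream app_cluster {
    # Assumes the app sets a "sessionid" cookie; the key is empty otherwise
    hash $cookie_sessionid consistent;
    server 10.0.0.10:3000;
    server 10.0.0.11:3000;
    server 10.0.0.12:3000;
}
```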
Random
Nginx selects a backend at random. With the optional two parameter, it picks two servers at random and routes to the one with fewer connections — combining the randomness of random selection with the fairness of least-connections:
upstream app_cluster {
random two least_conn;
server 10.0.0.10:3000;
server 10.0.0.11:3000;
server 10.0.0.12:3000;
}
The random two variant is recommended over plain random for production use.
Upstream Health Checks
Mark servers as backup or set failure thresholds so Nginx stops sending traffic to unhealthy backends:
upstream app_cluster {
least_conn;
server 10.0.0.10:3000 max_fails=3 fail_timeout=30s;
server 10.0.0.11:3000 max_fails=3 fail_timeout=30s;
server 10.0.0.12:3000 backup;
}
- max_fails=3 — after 3 failed attempts within the fail_timeout window, mark the server as unavailable
- fail_timeout=30s — how long to consider the server unavailable, and the window in which max_fails failures must occur
- backup — only used when all primary servers are unavailable
Nginx Plus (the commercial version) supports active health checks via health_check;. With open-source Nginx, passive health checks (based on failed responses) are the only option without third-party modules.
WebSocket Proxying
WebSocket connections require an HTTP/1.1 upgrade handshake. Add three directives to any location block that proxies WebSocket traffic:
location /ws/ {
proxy_pass http://app_cluster;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Keep the connection alive during long idle periods
proxy_read_timeout 3600s;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
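A common refinement from the Nginx documentation is to compute the Connection header with a map in the http context, so non-WebSocket requests through the same location close cleanly instead of carrying a spurious upgrade:

```nginx
# In the http context; then use: proxy_set_header Connection $connection_upgrade;
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```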
If you use ip_hash in the upstream block, WebSocket connections from the same client always reach the same backend — important for stateful WebSocket servers that do not share state across instances.
Caching with proxy_cache
Nginx can cache upstream responses in memory or on disk, drastically reducing backend load for cacheable endpoints.
Define a cache zone in the http context (typically in /etc/nginx/nginx.conf):
http {
proxy_cache_path /var/cache/nginx
levels=1:2
keys_zone=app_cache:10m
max_size=1g
inactive=60m
use_temp_path=off;
}
Enable caching in the server or location block:
location /api/public/ {
proxy_pass http://app_cluster;
proxy_cache app_cache;
proxy_cache_valid 200 10m;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
add_header X-Cache-Status $upstream_cache_status;
}
The X-Cache-Status header lets you verify cache hits (HIT), misses (MISS), and bypass reasons in curl or browser dev tools. proxy_cache_use_stale serves a stale cached response when the backend is unavailable — a valuable resilience pattern.
Cache bypass for authenticated requests:
proxy_cache_bypass $http_authorization;
proxy_no_cache $http_authorization;
Rate Limiting
Rate limiting protects backends from traffic spikes and brute-force attacks. Define a shared memory zone in the http context:
http {
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=30r/m;
}
Apply the zone to a location:
location /api/ {
limit_req zone=api_limit burst=10 nodelay;
limit_req_status 429;
proxy_pass http://app_cluster;
}
- rate=30r/m — 30 requests per minute per IP (0.5 requests per second)
- burst=10 — allow up to 10 requests beyond the rate before rejecting
- nodelay — process burst requests immediately rather than delaying them
- limit_req_status 429 — return HTTP 429 (Too Many Requests) instead of the default 503
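As a back-of-envelope check of how burst behaves with nodelay: a sudden spike from a single IP gets at most 1 + burst requests through immediately, and the rest receive 429. A sketch of that accounting (pure arithmetic, not Nginx itself):

```shell
spike=50                       # simultaneous requests from one IP
burst=10                       # burst parameter from the limit_req directive above
allowed=$((1 + burst))         # first request plus the burst allowance
rejected=$((spike - allowed))  # everything else gets limit_req_status (429)
echo "allowed=$allowed rejected=$rejected"
```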
For stricter limits on login endpoints:
http {
limit_req_zone $binary_remote_addr zone=login_limit:5m rate=5r/m;
}
location /auth/login {
limit_req zone=login_limit burst=3 nodelay;
limit_req_status 429;
proxy_pass http://app_cluster;
}
Security Headers
Add security headers in the server block. They apply to all responses Nginx sends, including proxied ones:
server {
# Prevent clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;
# Enable browser XSS filter (legacy, but harmless)
add_header X-XSS-Protection "1; mode=block" always;
# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;
# Control referrer information
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Restrict browser features
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# HTTP Strict Transport Security (6-month max-age, include subdomains)
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
# Content Security Policy (adjust to your app's needs)
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline';" always;
# Hide Nginx version number
server_tokens off;
}
The always parameter ensures headers are added to error responses, not just 200 OK.
HTTP/2 and HTTP/3
HTTP/2
Enable HTTP/2 on the listen directive:
listen 443 ssl http2;
Browsers only speak HTTP/2 over TLS, so in practice it requires HTTPS. All modern browsers support it, and its multiplexing eliminates application-layer head-of-line blocking, reducing page load times significantly; TCP-level head-of-line blocking remains, which is the problem HTTP/3 addresses.
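Note that since Nginx 1.25.1 the http2 parameter on the listen directive is deprecated in favor of a standalone directive. On those versions the equivalent is:

```nginx
server {
    listen 443 ssl;
    http2 on;
}
```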
HTTP/3 (QUIC)
HTTP/3 is available in Nginx 1.25 and later (compiled with QUIC support). Check your version:
nginx -V 2>&1 | grep -o with-http_v3
If supported, enable it:
server {
listen 443 ssl http2;
listen 443 quic reuseport;
add_header Alt-Svc 'h3=":443"; ma=86400' always;
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
}
The Alt-Svc header advertises HTTP/3 availability to browsers, which will use QUIC on subsequent visits.
Monitoring with stub_status
Enable the built-in stub_status module to expose real-time connection metrics:
server {
listen 127.0.0.1:8080;
location /nginx_status {
stub_status;
allow 127.0.0.1;
deny all;
}
}
Access the status page:
curl http://127.0.0.1:8080/nginx_status
Output:
Active connections: 42
server accepts handled requests
18385 18385 47210
Reading: 3 Writing: 12 Waiting: 27
- Active connections — current open connections
- Reading / Writing / Waiting — connections in each phase (header read, response write, keep-alive wait)
- Accepts / Handled — total accepted vs. handled connections (they should match; a gap means connections were dropped, usually because a limit such as worker_connections was reached)
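The third line of stub_status output is easy to parse with awk. A sketch using the sample output above — in practice you would pipe curl http://127.0.0.1:8080/nginx_status into the same pipeline:

```shell
# Sample stub_status output (from the section above)
status='Active connections: 42
server accepts handled requests
 18385 18385 47210
Reading: 3 Writing: 12 Waiting: 27'

accepts=$(printf '%s\n' "$status" | awk 'NR==3 {print $1}')
handled=$(printf '%s\n' "$status" | awk 'NR==3 {print $2}')
dropped=$((accepts - handled))   # non-zero means accepted connections were dropped
echo "accepts=$accepts handled=$handled dropped=$dropped"
```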
For production monitoring, feed these metrics to Prometheus using nginx-prometheus-exporter or route them to Datadog, Grafana, or your preferred observability platform.
Nginx vs. Alternatives: Load Balancer Comparison
| Feature | Nginx | HAProxy | Traefik | Caddy |
|---|---|---|---|---|
| Protocol support | HTTP, HTTPS, TCP, UDP | HTTP, HTTPS, TCP | HTTP, HTTPS, TCP, gRPC | HTTP, HTTPS |
| Load balancing algorithms | RR, LC, IP hash, random | RR, LC, source, URI | RR, WRR | RR, LC |
| Active health checks | Nginx Plus only | Yes (built-in) | Yes (built-in) | Yes (built-in) |
| Auto SSL (ACME) | Via Certbot | Via Certbot | Built-in | Built-in |
| Dynamic config reload | Yes (graceful) | Yes (runtime API) | Yes (watch configs) | Yes |
| WebSocket support | Yes | Yes | Yes | Yes |
| HTTP/3 support | 1.25+ (experimental) | 2.6+ (experimental) | Yes (v3) | Yes |
| Built-in caching | Yes | No | No | No |
| Resource footprint | Low | Very low | Medium | Low |
| Best for | Web + proxy + cache | Pure TCP/HTTP LB | Kubernetes ingress | Simple auto-SSL |
Nginx is the best choice when you need caching, rate limiting, and SSL termination in a single binary. HAProxy wins for raw TCP performance and fine-grained health checks. Traefik excels in container and Kubernetes environments. Caddy is the easiest to configure when automatic HTTPS is the primary requirement.
Real-World Scenario: Load Balancing a Node.js Cluster Behind SSL
You have a production Node.js API running three PM2 processes on the same host (ports 3001, 3002, 3003). You want to expose them as api.example.com over HTTPS with session affinity, rate limiting on the authentication endpoint, and WebSocket support for real-time notifications.
Step 1: Create the upstream block
upstream node_api {
ip_hash;
server 127.0.0.1:3001;
server 127.0.0.1:3002;
server 127.0.0.1:3003;
}
Step 2: Configure the server block
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=auth_limit:5m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=60r/m;
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
# Security headers
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
server_tokens off;
# General API traffic
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
limit_req_status 429;
proxy_pass http://node_api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
}
# Authentication endpoint — stricter rate limit
location /api/auth/ {
limit_req zone=auth_limit burst=3 nodelay;
limit_req_status 429;
proxy_pass http://node_api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# WebSocket notifications
location /ws/ {
proxy_pass http://node_api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 3600s;
}
}
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
Step 3: Obtain the certificate and reload
sudo certbot --nginx -d api.example.com
sudo nginx -t && sudo systemctl reload nginx
Step 4: Verify load distribution
for i in {1..6}; do curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com/api/health; done
All requests should return 200. Check which backend process serves each request by logging $upstream_addr in your access log format.
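A hedged example of such a log format — the format name and log path are illustrative; define it in the http context and reference it from access_log:

```nginx
# http context: include the chosen backend and timing in each access log line
log_format upstream_log '$remote_addr -> $upstream_addr [$time_local] '
                        '"$request" $status upstream_time=$upstream_response_time';

# server context:
access_log /var/log/nginx/api_access.log upstream_log;
```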
Gotchas and Edge Cases
Trailing slash in proxy_pass matters. proxy_pass http://backend; preserves the full URI including /api/ prefix. proxy_pass http://backend/; strips the location prefix. Inconsistency here causes 404s that are difficult to diagnose.
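To make the difference concrete (a sketch; the backend name is illustrative, and the two variants are alternatives, not meant to coexist in one server block):

```nginx
# Variant A — URI preserved
location /api/ {
    proxy_pass http://backend;    # GET /api/users -> backend sees /api/users
}

# Variant B — location prefix stripped (note the trailing slash)
# location /api/ {
#     proxy_pass http://backend/; # GET /api/users -> backend sees /users
# }
```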
Do not use ip_hash with a CDN in front of Nginx. All CDN traffic appears to originate from a small set of CDN IP addresses, so ip_hash routes all your CDN traffic to one backend.
Long-polling and SSE require increased timeouts. Server-Sent Events (SSE) and long-polling connections are held open indefinitely. Set proxy_read_timeout to match your application’s heartbeat interval plus buffer.
proxy_buffering and streaming. If your backend streams a response (SSE, chunked transfer), set proxy_buffering off on that location. Otherwise Nginx buffers the full response before sending, defeating the purpose of streaming.
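For example, a streaming location might look like this (the path is illustrative):

```nginx
location /events/ {
    proxy_pass http://app_cluster;
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # allow keepalive to the upstream
    proxy_buffering off;              # flush chunks to the client as they arrive
    proxy_read_timeout 3600s;         # SSE connections stay open for a long time
}
```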
Certificate renewal and reload. Certbot’s systemd timer runs twice daily. After a successful renewal, the Nginx installer plugin reloads Nginx automatically; add your own --deploy-hook only if you need extra steps. Verify renewal works:
sudo certbot renew --dry-run
X-Forwarded-For spoofing. If Nginx is not the first hop (e.g., behind a CDN), clients can inject fake X-Forwarded-For values. Use real_ip_module to trust only your CDN’s IP range:
set_real_ip_from 103.21.244.0/22; # Cloudflare example
real_ip_header X-Forwarded-For;
Troubleshooting
502 Bad Gateway. The backend is not running or not listening on the configured port. Check: sudo systemctl status your-app, ss -tlnp | grep 3000.
504 Gateway Timeout. The backend is running but responding too slowly. Increase proxy_read_timeout or investigate backend performance.
upstream timed out error in error.log. Often a sign of backend overload. Add more upstream servers or reduce request rate. Check sudo tail -f /var/log/nginx/error.log.
SSL certificate not renewing. Ensure port 80 is open for the ACME HTTP-01 challenge. The Certbot renewal hook requires Nginx to be running. Run sudo certbot renew --dry-run for a detailed error.
WebSocket connections drop after 60 seconds. Nginx’s default proxy_read_timeout is 60 seconds. Increase it for WebSocket locations.
Cache not working. Ensure proxy_cache_path is defined in the http context, the directory exists and is writable by the nginx user, and proxy_cache is set in the correct location block. Check X-Cache-Status headers in responses.
Summary
Nginx as a reverse proxy and load balancer gives you a production-grade traffic layer without additional infrastructure. Here are the key takeaways:
- Always set X-Forwarded-For, X-Forwarded-Proto, and Host headers in every proxy location
- Use certbot --nginx for zero-downtime SSL certificate provisioning and automatic renewal
- Choose least_conn for API workloads, ip_hash for session persistence, and random two least_conn for large upstream pools
- Mark backends with max_fails and fail_timeout so Nginx removes unhealthy servers automatically
- Use proxy_http_version 1.1 with Upgrade/Connection headers for WebSocket proxying
- Enable proxy_cache for public endpoints and limit_req_zone to protect authentication routes
- Add all seven security headers with the always flag so they appear on error responses too
- Enable http2 on the listen directive to unlock multiplexing; evaluate quic on Nginx 1.25+
- Use stub_status for real-time diagnostics and feed metrics to your observability stack