Nginx Reverse Proxy with SSL: Complete Configuration Guide

A reverse proxy sits between your users and your backend application servers, handling incoming requests and forwarding them to the appropriate upstream service. Nginx is one of the most widely deployed reverse proxies in production environments, prized for its low memory footprint, high concurrency handling, and straightforward configuration syntax.

In a typical architecture, your application — whether it runs on Node.js, Python, Java, or any other runtime — listens on a local port that is not directly exposed to the internet. Nginx accepts all public HTTP and HTTPS traffic, terminates SSL/TLS, applies security policies, and passes the decrypted request to your backend. This separation provides several tangible benefits: centralized SSL certificate management, protection of backend services from direct exposure, the ability to load balance across multiple instances, and a single point for caching and rate limiting.

This guide walks through the complete process of configuring Nginx as a reverse proxy with SSL termination on Ubuntu, from initial installation through production-hardened deployment.

Prerequisites

Before starting, ensure you have the following in place:

  • Ubuntu Server 22.04 LTS or 24.04 LTS with root or sudo access.
  • A registered domain name (e.g., app.example.com) with an A record pointing to your server’s public IP address.
  • A backend application running on a local port (this guide uses port 3000 as an example, but any port works).
  • Ports 80 and 443 open in your firewall or cloud security group.
  • SSH access to your server.

Verify your backend is running before proceeding:

curl -s http://localhost:3000/health

If your backend responds, you are ready to configure Nginx in front of it.

Installing Nginx

Install Nginx from the default Ubuntu repositories:

sudo apt update
sudo apt install nginx -y

Once the installation completes, Nginx starts automatically. Verify the service is running:

sudo systemctl status nginx

You should see active (running) in the output. Confirm the version:

nginx -v

If you are using Ubuntu’s UFW firewall, allow HTTP and HTTPS traffic:

sudo ufw allow 'Nginx Full'
sudo ufw status

The Nginx Full profile opens both port 80 and port 443. At this point, visiting your server’s IP address in a browser should display the default Nginx welcome page.

Understanding the Nginx Directory Structure

Nginx on Ubuntu organizes its files in the following locations:

Path                           Purpose
/etc/nginx/nginx.conf          Main configuration file
/etc/nginx/sites-available/    Virtual host configuration files
/etc/nginx/sites-enabled/      Symlinks to active configurations
/etc/nginx/conf.d/             Additional configuration snippets
/var/log/nginx/                Access and error logs
/etc/nginx/snippets/           Reusable configuration fragments

Configuration files in sites-available are not active until symlinked into sites-enabled. This pattern lets you prepare configurations without immediately activating them.
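
The symlink mechanics can be tried risk-free in a throwaway directory. This sketch imitates the pattern in a temporary sandbox rather than touching /etc/nginx:

```shell
# Build a miniature sites-available / sites-enabled layout in a temp dir
tmp=$(mktemp -d)
mkdir -p "$tmp/sites-available" "$tmp/sites-enabled"
printf 'server { listen 80; }\n' > "$tmp/sites-available/app.example.com"

# "Enable" the site: create a symlink, leaving the original file in place
ln -s "$tmp/sites-available/app.example.com" "$tmp/sites-enabled/"
enabled=$(ls "$tmp/sites-enabled")
echo "enabled: $enabled"

# "Disable" it: remove only the symlink; the configuration file survives
rm "$tmp/sites-enabled/app.example.com"
[ -f "$tmp/sites-available/app.example.com" ] && original=intact
echo "original file: $original"
rm -rf "$tmp"
```

Disabling a real site is the same operation: remove the symlink from sites-enabled and reload Nginx.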

Basic Reverse Proxy Configuration

Remove the default site and create a new server block for your application:

sudo rm /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/sites-available/app.example.com

Start with a minimal reverse proxy configuration:

upstream backend_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable the site and test the configuration:

sudo ln -s /etc/nginx/sites-available/app.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

The nginx -t command validates your configuration syntax before applying changes. Always run it before reloading.

How proxy_pass Works

The proxy_pass directive forwards requests to the specified upstream. The behavior differs based on whether the proxy_pass URL includes a URI part, even one as minimal as a trailing slash:

# Forwards /api/users to backend as /api/users
location /api/ {
    proxy_pass http://backend_app;
}

# Forwards /api/users to backend as /users (strips /api prefix)
location /api/ {
    proxy_pass http://backend_app/;
}

The trailing slash on the proxy_pass URL causes Nginx to replace the matched location prefix with the upstream URI. Choose the behavior that matches your backend’s routing.
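
The prefix-replacement rule can be mimicked with plain shell string handling. This is only an illustration of the mapping, not how Nginx implements it:

```shell
# With a URI on proxy_pass ("/"), the matched location prefix is replaced
request_uri="/api/users"
location_prefix="/api/"
upstream_uri="/"                      # from: proxy_pass http://backend_app/;
rewritten="${upstream_uri}${request_uri#"$location_prefix"}"
echo "stripped: $rewritten"

# Without a URI on proxy_pass, the request URI passes through unchanged
passthrough="$request_uri"
echo "unchanged: $passthrough"
```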

Understanding Proxy Headers

Proxy headers are critical for your backend to correctly identify the original client. Without them, your application sees all requests coming from 127.0.0.1 (the Nginx process itself).

Essential Proxy Headers

# The original Host header from the client request
proxy_set_header Host $host;

# The real IP address of the connecting client
proxy_set_header X-Real-IP $remote_addr;

# A chain of all proxy IPs the request has traversed
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# Whether the original request used http or https
proxy_set_header X-Forwarded-Proto $scheme;

# The original port the client connected to
proxy_set_header X-Forwarded-Port $server_port;

Why Each Header Matters

Host: Your backend may host multiple applications or use virtual hosting. Without the original Host header, it cannot determine which site the client intended to reach.

X-Real-IP: Essential for access logging, rate limiting, and geolocation on the backend side. Without it, every request appears to originate from the Nginx server.

X-Forwarded-For: When requests pass through multiple proxies (CDN → load balancer → Nginx), this header preserves the full chain. The $proxy_add_x_forwarded_for variable appends the client IP to any existing value.

X-Forwarded-Proto: Your backend needs to know whether the original connection was HTTP or HTTPS. This affects redirect URLs, cookie security flags, and HSTS enforcement.
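
On the backend side, recovering the original client IP usually means taking the left-most entry of X-Forwarded-For. A sketch in shell, using a made-up header value; in production, trust only the entries appended by proxies you control, since the left-most value is client-supplied and spoofable:

```shell
# Hypothetical chain: original client, then a CDN hop, then Nginx
xff="203.0.113.7, 10.0.0.5, 127.0.0.1"

# Left-most entry = the original client (as claimed by whoever sent it)
client_ip="${xff%%,*}"
echo "client ip: $client_ip"
```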

Timeouts and Buffering

Configure timeouts to prevent hanging connections:

location / {
    proxy_pass http://backend_app;

    # Time to establish a connection with the backend
    proxy_connect_timeout 30s;

    # Time allowed between two successive write operations while sending the request to the backend
    proxy_send_timeout 60s;

    # Time allowed between two successive read operations while reading the backend's response
    proxy_read_timeout 60s;

    # Buffer settings for proxied responses
    proxy_buffering on;
    proxy_buffer_size 4k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;
}

Buffering is enabled by default. With buffering on, Nginx reads the backend's response into memory buffers (spilling to a temporary file if it does not fit) as fast as the backend can produce it, then serves the client at the client's own pace. This frees the backend connection quickly even when clients are slow. Disable buffering only for streaming or server-sent events where latency matters more than throughput.
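
For endpoints where buffering hurts, such as server-sent events, a dedicated location can switch it off. A sketch, assuming a hypothetical /events/ path served by the same backend:

```nginx
# Server-sent events / streaming: forward bytes to the client as they arrive
location /events/ {
    proxy_pass http://backend_app;
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    proxy_buffering off;          # do not accumulate the response in Nginx
    proxy_read_timeout 3600s;     # streams may sit idle between events
}
```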

SSL/TLS Configuration with Let’s Encrypt

SSL termination at the reverse proxy centralizes certificate management. Instead of configuring SSL for every backend independently, you handle it once at the Nginx layer.

Installing Certbot

Certbot automates certificate issuance and renewal through Let’s Encrypt:

sudo apt install certbot python3-certbot-nginx -y

Obtaining a Certificate

Run Certbot with the Nginx plugin. It automatically modifies your server block to enable HTTPS:

sudo certbot --nginx -d app.example.com

Certbot will:

  1. Verify domain ownership via an HTTP-01 challenge.
  2. Obtain a certificate and private key.
  3. Modify your Nginx configuration to listen on port 443.
  4. Add an HTTP-to-HTTPS redirect.
  5. Set up automatic renewal via a systemd timer.

Verify the renewal timer is active:

sudo systemctl status certbot.timer

Test renewal with a dry run:

sudo certbot renew --dry-run

Complete SSL Server Block

After Certbot runs, your configuration should look similar to this. Review it and apply the hardening options described below. Note that the standalone http2 on; directive requires Nginx 1.25.1 or later; on older packaged versions (such as the 1.18 shipped with Ubuntu 22.04), use listen 443 ssl http2; instead:

upstream backend_app {
    server 127.0.0.1:3000;
    keepalive 32;
}

# HTTP to HTTPS redirect
server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;

    # Allow Let's Encrypt ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name app.example.com;

    # SSL certificate paths (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # SSL hardening
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers off;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/app.example.com/chain.pem;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    # SSL session settings
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Proxy configuration
    location / {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";

        proxy_connect_timeout 30s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

Generating a Diffie-Hellman Group

For enhanced forward secrecy with TLS 1.2 connections, generate a custom DH parameter file:

sudo openssl dhparam -out /etc/nginx/dhparam.pem 2048

Add it to your SSL server block:

ssl_dhparam /etc/nginx/dhparam.pem;

This prevents attacks against weak default DH parameters used by some older clients.

WebSocket Proxying

WebSocket connections begin as standard HTTP requests and then upgrade to a persistent, full-duplex connection. Nginx must be explicitly configured to pass the upgrade headers.

WebSocket Configuration

# Map block to handle the Connection header for WebSocket upgrades
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    # ... SSL directives ...

    # WebSocket endpoint
    location /ws/ {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Increase timeouts for long-lived connections
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    # Standard HTTP requests
    location / {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The map directive dynamically sets the Connection header. When the client sends an Upgrade: websocket header, the connection is upgraded. For regular HTTP requests, the connection behaves normally.

The extended proxy_read_timeout and proxy_send_timeout values prevent Nginx from closing idle WebSocket connections prematurely. Adjust the value based on your application’s expected connection lifetime.

Testing WebSocket Connectivity

Use websocat or wscat to verify the WebSocket endpoint:

# Install wscat
sudo npm install -g wscat

# Connect to the WebSocket endpoint
wscat -c wss://app.example.com/ws/

Load Balancing Multiple Backends

When your application runs on multiple instances, Nginx can distribute traffic across them using the upstream block.

Round Robin (Default)

upstream backend_app {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    keepalive 64;
}

Requests rotate across servers in order. This works well when all instances have equivalent capacity.

Weighted Distribution

upstream backend_app {
    server 127.0.0.1:3001 weight=3;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=1;
    keepalive 64;
}

The server with weight=3 receives three times as many requests as the server with weight=1. Use this when instances have different hardware capabilities.
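
The resulting traffic split follows directly from the weights. A quick arithmetic check for the 3:2:1 configuration above:

```shell
# Share of requests per server under weights 3:2:1
total=$((3 + 2 + 1))
for w in 3 2 1; do
    echo "weight=$w receives $((100 * w / total))% of requests"
done
```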

Least Connections

upstream backend_app {
    least_conn;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    keepalive 64;
}

Nginx sends each new request to the server with the fewest active connections. This strategy is effective when request processing times vary significantly.

IP Hash (Session Persistence)

upstream backend_app {
    ip_hash;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

The client’s IP address determines which server handles their requests, providing basic session persistence. The keepalive directive can still be combined with ip_hash (it is omitted here for brevity), but ip_hash must appear before keepalive in the upstream block.
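
If client IPs are unstable (mobile networks, large NATs sharing one address), hashing on an application value can give better stickiness. A sketch using the generic hash directive, assuming a hypothetical sessionid cookie set by the application:

```nginx
upstream backend_app {
    # Consistent hashing on a session cookie instead of the client IP
    hash $cookie_sessionid consistent;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
```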

Health Checks and Failover

Configure passive health checks with max_fails and fail_timeout:

upstream backend_app {
    least_conn;

    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3003 max_fails=3 fail_timeout=30s backup;

    keepalive 64;
}

  • max_fails=3: After 3 failed attempts, mark the server as unavailable.
  • fail_timeout=30s: Wait 30 seconds before retrying a failed server.
  • backup: The third server only receives traffic when the primary servers are all down.

Caching

Nginx can cache responses from your backend, reducing load and improving response times for frequently requested content.

Configuring the Cache Zone

Define the cache zone in the http block of /etc/nginx/nginx.conf:

http {
    # Define a cache zone named 'app_cache'
    proxy_cache_path /var/cache/nginx/app_cache
        levels=1:2
        keys_zone=app_cache:10m
        max_size=1g
        inactive=60m
        use_temp_path=off;

    # ... other http directives ...
}

Parameter                  Purpose
levels=1:2                 Two-level directory hierarchy for cached files
keys_zone=app_cache:10m    10 MB shared memory zone for cache keys
max_size=1g                Maximum disk space for cached responses
inactive=60m               Remove items not accessed in 60 minutes
use_temp_path=off          Write directly to the cache directory (avoids an extra copy)

Using the Cache in a Location Block

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    # ... SSL directives ...

    # Cached static content
    location /api/public/ {
        proxy_pass http://backend_app;

        proxy_cache app_cache;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;

        add_header X-Cache-Status $upstream_cache_status;
    }

    # Non-cached dynamic content. No proxy_cache is set here, so responses are
    # not cached anyway; these directives are a safeguard in case a proxy_cache
    # is inherited from the server or http level.
    location / {
        proxy_pass http://backend_app;
        proxy_no_cache 1;
        proxy_cache_bypass 1;
    }
}

The X-Cache-Status header reveals whether each response was a HIT, MISS, EXPIRED, or BYPASS, which is invaluable for debugging cache behavior.

The proxy_cache_use_stale directive tells Nginx to serve cached (potentially stale) content when the backend is unreachable or returns an error. This significantly improves resilience.

Cache Key Customization

By default, Nginx uses the full request URL as the cache key. Customize it when you need to vary by additional parameters:

proxy_cache_key "$scheme$request_method$host$request_uri";

To include a cookie or header in the cache key:

proxy_cache_key "$scheme$request_method$host$request_uri$cookie_lang";

Purging the Cache

Clear the cache manually when needed:

sudo rm -rf /var/cache/nginx/app_cache/*
sudo systemctl reload nginx

Rate Limiting

Rate limiting protects your backend from abuse, brute-force attacks, and traffic spikes.

Defining Rate Limit Zones

Add rate limit zones to the http block in /etc/nginx/nginx.conf:

http {
    # General rate limit: 10 requests per second per IP
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

    # Strict rate limit for login endpoints: 5 requests per minute per IP
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

    # API rate limit: 30 requests per second per IP
    limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;

    # ... other http directives ...
}
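
The zone size bounds how many client states can be tracked at once; per the Nginx documentation, a one-megabyte zone holds roughly 16,000 64-byte states. A rough capacity estimate for the 10 MB zones above:

```shell
# Approximate number of client states a 10 MB limit_req zone can track
zone_mb=10
state_bytes=64
capacity=$(( zone_mb * 1024 * 1024 / state_bytes ))
echo "a ${zone_mb}m zone tracks roughly $capacity states"
```

When the zone fills, Nginx evicts the least recently used states, so size the zone generously for your expected number of concurrent clients.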

Applying Rate Limits

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    # ... SSL directives ...

    # Login endpoint with strict rate limiting
    location /auth/login {
        limit_req zone=login burst=3 nodelay;
        limit_req_status 429;

        proxy_pass http://backend_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # API endpoints
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        limit_req_status 429;

        proxy_pass http://backend_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Everything else
    location / {
        limit_req zone=general burst=20 nodelay;

        proxy_pass http://backend_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The burst parameter allows short spikes above the defined rate. With nodelay, burst requests are processed immediately rather than being queued and delayed. The limit_req_status 429 returns a proper “Too Many Requests” HTTP status code instead of the default 503.

Connection Limiting

In addition to request rate limiting, cap the number of concurrent connections per IP:

http {
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
}

server {
    # ... other directives ...

    limit_conn conn_per_ip 20;
    limit_conn_status 429;
}

Security Headers

Add security headers to every proxied response. These headers instruct browsers to enforce security policies that protect against common attacks. Be aware of an Nginx inheritance gotcha: a location block that declares any add_header of its own stops inheriting the server-level headers, so such a block must repeat any security headers it still needs.

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    # ... SSL directives ...

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self'; connect-src 'self'; frame-ancestors 'self';" always;

    # Hide Nginx version from response headers
    server_tokens off;

    # Limit request body size (adjust for your application)
    client_max_body_size 10m;

    location / {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Remove potentially sensitive headers from backend responses
        proxy_hide_header X-Powered-By;
        proxy_hide_header Server;
    }
}

Header Reference

Header                       Purpose
Strict-Transport-Security    Forces HTTPS for the specified duration (HSTS)
X-Content-Type-Options       Prevents MIME type sniffing
X-Frame-Options              Controls whether the page can be framed (clickjacking protection)
Referrer-Policy              Controls how much referrer information is sent
Permissions-Policy           Restricts browser feature access (camera, microphone, etc.)
Content-Security-Policy      Defines allowed content sources (XSS mitigation)

Adjust the Content-Security-Policy header to match your application’s actual requirements. An overly restrictive policy will break legitimate functionality. Test thoroughly in a staging environment before deploying to production.
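
One way to trial a policy without breaking the site is the report-only variant, which logs violations without enforcing anything. A sketch; the /csp-report endpoint is a hypothetical collector your application would have to implement:

```nginx
# Observe violations first, enforce later
add_header Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-report;" always;
```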

Monitoring and Logging

Custom Access Log Format

Define a detailed log format that captures proxy-related information:

http {
    log_format proxy_log '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" '
                         'upstream=$upstream_addr '
                         'upstream_status=$upstream_status '
                         'request_time=$request_time '
                         'upstream_response_time=$upstream_response_time '
                         'cache_status=$upstream_cache_status';
}

Apply it to your server block:

server {
    access_log /var/log/nginx/app.example.com.access.log proxy_log;
    error_log /var/log/nginx/app.example.com.error.log warn;

    # ... other directives ...
}

JSON Log Format

For integration with log aggregation tools (Elasticsearch, Loki, Datadog), use a JSON format:

http {
    log_format json_log escape=json '{'
        '"time": "$time_iso8601", '
        '"remote_addr": "$remote_addr", '
        '"request_method": "$request_method", '
        '"request_uri": "$request_uri", '
        '"status": $status, '
        '"body_bytes_sent": $body_bytes_sent, '
        '"http_referer": "$http_referer", '
        '"http_user_agent": "$http_user_agent", '
        '"upstream_addr": "$upstream_addr", '
        '"upstream_status": "$upstream_status", '
        '"request_time": $request_time, '
        '"upstream_response_time": "$upstream_response_time", '
        '"ssl_protocol": "$ssl_protocol", '
        '"ssl_cipher": "$ssl_cipher"'
    '}';
}

Nginx Stub Status Module

Enable the stub status module for basic monitoring metrics:

server {
    # ... other directives ...

    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }
}

Access it locally to see active connections, accepted connections, and request counts:

curl http://127.0.0.1/nginx_status

Output:

Active connections: 42
server accepts handled requests
 15234 15234 98765
Reading: 2 Writing: 5 Waiting: 35

Integrate this endpoint with monitoring tools like Prometheus (via the nginx-prometheus-exporter), Grafana, or Nagios.
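
The stub_status format is stable enough to parse with standard tools. A sketch that pulls named metrics out of the sample output shown above:

```shell
# Sample stub_status output (copied from above, as a fixed string)
status='Active connections: 42
server accepts handled requests
 15234 15234 98765
Reading: 2 Writing: 5 Waiting: 35'

# Third field of the "Active" line; third number on the counters line
active=$(printf '%s\n' "$status" | awk '/^Active/ {print $3}')
requests=$(printf '%s\n' "$status" | awk 'NR==3 {print $3}')
echo "active=$active requests=$requests"
```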

Log Rotation

Ubuntu includes logrotate by default. Nginx installs a logrotate configuration at /etc/logrotate.d/nginx. Verify it exists and covers your custom log files:

cat /etc/logrotate.d/nginx

If your log files are in non-standard locations, add them to the configuration or create a custom entry.
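
A custom entry might look like the following sketch, which rotates this guide's per-site logs daily and signals Nginx with USR1 to reopen its log files (paths assume the standard Ubuntu layout):

```
/var/log/nginx/app.example.com.*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # USR1 tells the Nginx master process to reopen its log files
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}
```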

Troubleshooting Common Issues

502 Bad Gateway

A 502 error means Nginx received an invalid response from the upstream server.

Common causes and solutions:

# Check if the backend is running
sudo systemctl status your-backend-app

# Verify the backend port is listening
sudo ss -tlnp | grep 3000

# Check Nginx error logs
sudo tail -50 /var/log/nginx/app.example.com.error.log

If the backend is running but 502 errors persist, check for mandatory access control restrictions. On SELinux-based distributions (RHEL, CentOS, Fedora), allow Nginx to make outbound network connections; on Ubuntu, inspect AppArmor with sudo aa-status instead:

# Allow Nginx network connections under SELinux (SELinux systems only)
sudo setsebool -P httpd_can_network_connect 1

504 Gateway Timeout

The backend did not respond within the configured timeout period.

# Increase timeouts for slow backends
proxy_connect_timeout 60s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;

Investigate why the backend is slow. Common causes include database queries, external API calls, or resource exhaustion.

Mixed Content Warnings

If your application generates URLs with http:// after enabling HTTPS, ensure the X-Forwarded-Proto header is set and your application reads it:

proxy_set_header X-Forwarded-Proto $scheme;

Your application framework must be configured to trust this header. For example, in Express.js:

app.set('trust proxy', 1);

Large Request Body Errors (413)

If file uploads or large POST requests fail with a 413 error:

# Increase to match your application's needs
client_max_body_size 50m;

Permission Denied Errors in Logs

Nginx worker processes run as the www-data user. Ensure cache directories and log files are writable:

sudo chown -R www-data:www-data /var/cache/nginx/
sudo chmod -R 755 /var/cache/nginx/

Configuration Debugging

Test your configuration and show detailed error messages:

# Test syntax
sudo nginx -t

# Show the full effective configuration (all includes resolved)
sudo nginx -T

# Test a specific configuration file
sudo nginx -t -c /etc/nginx/nginx.conf

Certificate Renewal Issues

If Certbot renewal fails, check the ACME challenge location is accessible:

# Test the challenge path
curl -v http://app.example.com/.well-known/acme-challenge/test

# Force renewal (use sparingly; forced renewals count against Let's Encrypt rate limits)
sudo certbot renew --force-renewal

# Check Certbot logs
sudo journalctl -u certbot

Ensure port 80 is open and the HTTP server block correctly serves the .well-known directory.

Complete Production Configuration

Here is a full production-ready configuration combining all the concepts covered in this guide:

# /etc/nginx/nginx.conf additions (http block)
# proxy_cache_path /var/cache/nginx/app_cache levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m use_temp_path=off;
# limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
# limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
# limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
# log_format proxy_log ...;  (full definition in the Monitoring and Logging section)

# /etc/nginx/sites-available/app.example.com

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream backend_app {
    least_conn;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3003 max_fails=3 fail_timeout=30s backup;
    keepalive 64;
}

# HTTP → HTTPS redirect
server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# Main HTTPS server
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name app.example.com;

    # SSL certificates
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/app.example.com/chain.pem;

    # TLS hardening
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers off;
    ssl_dhparam /etc/nginx/dhparam.pem;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    # SSL sessions
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    # General settings
    server_tokens off;
    client_max_body_size 10m;
    limit_conn conn_per_ip 20;

    # Logging
    access_log /var/log/nginx/app.example.com.access.log proxy_log;
    error_log /var/log/nginx/app.example.com.error.log warn;

    # Health check / monitoring
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }

    # Login rate limiting
    location /auth/login {
        limit_req zone=login burst=3 nodelay;
        limit_req_status 429;

        proxy_pass http://backend_app;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket endpoint
    location /ws/ {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    # Static asset caching
    location /static/ {
        proxy_pass http://backend_app;
        proxy_cache app_cache;
        proxy_cache_valid 200 302 30m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        add_header X-Cache-Status $upstream_cache_status;

        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Default proxy
    location / {
        limit_req zone=general burst=20 nodelay;

        proxy_pass http://backend_app;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";

        proxy_connect_timeout 30s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        proxy_hide_header X-Powered-By;
        proxy_hide_header Server;
    }
}

Verifying Your Configuration

After deploying the configuration, run through this verification checklist:

# 1. Test configuration syntax
sudo nginx -t

# 2. Reload Nginx
sudo systemctl reload nginx

# 3. Verify HTTPS works
curl -I https://app.example.com

# 4. Confirm HTTP redirects to HTTPS
curl -I http://app.example.com

# 5. Test SSL grade (use an external tool)
# Visit: https://www.ssllabs.com/ssltest/analyze.html?d=app.example.com

# 6. Check security headers
curl -s -D- https://app.example.com | grep -iE '(strict-transport|x-content-type|x-frame|referrer-policy|permissions-policy)'

# 7. Verify WebSocket connectivity (if applicable)
wscat -c wss://app.example.com/ws/

# 8. Monitor logs for errors
sudo tail -f /var/log/nginx/app.example.com.error.log

Summary

Setting up Nginx as a reverse proxy with SSL termination involves several layers of configuration, each addressing a specific operational concern:

  1. Reverse proxying forwards client requests to backend servers on local ports, keeping them off the public internet.
  2. Proxy headers preserve the original client information through the proxy layer.
  3. SSL/TLS termination centralizes certificate management and offloads encryption from your application.
  4. WebSocket support requires explicit header passing for connection upgrades.
  5. Load balancing distributes traffic across multiple backend instances with health checking.
  6. Caching reduces backend load and improves response times for repeat requests.
  7. Rate limiting protects against abuse and brute-force attacks.
  8. Security headers instruct browsers to enforce protection against XSS, clickjacking, and other client-side attacks.
  9. Structured logging provides visibility into proxy behavior and upstream health.

Each section in this guide can be adopted independently. Start with basic reverse proxying and SSL, then layer on caching, rate limiting, and load balancing as your application scales.