HAProxy Load Balancer: High Availability Configuration Guide
HAProxy (High Availability Proxy) is the industry-standard open-source load balancer and reverse proxy for TCP and HTTP-based applications. It is used by some of the highest-traffic websites in the world, handling millions of concurrent connections with minimal resource consumption. HAProxy operates at both Layer 4 (TCP) and Layer 7 (HTTP), giving you the flexibility to route traffic based on everything from simple round-robin to complex content-based rules.
This guide covers a production-grade HAProxy setup with SSL termination, health checks, sticky sessions, rate limiting, and high availability using Keepalived.
Prerequisites
- Two Ubuntu Server 22.04/24.04 machines for the HAProxy active/standby pair.
- Two or more backend web servers running your application (this guide assumes an HTTP service, such as Nginx, listening on port 8080).
- A valid SSL certificate and private key (from Let’s Encrypt or a commercial CA).
- Root or sudo access on all servers.
- A spare IP address on your network to use as the Virtual IP (VIP).
Verify that all servers can communicate with each other on the required ports:
# From HAProxy node, test backend connectivity
nc -zv 192.168.1.21 8080
nc -zv 192.168.1.22 8080
Installing HAProxy
Install HAProxy from Ubuntu’s default repository:
sudo apt update
sudo apt install haproxy -y
For the latest stable release (recommended for production), add the official HAProxy PPA:
sudo add-apt-repository ppa:vbernat/haproxy-2.9 -y
sudo apt update
sudo apt install haproxy -y
Verify the installation:
haproxy -v
Enable HAProxy to start on boot:
sudo systemctl enable haproxy
Understanding HAProxy Configuration Structure
The configuration file lives at /etc/haproxy/haproxy.cfg and is divided into four main sections:
| Section | Purpose |
|---|---|
| global | Process-wide settings: logging, max connections, user/group, SSL tuning |
| defaults | Default parameters inherited by all frontend and backend sections |
| frontend | Defines how incoming connections are received (bind address, port, SSL) |
| backend | Defines the pool of servers that handle requests and how they are balanced |
An optional listen section combines frontend and backend into a single block, which is useful for simpler configurations like statistics pages.
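As a sketch, a combined listen block for a simple service could look like this (the name and addresses reuse this guide's example network):

```
# Frontend and backend merged into one block; handy for simple services
listen web_simple
    bind *:8080
    balance roundrobin
    server web-01 192.168.1.21:8080 check
    server web-02 192.168.1.22:8080 check
```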
Base Configuration
Back up the default configuration and start fresh:
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
Create the production configuration:
sudo nano /etc/haproxy/haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# SSL tuning
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
tune.ssl.default-dh-param 2048
maxconn 50000
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor
timeout connect 5s
timeout client 30s
timeout server 30s
timeout http-request 10s
timeout http-keep-alive 10s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
Configuring the Frontend
HTTP to HTTPS Redirect
frontend http_front
bind *:80
# Redirect all HTTP traffic to HTTPS
http-request redirect scheme https code 301 unless { ssl_fc }
HTTPS Frontend with SSL Termination
Combine your certificate, intermediate CA, and private key into a single PEM file:
sudo cat /etc/ssl/certs/example.com.crt \
/etc/ssl/certs/ca-bundle.crt \
/etc/ssl/private/example.com.key \
| sudo tee /etc/ssl/private/example.com.pem > /dev/null
sudo chmod 600 /etc/ssl/private/example.com.pem
Configure the HTTPS frontend:
frontend https_front
bind *:443 ssl crt /etc/ssl/private/example.com.pem alpn h2,http/1.1
mode http
# Security headers
http-response set-header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
http-response set-header X-Frame-Options SAMEORIGIN
http-response set-header X-Content-Type-Options nosniff
# ACL-based routing
acl is_api path_beg /api/
acl is_static path_beg /static/ /images/ /css/ /js/
use_backend api_servers if is_api
use_backend static_servers if is_static
default_backend web_servers
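ACLs can match on more than the path. As an illustrative extension (the hostname is a placeholder, and it assumes an admin_servers backend is defined), host-header routing looks like:

```
acl is_admin hdr(host) -i admin.example.com
use_backend admin_servers if is_admin
```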
Configuring Backends
Web Application Backend
backend web_servers
balance roundrobin
option httpchk
http-check send meth GET uri /health ver HTTP/1.1 hdr Host localhost
http-check expect status 200
cookie SERVERID insert indirect nocache
server web-01 192.168.1.21:8080 check inter 5s fall 3 rise 2 cookie web01
server web-02 192.168.1.22:8080 check inter 5s fall 3 rise 2 cookie web02
server web-03 192.168.1.23:8080 check inter 5s fall 3 rise 2 cookie web03
server web-04 192.168.1.24:8080 check inter 5s fall 3 rise 2 cookie web04 backup
API Backend
backend api_servers
balance leastconn
option httpchk
http-check send meth GET uri /api/health ver HTTP/1.1 hdr Host localhost
http-check expect status 200
server api-01 192.168.1.31:3000 check inter 5s fall 3 rise 2
server api-02 192.168.1.32:3000 check inter 5s fall 3 rise 2
Static Content Backend
backend static_servers
balance roundrobin
option httpchk
http-check send meth GET uri /healthz ver HTTP/1.1 hdr Host localhost
server static-01 192.168.1.41:80 check inter 10s fall 3 rise 2
server static-02 192.168.1.42:80 check inter 10s fall 3 rise 2
Understanding Health Check Parameters
| Parameter | Meaning |
|---|---|
| check | Enable health checks for this server |
| inter 5s | Check every 5 seconds |
| fall 3 | Mark server as DOWN after 3 consecutive failures |
| rise 2 | Mark server as UP after 2 consecutive successes |
| backup | Only receive traffic when all primary servers are down |
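Note that with inter 5s and fall 3, a dead server can take up to ~15 seconds to be marked DOWN. If that is too slow, the interval can be tuned per state; one possible sketch:

```
# Optional tuning: probe faster while a server is transitioning state,
# and slower once it is confirmed DOWN
server web-01 192.168.1.21:8080 check inter 5s fastinter 1s downinter 10s fall 3 rise 2
```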
Load Balancing Algorithms
HAProxy supports several algorithms. Choose based on your workload:
# Distribute evenly across servers in order
balance roundrobin
# Send to the server with fewest active connections
balance leastconn
# Route based on a hash of the source IP (basic stickiness)
balance source
# Route based on URI hash (good for caching)
balance uri
For most web applications, roundrobin or leastconn works well. Use leastconn for backends with varying response times.
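If you use a hash-based algorithm such as balance uri for a cache tier, consistent hashing reduces how many requests get remapped when a server joins or leaves the pool. A sketch (the backend name and addresses are illustrative):

```
backend cache_servers
    balance uri
    hash-type consistent
    server cache-01 192.168.1.51:80 check
    server cache-02 192.168.1.52:80 check
```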
Sticky Sessions
When your application stores session state locally (not in a shared store like Redis), you need sticky sessions to ensure a user’s requests consistently reach the same backend server.
Cookie-Based Persistence
backend web_servers
balance roundrobin
cookie SERVERID insert indirect nocache
server web-01 192.168.1.21:8080 check cookie web01
server web-02 192.168.1.22:8080 check cookie web02
HAProxy inserts a SERVERID cookie into the response. Subsequent requests from the client include this cookie, and HAProxy routes them to the corresponding server.
Stick-Table Persistence
For persistence without modifying cookies:
backend web_servers
balance roundrobin
stick-table type ip size 200k expire 30m
stick on src
server web-01 192.168.1.21:8080 check
server web-02 192.168.1.22:8080 check
Rate Limiting
Protect your backends from abuse using stick-tables to track request rates:
frontend https_front
bind *:443 ssl crt /etc/ssl/private/example.com.pem
# Track request rate per source IP
stick-table type ip size 100k expire 30s store http_req_rate(10s)
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
default_backend web_servers
This denies requests with a 429 status code if a single IP exceeds 100 requests in 10 seconds.
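You will often want trusted sources, such as internal monitoring, exempted from rate limiting. One way to sketch this (the CIDR is an assumption about your network):

```
# Skip tracking and denial for trusted source addresses
acl trusted_src src 192.168.1.0/24 127.0.0.1
http-request track-sc0 src if !trusted_src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 } !trusted_src
```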
HAProxy Statistics Dashboard
Enable the built-in stats page:
listen stats
bind *:8404
stats enable
stats uri /stats
stats refresh 10s
stats admin if LOCALHOST
stats auth admin:YourSecurePassword
Access the dashboard at http://your-server:8404/stats. The dashboard shows real-time metrics for every frontend, backend, and server including connection counts, response times, health status, and error rates.
Security note: Restrict access to the stats page by binding to a private interface or using ACLs. Never expose it to the public internet without authentication.
Validating and Applying Configuration
Always validate before reloading:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
If validation passes, reload the service with zero downtime:
sudo systemctl reload haproxy
HAProxy performs a graceful reload — it starts new worker processes for fresh connections while existing connections drain on the old workers.
High Availability with Keepalived
A single HAProxy instance is a single point of failure. Keepalived provides automatic failover between two HAProxy nodes using a shared Virtual IP (VIP).
Install Keepalived on Both Nodes
sudo apt install keepalived -y
Create the HAProxy Health Check Script
sudo tee /etc/keepalived/check_haproxy.sh > /dev/null << 'EOF'
#!/bin/bash
if ! pidof haproxy > /dev/null; then
exit 1
fi
EOF
sudo chmod 755 /etc/keepalived/check_haproxy.sh
Configure the Active (MASTER) Node
sudo nano /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass YourVRRPPass
}
virtual_ipaddress {
192.168.1.100/24
}
track_script {
chk_haproxy
}
}
Configure the Standby (BACKUP) Node
The configuration is identical except:
state BACKUP
priority 90
Enable and Start Keepalived
On both nodes:
sudo systemctl enable keepalived
sudo systemctl start keepalived
Verify the VIP
On the active node:
ip addr show eth0 | grep 192.168.1.100
You should see the VIP attached to the interface. If you stop HAProxy on the master:
sudo systemctl stop haproxy
The VIP will move to the backup node within seconds. Start HAProxy again, and the VIP returns to the master (because of its higher priority).
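If you would rather avoid that second VIP move when the original master recovers, keepalived supports nopreempt. It only takes effect when both instances are configured with state BACKUP (priorities still decide the first election); a sketch of the change:

```
vrrp_instance VI_1 {
    state BACKUP        # nopreempt requires state BACKUP on both nodes
    nopreempt
    priority 100        # higher-priority node still wins the initial election
    # remaining settings as shown above
}
```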
Logging and Monitoring
Configure Rsyslog for HAProxy
HAProxy logs to syslog. Create a dedicated log file:
sudo tee /etc/rsyslog.d/49-haproxy.conf > /dev/null << 'EOF'
$AddUnixListenSocket /var/lib/haproxy/dev/log
local0.* /var/log/haproxy/haproxy.log
local1.notice /var/log/haproxy/haproxy-admin.log
EOF
sudo mkdir -p /var/log/haproxy
sudo systemctl restart rsyslog
sudo systemctl restart haproxy
Log Rotation
sudo tee /etc/logrotate.d/haproxy > /dev/null << 'EOF'
/var/log/haproxy/*.log {
daily
rotate 14
missingok
notifempty
compress
delaycompress
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
EOF
Monitoring with the Stats Socket
Query live server status from the command line:
echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18 | column -t -s,
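Because show stat emits plain CSV (field 18 carries the status), it is easy to post-process. A sketch against fabricated sample rows, standing in for the socat pipeline above, that lists only DOWN servers:

```shell
# Sample "show stat" rows (fabricated); field 18 is the status column
sample_csv='web_servers,web-01,0,0,1,2,100,3,4,5,,6,7,8,9,10,11,UP
web_servers,web-02,0,0,1,2,100,3,4,5,,6,7,8,9,10,11,DOWN
api_servers,api-01,0,0,1,2,100,3,4,5,,6,7,8,9,10,11,UP'

# Print backend/server for every row whose status is DOWN
printf '%s\n' "$sample_csv" | awk -F, '$18 == "DOWN" { print $1 "/" $2 }'
# prints: web_servers/web-02
```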
Drain a server for maintenance without dropping connections:
echo "set server web_servers/web-01 state drain" | sudo socat stdio /run/haproxy/admin.sock
Troubleshooting
Common Issues
502 Bad Gateway: The backend is unreachable. Verify the backend server is running and the port is correct:
curl -v http://192.168.1.21:8080/health
503 Service Unavailable:
All backend servers are down. Check server status in the stats page or query the admin socket:
echo "show servers state" | sudo socat stdio /run/haproxy/admin.sock
Connection timeouts:
Increase timeout values in the defaults section, or check network-level issues between HAProxy and backends.
SSL handshake failures: Verify the PEM file contains the certificate chain in the correct order (server cert → intermediate → key):
openssl x509 -in /etc/ssl/private/example.com.pem -noout -subject -dates
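Another frequent cause is a key that does not match the certificate. A sketch of the check, using a throwaway self-signed pair in place of your real files under /etc/ssl/private:

```shell
# Confirm a certificate and key belong together by comparing public keys.
# The throwaway pair below stands in for your production PEM files.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example.test" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

cert_pub=$(openssl x509 -in "$tmp/cert.pem" -noout -pubkey)
key_pub=$(openssl pkey -in "$tmp/key.pem" -pubout 2>/dev/null)

# Matching public keys mean the pair belongs together
[ "$cert_pub" = "$key_pub" ] && echo "certificate and key match"
rm -rf "$tmp"
```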
Testing Failover
From a client, run continuous requests against the VIP while stopping/starting HAProxy and backends:
while true; do
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" https://192.168.1.100/health
sleep 1
done
You should see seamless failover with no HTTP errors when the standby takes over.
Summary
HAProxy provides a battle-tested foundation for distributing traffic across your application servers. The key takeaways from this guide:
- Use Layer 7 mode for HTTP applications to take advantage of content-based routing, header manipulation, and cookie persistence.
- Always configure active health checks — never rely on passive detection alone.
- Terminate SSL at HAProxy to centralize certificate management and offload encryption from backends.
- Use stick-tables for both session persistence and rate limiting.
- Deploy HAProxy in pairs with Keepalived and a floating VIP to eliminate the load balancer as a single point of failure.
- Monitor with the built-in stats dashboard and the admin socket for real-time visibility.
- Validate every configuration change with haproxy -c before reloading.
A well-configured HAProxy setup can handle hundreds of thousands of concurrent connections while providing sub-millisecond routing decisions and automatic failover for both the proxy layer and the backend servers.