TL;DR — Quick Summary
Complete HAProxy guide for TCP and HTTP load balancing with high availability. Covers ACLs, health checks, SSL termination, stick tables, and keepalived VRRP.
HAProxy is the reference implementation for open-source load balancing. Where Nginx doubles as a web server and Caddy prioritizes ease of use, HAProxy is purpose-built for proxying and load balancing — and it shows in every benchmark. This guide covers the complete HAProxy stack: architecture, installation on Ubuntu and RHEL, global and defaults tuning, frontend/backend configuration with ACLs, all balancing algorithms, TCP and HTTP health checks, SSL/TLS termination, the stats page, stick tables, rate limiting, rsyslog integration, and a full keepalived VRRP setup for two-node high availability.
Prerequisites
Before you begin, make sure you have:
- Ubuntu 22.04/24.04 or RHEL 9/AlmaLinux 9 with sudo access
- At least two backend application servers (for meaningful load balancing)
- Two HAProxy nodes (for the high-availability section with keepalived)
- A basic understanding of TCP/IP and HTTP
- HAProxy 2.6 or later (LTS release recommended)
HAProxy Architecture
HAProxy runs as a single event-driven process (or a configurable number of processes/threads). All I/O is non-blocking and multiplexed, allowing HAProxy to handle tens of thousands of concurrent connections with minimal RAM.
A configuration file has five main section types:
- global — process-wide settings (maxconn, log, chroot, user/group, nbthread)
- defaults — default settings inherited by all frontends and backends unless overridden
- frontend — listens for incoming connections; applies ACLs and routes traffic to backends
- backend — defines the server pool, balancing algorithm, and health check parameters
- listen — a combined frontend+backend shortcut, useful for the stats page or simple TCP proxies
Traffic flows: client → frontend → ACL evaluation → use_backend rule → backend → server.
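This flow maps directly onto the section types; a minimal sketch wiring them together (placeholder names and RFC 5737 addresses) looks like:

```
global
    maxconn 10000

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend web_in
    bind *:80
    acl is_api path_beg /api/
    use_backend api_pool if is_api
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 192.0.2.10:8080 check

backend api_pool
    balance leastconn
    server api1 192.0.2.20:8080 check
```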
Installation
Ubuntu / Debian
sudo apt update
sudo apt install haproxy -y
haproxy -v
For a newer LTS release, add the official PPA and refresh the package index:
sudo add-apt-repository ppa:vbernat/haproxy-2.8 -y
sudo apt update
sudo apt install haproxy=2.8.\* -y
RHEL / AlmaLinux / Rocky Linux
sudo dnf install haproxy -y
haproxy -v
sudo systemctl enable --now haproxy
Global and Defaults Configuration
Open /etc/haproxy/haproxy.cfg and replace the boilerplate with a production-ready base:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
maxconn 50000
nbthread 4
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor
option http-server-close
retries 3
timeout connect 5s
timeout client 30s
timeout server 30s
timeout http-keep-alive 10s
timeout check 5s
maxconn 3000
Key parameters explained:
- maxconn 50000 — global connection cap; budget tens of kilobytes of RAM per established connection (two 16 KB buffers by default, more with TLS)
- nbthread 4 — run four threads on a quad-core system; HAProxy 2.x is multi-threaded
- option dontlognull — suppress logs for health check probes with no payload
- option forwardfor — inject an X-Forwarded-For header into proxied requests
- option http-server-close — close server-side connections after each request while keeping the client side alive (best for HTTP/1.1 performance)
- retries 3 — retry failed connection attempts before marking a server down
- timeout connect 5s — maximum time to establish a TCP connection to a backend
- timeout client 30s / timeout server 30s — idle timeout on the client and server sides
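Any of these defaults can be overridden per proxy section; for example, a hypothetical backend serving long-running transfers might raise only its own server-side timeouts:

```
backend uploads_backend
    # Override the 30s value from the defaults section for this pool only
    timeout server 300s
    timeout connect 10s
    server up1 10.0.0.40:8080 check
```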
Frontend Configuration
A frontend defines what HAProxy listens on and where it routes traffic:
frontend http_front
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
http-request redirect scheme https unless { ssl_fc }
http-request set-header X-Forwarded-Proto https if { ssl_fc }
# ACL definitions
acl is_api path_beg /api/
acl is_static path_beg /static/ /assets/ /images/
acl host_admin hdr(host) -i admin.example.com
# Routing rules
use_backend api_backend if is_api
use_backend static_backend if is_static
use_backend admin_backend if host_admin
default_backend app_backend
ACL matchers commonly used:
| Matcher | Example | Matches |
|---|---|---|
| path_beg | path_beg /api/ | URI starts with /api/ |
| path_end | path_end .php | URI ends with .php |
| hdr(host) | hdr(host) -i api.example.com | Host header (case-insensitive) |
| src | src 10.0.0.0/8 | Client IP range |
| method | method POST | HTTP method |
| status | n/a (response ACL) | Response status code |
Multiple ACLs can be combined with if, unless, or, and !:
use_backend premium_backend if is_api { hdr(X-Plan) -i premium }
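A few combination patterns, using illustrative ACL names:

```
acl is_api   path_beg /api/
acl internal src 10.0.0.0/8
acl is_post  method POST

# AND: listing several ACLs after "if" requires all of them to match
use_backend internal_api if is_api internal

# OR: explicit keyword between conditions
use_backend api_backend if is_api or is_post

# Negation with !
http-request deny if is_api !internal
```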
Backend Configuration
A backend defines the server pool and how traffic is distributed:
backend app_backend
balance roundrobin
option httpchk GET /health HTTP/1.1\r\nHost:\ app.example.com
server app1 10.0.0.10:8080 weight 3 maxconn 500 check inter 2s fall 3 rise 2
server app2 10.0.0.11:8080 weight 3 maxconn 500 check inter 2s fall 3 rise 2
server app3 10.0.0.12:8080 weight 1 maxconn 200 check inter 2s fall 3 rise 2
server app_backup 10.0.0.20:8080 backup check inter 5s fall 2 rise 1
Balance Algorithms
| Algorithm | Directive | Best For |
|---|---|---|
| Round-robin | balance roundrobin | Uniform short requests |
| Weighted RR | balance roundrobin + weight N | Mixed-capacity servers |
| Least connections | balance leastconn | Variable response times, long sessions |
| Source IP hash | balance source | Session affinity by client IP |
| URI hash | balance uri | Cache servers, same URI → same backend |
| Header hash | balance hdr(User-Agent) | Route by any HTTP header |
Server Options
- weight N — relative traffic share (default 1); a server with weight 3 gets 3× the requests
- maxconn N — per-server concurrent connection limit; excess connections queue in HAProxy
- check — enable active health checking
- inter 2s — health check interval
- fall 3 — consecutive failures before marking a server DOWN
- rise 2 — consecutive successes before marking a server UP again
- backup — only receives traffic when all non-backup servers are unavailable
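Two more server options worth knowing, both standard HAProxy keywords: slowstart ramps a recovering server's effective weight back up gradually, and on-marked-down shutdown-sessions closes existing sessions the moment a server is marked DOWN:

```
server app1 10.0.0.10:8080 weight 3 maxconn 500 check inter 2s fall 3 rise 2 slowstart 30s on-marked-down shutdown-sessions
```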
Health Checks
TCP Health Check (default)
HAProxy opens a TCP connection to verify the port is listening. No application-level verification:
backend tcp_backend
balance leastconn
server db1 10.0.0.30:5432 check inter 3s fall 2 rise 1
server db2 10.0.0.31:5432 check inter 3s fall 2 rise 1
HTTP Health Check
For HTTP backends, verify the application returns a healthy status code:
backend app_backend
option httpchk GET /health HTTP/1.1\r\nHost:\ app.example.com
http-check expect status 200
server app1 10.0.0.10:8080 check inter 2s fall 3 rise 2
server app2 10.0.0.11:8080 check inter 2s fall 3 rise 2
http-check expect status 200 marks a server DOWN if the health endpoint returns anything other than HTTP 200. You can also match on a response body string:
http-check expect string "ok"
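Since HAProxy 2.2, the header-in-option-httpchk form shown above is deprecated in favour of the http-check send directive, which expresses the same probe more cleanly:

```
backend app_backend
    option httpchk
    http-check send meth GET uri /health ver HTTP/1.1 hdr Host app.example.com
    http-check expect status 200
    server app1 10.0.0.10:8080 check inter 2s fall 3 rise 2
```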
SSL/TLS Termination
HAProxy terminates SSL on the frontend and forwards plain TCP or HTTP to backends. Combine certificates and private keys into a single PEM:
cat fullchain.pem privkey.pem > /etc/haproxy/certs/example.com.pem
chmod 600 /etc/haproxy/certs/example.com.pem
Multi-certificate SNI (one directory, HAProxy picks the right cert automatically):
frontend https_front
bind *:443 ssl crt /etc/haproxy/certs/
http-request set-header X-Forwarded-Proto https
http-response set-header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
default_backend app_backend
Place example.com.pem, api.example.com.pem, etc. in /etc/haproxy/certs/ and HAProxy selects the correct certificate via SNI for each connection.
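The bind line accepts further TLS options; for example, to enforce a TLS 1.2 floor and advertise HTTP/2 via ALPN on this listener only:

```
frontend https_front
    bind *:443 ssl crt /etc/haproxy/certs/ ssl-min-ver TLSv1.2 alpn h2,http/1.1
    default_backend app_backend
```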
HTTP Mode Features
Header manipulation in HAProxy:
backend app_backend
# Add a header to every request forwarded to backends
http-request set-header X-Real-IP %[src]
http-request set-header X-Request-ID %[uuid()]
# Remove sensitive headers before forwarding
http-request del-header X-Internal-Token
# Strip a fingerprinting header from every response sent to clients
http-response del-header X-Powered-By
http-response set-header Cache-Control "no-store" if { status 401 }
server app1 10.0.0.10:8080 check
server app2 10.0.0.11:8080 check
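Path rewriting follows the same pattern; a sketch that strips an /api prefix before forwarding (the prefix and backend name here are assumptions):

```
backend api_backend
    # Rewrite /api/v1/users to /v1/users before the request reaches the servers
    http-request replace-path ^/api(/.*)$ \1
    server api1 10.0.0.10:8080 check
```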
Stats Page
The built-in HAProxy stats page provides real-time connection metrics in a browser UI:
listen stats
bind *:8404
stats enable
stats uri /stats
stats refresh 10s
stats auth admin:changeme
stats show-legends
stats show-node
stats hide-version
Access it at http://your-server:8404/stats. Protect this port with a firewall rule:
sudo ufw allow from 10.0.0.0/8 to any port 8404
sudo ufw deny 8404
Alternatively, scope it to the management interface by binding to an internal IP instead of *.
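HAProxy can also expose Prometheus metrics through a built-in service (compiled in by default since 2.4), worth considering alongside the stats page; the bind address here is an assumption:

```
frontend prometheus
    bind 10.0.0.1:8405
    mode http
    http-request use-service prometheus-exporter if { path /metrics }
    no log
```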
Stick Tables for Session Persistence
Stick tables track client→server mappings in shared memory, providing session persistence without requiring application-level shared sessions:
backend app_backend
balance roundrobin
# Create a stick table keyed on source IP, expire entries after 30 minutes
stick-table type ip size 100k expire 30m
stick on src
server app1 10.0.0.10:8080 check
server app2 10.0.0.11:8080 check
server app3 10.0.0.12:8080 check
For cookie-based persistence (preferred for stateful web applications):
backend app_backend
balance roundrobin
cookie SERVERID insert indirect nocache
server app1 10.0.0.10:8080 cookie app1 check
server app2 10.0.0.11:8080 cookie app2 check
HAProxy inserts a SERVERID cookie on the first response and uses it to route subsequent requests from the same browser to the same backend server.
Rate Limiting with Stick Tables
HAProxy’s stick tables also power rate limiting without external dependencies:
frontend http_front
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/
# Stick table: track HTTP request rate per source IP
stick-table type ip size 200k expire 10s store http_req_rate(10s)
http-request track-sc0 src
# Deny if more than 100 requests in 10 seconds
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
default_backend app_backend
For stricter limits on specific paths (e.g., a login endpoint), give the counter its own table. A frontend can declare only one stick table of its own, so host the second table in a dedicated backend, track it with a separate sticky counter (sc1), and reference the table by name:
backend st_auth_rate
stick-table type ip size 50k expire 60s store http_req_rate(60s)
frontend http_front
http-request track-sc1 src table st_auth_rate if { path_beg /api/auth/ }
http-request deny deny_status 429 if { path_beg /api/auth/ } { sc_http_req_rate(1,st_auth_rate) gt 10 }
This limits each IP to 10 authentication requests per minute before returning HTTP 429.
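Trusted ranges can be exempted so internal traffic and health probes never consume the budget (the address ranges here are assumptions):

```
frontend http_front
    acl trusted src 10.0.0.0/8 127.0.0.0/8
    stick-table type ip size 200k expire 10s store http_req_rate(10s)
    http-request track-sc0 src if !trusted
    http-request deny deny_status 429 if !trusted { sc_http_req_rate(0) gt 100 }
    default_backend app_backend
```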
Logging with rsyslog
HAProxy logs to a syslog socket. Configure rsyslog to write HAProxy logs to a dedicated file:
Create /etc/rsyslog.d/49-haproxy.conf:
# Extra syslog socket inside HAProxy's chroot, so the chrooted process can reach it
$AddUnixListenSocket /var/lib/haproxy/dev/log
:programname, startswith, "haproxy" {
/var/log/haproxy.log
stop
}
Restart rsyslog and HAProxy:
sudo systemctl restart rsyslog
sudo systemctl restart haproxy
Custom log format capturing response time and backend server:
defaults
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %tsc %ac/%fc/%bc/%sc/%rc %{+Q}r"
Key format tokens: %ci (client IP), %b/%s (backend/server name), %TR (request time), %Tr (response time), %ST (status code), %B (bytes sent).
Tail and filter logs:
sudo tail -f /var/log/haproxy.log | grep "500\|502\|503"
HAProxy with Keepalived for VRRP Failover
keepalived runs VRRP (Virtual Router Redundancy Protocol) to assign a shared virtual IP between two HAProxy nodes. If the primary node fails, the standby takes over the virtual IP within one or two seconds.
Install keepalived
sudo apt install keepalived -y # Ubuntu
sudo dnf install keepalived -y # RHEL
Primary node — /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 110
advert_int 1
authentication {
auth_type PASS
auth_pass s3cur3p4ss
}
virtual_ipaddress {
192.168.1.100/24
}
track_script {
chk_haproxy
}
}
Standby node — /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass s3cur3p4ss
}
virtual_ipaddress {
192.168.1.100/24
}
track_script {
chk_haproxy
}
}
Enable and start
sudo systemctl enable --now keepalived
ip addr show eth0 | grep 192.168.1.100 # should appear on primary only
The vrrp_script checks every 2 seconds whether the HAProxy process is running. If HAProxy dies on the primary, its effective priority drops by 20 (110 → 90), so the standby (priority 100) wins the election and claims the virtual IP.
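Many cloud networks drop the multicast traffic VRRP uses by default; keepalived supports unicast peering instead. A sketch for the primary node, assuming node addresses 192.168.1.11 and 192.168.1.12:

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 110
    advert_int 1
    unicast_src_ip 192.168.1.11    # this node's own address (assumed)
    unicast_peer {
        192.168.1.12               # the peer HAProxy node (assumed)
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
}
```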
Load Balancer Comparison
| Feature | HAProxy | Nginx | Traefik | Envoy | AWS ALB |
|---|---|---|---|---|---|
| TCP load balancing | Yes (native) | Yes | Yes | Yes | Yes |
| HTTP/2 to backends | Yes (2.0+) | Yes | Yes | Yes | Yes |
| Active health checks | Yes (built-in) | Plus only | Yes | Yes | Yes |
| Stats/metrics UI | Built-in | stub_status | Dashboard | Admin API | CloudWatch |
| Rate limiting | Stick tables | limit_req | Middleware | Filters | WAF |
| Dynamic config | Runtime API | Reload | File watch | xDS API | Managed |
| SSL termination | Yes | Yes | Yes | Yes | Yes |
| VRRP/HA | Via keepalived | Via keepalived | Not built-in | Not built-in | Managed |
| Resource footprint | Very low | Low | Medium | High | Managed |
| Best for | TCP+HTTP HA | Web + cache | Containers | Service mesh | AWS-native |
HAProxy wins on raw TCP performance, granular health checks, and memory efficiency. Nginx adds response caching. Traefik and Envoy excel in dynamic container/mesh environments. AWS ALB is the zero-ops choice within the AWS ecosystem.
Production Configuration: Web Application Load Balancer
A complete, production-ready HAProxy configuration for a three-node web application behind HTTPS:
global
log /dev/log local0
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
user haproxy
group haproxy
daemon
maxconn 50000
nbthread 4
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:!aNULL:!MD5
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
tune.ssl.default-dh-param 2048
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor
option http-server-close
retries 3
timeout connect 5s
timeout client 30s
timeout server 30s
timeout http-keep-alive 10s
timeout check 5s
#--------------------------------------------------------------------
# Stats
#--------------------------------------------------------------------
listen stats
bind 10.0.0.1:8404
stats enable
stats uri /stats
stats refresh 15s
stats auth admin:changeme
stats show-legends
#--------------------------------------------------------------------
# HTTP → HTTPS redirect
#--------------------------------------------------------------------
frontend http_redirect
bind *:80
http-request redirect scheme https code 301
#--------------------------------------------------------------------
# HTTPS frontend
#--------------------------------------------------------------------
frontend https_front
bind *:443 ssl crt /etc/haproxy/certs/
http-request set-header X-Forwarded-Proto https
http-response set-header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
# Rate limiting: 200 req/10s per IP
stick-table type ip size 200k expire 10s store http_req_rate(10s)
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 200 }
# ACL routing
acl is_api path_beg /api/
use_backend api_backend if is_api
default_backend app_backend
#--------------------------------------------------------------------
# App backend
#--------------------------------------------------------------------
backend app_backend
balance leastconn
option httpchk GET /health HTTP/1.1\r\nHost:\ app.example.com
http-check expect status 200
cookie SERVERID insert indirect nocache
http-request set-header X-Real-IP %[src]
server app1 10.0.0.10:8080 cookie s1 weight 1 maxconn 1000 check inter 2s fall 3 rise 2
server app2 10.0.0.11:8080 cookie s2 weight 1 maxconn 1000 check inter 2s fall 3 rise 2
server app3 10.0.0.12:8080 cookie s3 weight 1 maxconn 1000 check inter 2s fall 3 rise 2
server app_backup 10.0.0.20:8080 cookie s4 backup check inter 5s fall 2 rise 1
#--------------------------------------------------------------------
# API backend (no session persistence needed)
#--------------------------------------------------------------------
backend api_backend
balance roundrobin
option httpchk GET /api/health HTTP/1.1\r\nHost:\ app.example.com
http-check expect status 200
http-request set-header X-Real-IP %[src]
server api1 10.0.0.10:8080 weight 1 maxconn 500 check inter 2s fall 3 rise 2
server api2 10.0.0.11:8080 weight 1 maxconn 500 check inter 2s fall 3 rise 2
server api3 10.0.0.12:8080 weight 1 maxconn 500 check inter 2s fall 3 rise 2
Validate and reload:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
Check the runtime API:
echo "show info" | sudo socat stdio /run/haproxy/admin.sock | grep -E "Name|Maxconn|CurrConns"
echo "show servers state" | sudo socat stdio /run/haproxy/admin.sock
Summary
HAProxy is the gold standard for TCP and HTTP load balancing in Linux environments. Key takeaways from this guide:
- Use maxconn, nbthread, and timeout tuning in the global and defaults sections before anything else
- Build ACLs in the frontend with path_beg, hdr(), and src matchers; route to named backends via use_backend
- Choose leastconn for HTTP APIs, roundrobin for uniform traffic, and source or stick tables for session persistence
- Always add check inter 2s fall 3 rise 2 to every server line and option httpchk to HTTP backends
- Terminate SSL in the frontend with a combined PEM file; put multiple certs in one directory for automatic SNI
- Use stick tables for both session persistence (cookie or IP-based) and rate limiting without external dependencies
- The built-in stats page at /stats gives you a real-time view of backend health and connection counts
- Pair HAProxy with keepalived VRRP for sub-second failover between two nodes using a shared virtual IP