Lab 02: HAProxy — Load Balancing
Time: 45 minutes | Level: Architect | Docker: docker run -it --rm --privileged ubuntu:22.04 bash
Overview
HAProxy (High Availability Proxy) is the de facto standard for TCP/HTTP load balancing on Linux. It powers major cloud platforms and handles millions of connections per second. In this lab you will install HAProxy, master its configuration syntax, deploy multiple load balancing algorithms, configure health checks, enable ACLs, and validate a working load-balanced setup.
Learning Objectives:
Install and configure HAProxy 2.4
Understand frontend/backend/listen/defaults sections
Apply balance algorithms: roundrobin, leastconn, source
Configure health checks with `check inter rise fall`
Use ACLs for path-based routing
Enable the HAProxy stats page
Understand SSL termination architecture
Step 1: Install HAProxy
apt-get update
apt-get install -y haproxy curl python3
Verify installation:
Examine the default configuration:
💡 Tip: HAProxy's `chroot /var/lib/haproxy` sandboxes the process for security. The `daemon` keyword makes it run in the background. In Docker or containers, you may want to remove `daemon` and run in foreground mode.
Step 2: HAProxy Configuration Structure
HAProxy configuration is built from a small set of section types: `global` (process-wide settings), `defaults` (parameters inherited by the sections that follow), `frontend` (where connections are accepted), `backend` (where traffic is sent), and `listen` (a frontend and backend combined in one section).
Configuration hierarchy:
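A minimal skeleton showing how the sections fit together (names, ports, and addresses here are placeholders, not the lab's final config):

```
global
    log stdout format raw local0
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 127.0.0.1:8001 check
```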
Step 3: Start Backend Servers
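A quick way to stand up throwaway backends is Python's built-in HTTP server. Ports 8001-8003 are an assumption; adjust them to match your haproxy.cfg:

```shell
# Start three trivial backends, each serving a page that names its port
for port in 8001 8002 8003; do
  mkdir -p /tmp/web$port
  echo "Backend on port $port" > /tmp/web$port/index.html
  (cd /tmp/web$port && python3 -m http.server "$port" >/dev/null 2>&1 &)
done
sleep 1
curl -s http://127.0.0.1:8001/   # → Backend on port 8001
```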
Step 4: Write a Production HAProxy Configuration
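One possible `/etc/haproxy/haproxy.cfg` for this lab. This is a sketch that assumes the three Python backends from Step 3 on ports 8001-8003; tune `maxconn` and the timeouts for your workload:

```
global
    maxconn 4096
    log stdout format raw local0

defaults
    mode http
    log global
    option httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_front
    bind *:80
    acl is_api path_beg /api
    use_backend api_pool if is_api
    default_backend web_pool

backend web_pool
    balance roundrobin
    option httpchk GET /
    server web1 127.0.0.1:8001 check inter 2s rise 2 fall 3
    server web2 127.0.0.1:8002 check inter 2s rise 2 fall 3
    server web3 127.0.0.1:8003 check inter 2s rise 2 fall 3

backend api_pool
    balance leastconn
    server api1 127.0.0.1:8003 check

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 5s
```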
💡 Tip: `check inter 2s rise 2 fall 3` means: check every 2 seconds, mark the server UP after 2 consecutive successes, and mark it DOWN after 3 consecutive failures. Tune `fall` conservatively; too low a value causes flapping.
Step 5: Validate and Start HAProxy
💡 Tip: Always run `haproxy -c -f /etc/haproxy/haproxy.cfg` before applying a new configuration. In production, use a graceful reload: `haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)` to avoid dropping existing connections.
Step 6: Test Load Balancing
Step 7: Load Balancing Algorithms & SSL Termination
Balance Algorithms Comparison:
| Algorithm | Best for | Sticky? | Behavior |
|---|---|---|---|
| `roundrobin` | Stateless apps | No | Equal distribution, weight-aware |
| `leastconn` | Long-lived connections (DB, WebSocket) | No | Routes to server with fewest active connections |
| `source` | Session-dependent apps (legacy) | Yes | IP hash: same client → same server |
| `uri` | Caching proxies | No | Same URL → same backend server |
| `hdr(name)` | Header-based routing | Varies | Routes based on HTTP header value |
| `random` | General purpose | No | Random weighted selection |
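Switching algorithms is a one-line change per backend. For example (backend names and addresses are illustrative):

```
backend ws_pool
    balance source        # same client IP keeps hitting the same server
    server ws1 127.0.0.1:8001 check
    server ws2 127.0.0.1:8002 check

backend cache_pool
    balance uri           # hash the URI so each URL maps to one server
    server c1 127.0.0.1:8003 check
```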
SSL Termination Architecture:
💡 Tip: With SSL termination, backends receive plain HTTP. Add `option forwardfor` and set `X-Forwarded-Proto: https` so application code knows the original protocol. Use TCP-mode passthrough (`mode tcp`) instead if you need end-to-end encryption; HAProxy cannot inspect encrypted traffic in passthrough mode.
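A termination frontend might look like this (the certificate path is a placeholder; the `.pem` file must contain the certificate plus its private key):

```
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    default_backend web_pool
```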
Step 8: Capstone — Production Load Balancer Design
Scenario: Design an HAProxy configuration for a microservices platform with:
Main web app (port 80 → 3 backend servers)
REST API (path `/api/*` → dedicated pool)
WebSocket endpoint (`/ws/*` → sticky backend)
Admin interface (source IP restricted)
Stats monitoring
Rate limiting headers
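A sketch of one possible answer. The addresses, ports, and admin subnet below are assumptions to adapt, and stick-table rate limiting is left as an exercise:

```
frontend main
    bind *:80
    acl is_api        path_beg /api/
    acl is_ws         path_beg /ws/
    acl is_admin_path path_beg /admin
    acl is_admin_src  src 10.0.0.0/8
    http-request deny if is_admin_path !is_admin_src
    use_backend api_pool if is_api
    use_backend ws_pool  if is_ws
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 127.0.0.1:8001 check weight 100
    server web2 127.0.0.1:8002 check weight 100
    server web3 127.0.0.1:8003 check weight 50

backend api_pool
    balance leastconn
    server api1 127.0.0.1:8010 check

backend ws_pool
    balance source         # sticky: same client → same WebSocket server
    timeout tunnel 1h      # keep long-lived WebSocket connections open
    server ws1 127.0.0.1:8020 check

listen stats
    bind *:8404
    stats enable
    stats uri /stats
```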
💡 Tip: HAProxy's `weight` parameter (0-256) adjusts server capacity. A server with `weight 50` gets half the traffic of one with `weight 100`. Set `weight 0` on a server to gracefully drain it without removing it from the config.
Summary
| Concept | Directive | Purpose |
|---|---|---|
| Frontend binding | `bind *:80` | Where HAProxy listens |
| Default backend | `default_backend` | Catch-all backend |
| Conditional routing | `use_backend ... if` | ACL-based routing |
| Path ACL | `acl x path_beg /api` | Match URL path prefix |
| Round robin | `balance roundrobin` | Equal distribution |
| Least connections | `balance leastconn` | Long-lived connections |
| Source hash | `balance source` | IP-based sticky sessions |
| Health checks | `check inter 2s rise 2 fall 3` | Active server health |
| Stats page | `stats enable` + `stats uri` | Monitoring UI |
| SSL termination | `bind *:443 ssl crt` | HTTPS offloading |
| Config validation | `haproxy -c -f` | Syntax check before reload |
| Graceful reload | `haproxy -sf $(pidof haproxy)` | Zero-downtime config update |