Lab 16: Redis Caching Patterns

Time: 40 minutes | Level: Practitioner | DB: Redis 7

Caching is the most common Redis use case — it dramatically reduces database load and response latency. But naive caching creates subtle bugs. This lab covers the four canonical patterns and their trade-offs.


Step 1 — Setup and Baseline

docker run -d --name redis-lab redis:7
sleep 2
docker exec -it redis-lab redis-cli

FLUSHALL
PING
# PONG

Why Cache?

  • Database query: 5-50ms
  • Redis GET: 0.1-0.5ms
  • 10x-100x faster reads
  • Reduces database connection pressure


Step 2 — Cache-Aside (Lazy Loading)

The most common pattern. Application checks cache first; on miss, reads from database and populates cache.


Application code (Python pseudocode):

💡 Cache-aside means the application manages the cache. On cache miss, it reads from DB and writes to cache. On update, it invalidates the cache key (don't update cache — let it expire or delete it).
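The flow above can be sketched in Python. The dict-based cache and db, the TTL value, and names like get_user are illustrative stand-ins for a real redis-py client and database, not a definitive implementation:

```python
import time

# Stand-ins for a real Redis client and database; in production these
# would be redis-py calls (r.get / r.setex / r.delete) and SQL queries.
cache = {}                        # key -> (value, expires_at)
db = {"user:1": {"name": "Ada"}}  # hypothetical backing store
CACHE_TTL = 300                   # seconds

def cache_get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() > expires_at:
        del cache[key]            # lazily drop stale entries
        return None
    return value

def get_user(user_id):
    """Cache-aside read: check the cache first, fall back to the DB on a miss."""
    key = f"user:{user_id}"
    value = cache_get(key)
    if value is not None:
        return value              # cache hit
    value = db.get(key)           # cache miss: read from the database
    if value is not None:
        cache[key] = (value, time.monotonic() + CACHE_TTL)
    return value

def update_user(user_id, fields):
    """Cache-aside write: update the DB, then invalidate (not update) the key."""
    key = f"user:{user_id}"
    db[key] = {**db.get(key, {}), **fields}
    cache.pop(key, None)          # delete; the next read repopulates the cache

user = get_user(1)   # miss: reads the db, populates the cache
user = get_user(1)   # hit: served from the cache
```

Note that update_user deletes the key rather than writing the new value into the cache; this avoids a race where two concurrent updates leave the cache holding the older value.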


Step 3 — Write-Through: Sync Cache and DB

Always write to both cache and database simultaneously.
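A minimal sketch of that write path, again using plain dicts as hypothetical stand-ins for the cache and database:

```python
# Toy write-through layer: every write goes to both the backing store and
# the cache in the same call, so reads can always trust the cache.
cache = {}
db = {}

def write_through_set(key, value):
    db[key] = value     # write the database first (source of truth)
    cache[key] = value  # then the cache, keeping the two in sync

def read(key):
    # With write-through, the cache always holds the latest write,
    # so the DB fallback is rarely needed.
    return cache.get(key, db.get(key))

write_through_set("product:9", 42)
```

The cost is write latency: every write pays for both stores, even for keys that are never read back.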

Comparison:

Pattern       | Read                     | Write                  | Consistency | Use Case
--------------+--------------------------+------------------------+-------------+----------------
Cache-aside   | Check cache → DB on miss | Invalidate cache       | Eventual    | General reads
Write-through | Always cached            | Write cache + DB       | Strong      | Frequently read
Write-behind  | Always cached            | Write cache; async DB  | Eventual    | Write-heavy


Step 4 — TTL Management: EXPIRE, EXPIREAT, TTL

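The semantics to internalize: EXPIRE sets a relative TTL in seconds, EXPIREAT an absolute Unix timestamp, TTL reports seconds remaining (-1 for a key with no TTL, -2 for a missing key), and PERSIST removes a TTL. A toy in-memory model of the EXPIRE / TTL / PERSIST behavior (not real Redis; a sketch to make the return codes concrete):

```python
import time

class TTLModel:
    """Toy model of Redis key expiry (EXPIRE / TTL / PERSIST semantics)."""

    def __init__(self):
        self.data = {}         # key -> value
        self.expires = {}      # key -> absolute expiry timestamp

    def set(self, key, value):
        self.data[key] = value
        self.expires.pop(key, None)   # SET clears any previous TTL

    def expire(self, key, seconds):
        if key not in self.data:
            return 0                  # Redis returns 0 for a missing key
        self.expires[key] = time.monotonic() + seconds
        return 1

    def ttl(self, key):
        self._evict(key)
        if key not in self.data:
            return -2                 # Redis: -2 = key does not exist
        if key not in self.expires:
            return -1                 # Redis: -1 = key has no TTL
        return max(0, round(self.expires[key] - time.monotonic()))

    def persist(self, key):
        return 1 if self.expires.pop(key, None) is not None else 0

    def get(self, key):
        self._evict(key)
        return self.data.get(key)

    def _evict(self, key):
        exp = self.expires.get(key)
        if exp is not None and time.monotonic() > exp:
            del self.data[key]
            del self.expires[key]

r = TTLModel()
r.set("session:42", "alice")
r.expire("session:42", 60)
print(r.ttl("session:42"))   # ~60
r.persist("session:42")
print(r.ttl("session:42"))   # -1: key exists but has no TTL
```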


Step 5 — LRU Eviction Policy

When Redis memory is full, it evicts keys based on the configured policy.

Eviction policies:

Policy          | What Gets Evicted
----------------+--------------------------------------
noeviction      | Nothing; writes error out (default)
allkeys-lru     | Any key, least recently used
volatile-lru    | Keys with TTL, least recently used
allkeys-lfu     | Any key, least frequently used
volatile-lfu    | Keys with TTL, least frequently used
allkeys-random  | Random key
volatile-random | Random key among those with TTL
volatile-ttl    | Key with the smallest remaining TTL

💡 For a pure cache, use allkeys-lru — Redis will evict the least recently used key when memory fills up. For mixed use (cache + persistent data), use volatile-lru to protect non-TTL keys.
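In redis.conf this is the pair of directives `maxmemory` and `maxmemory-policy` (e.g. `maxmemory 100mb`, `maxmemory-policy allkeys-lru`). The allkeys-lru behavior itself can be sketched in a few lines of Python (a toy with a key count cap rather than a byte budget, purely to show the eviction order):

```python
from collections import OrderedDict

class LRUCache:
    """Toy allkeys-lru: evict the least recently used key at capacity."""

    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()   # oldest entry first

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # a read refreshes recency
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.maxkeys:
            self.data.popitem(last=False)  # drop the least recently used

cache = LRUCache(maxkeys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")           # touch "a" so "b" is now least recently used
cache.set("c", 3)        # capacity exceeded: "b" is evicted
print(list(cache.data))  # ['a', 'c']
```

Real Redis uses an approximated LRU (it samples keys rather than keeping a full recency list), but the observable effect is the same: cold keys go first.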


Step 6 — Cache Stampede Prevention with NX Lock

When a popular cache key expires, hundreds of requests simultaneously hit the database — the "thundering herd" problem.

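The standard fix is a rebuild lock using `SET key value NX EX seconds`: only the request that acquires the lock queries the database; everyone else backs off briefly and rereads the cache. A sketch of the idea with dicts modeling the cache and the NX lock (the 10-second lock TTL and key names are illustrative):

```python
import time

cache = {}    # key -> value
locks = {}    # lock_key -> expiry timestamp (models SET ... NX EX)
db_hits = 0

def set_nx_ex(key, ttl):
    """Acquire the lock only if absent (NX) with an expiry (EX)."""
    now = time.monotonic()
    if key in locks and locks[key] > now:
        return False              # someone else holds the lock
    locks[key] = now + ttl        # the EX ttl prevents a stuck lock
    return True

def expensive_db_query():
    global db_hits
    db_hits += 1
    return "fresh value"

def get_with_stampede_guard(key):
    value = cache.get(key)
    if value is not None:
        return value
    if set_nx_ex(f"lock:{key}", ttl=10):   # only one caller wins
        value = expensive_db_query()
        cache[key] = value
        return value
    time.sleep(0.01)              # losers back off instead of hitting the DB
    return cache.get(key)

for _ in range(5):                # five back-to-back misses on a hot key
    get_with_stampede_guard("hot:key")
print(db_hits)  # 1: only the lock winner queried the database
```

The lock's EX expiry matters: if the winning request crashes mid-rebuild, the lock evaporates on its own instead of blocking rebuilds forever.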


Step 7 — SCAN for Key Enumeration

Never use KEYS * in production: it is O(N) and blocks Redis's single thread for the entire scan. Use SCAN instead, which walks the keyspace incrementally with a cursor.

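The cursor protocol is what makes SCAN safe: each call returns a small batch plus a cursor for the next call, and a returned cursor of 0 means the iteration is complete. A toy Python model of that loop (the key names and batch logic are illustrative, not the real server internals):

```python
import fnmatch

# Hypothetical keyspace: 25 user keys plus one unrelated key.
keys = [f"user:{i}" for i in range(25)] + ["config:app"]

def scan(cursor, match="*", count=10):
    """Return (next_cursor, batch); next_cursor 0 ends the iteration."""
    batch = [k for k in keys[cursor:cursor + count] if fnmatch.fnmatch(k, match)]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0
    return next_cursor, batch

# Drive the cursor exactly the way redis-cli / redis-py drive SCAN:
cursor, found = 0, []
while True:
    cursor, batch = scan(cursor, match="user:*", count=10)
    found.extend(batch)
    if cursor == 0:
        break
print(len(found))  # 25: all user:* keys found, a few per call
```

With redis-py the same loop is a one-liner, `for key in r.scan_iter(match="user:*"):`, which handles the cursor internally.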


Step 8 — Capstone: Cache Layer for API
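Putting the pieces together: a small cache layer for an API endpoint that combines cache-aside reads, a TTL, and invalidation on writes. The dict-based cache and db, the endpoint names, and the 60-second TTL are all illustrative stand-ins, not a production design:

```python
import time

cache = {}                          # key -> (value, expires_at)
db = {"article:1": "Hello, Redis"}  # hypothetical backing store
TTL = 60

def api_get_article(article_id):
    """GET /articles/<id>: cache-aside with TTL; returns (body, HIT|MISS)."""
    key = f"article:{article_id}"
    entry = cache.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0], "HIT"
    value = db.get(key)
    if value is not None:
        cache[key] = (value, time.monotonic() + TTL)
    return value, "MISS"

def api_put_article(article_id, body):
    """PUT /articles/<id>: write the DB, then invalidate the cache key."""
    key = f"article:{article_id}"
    db[key] = body
    cache.pop(key, None)

body, status = api_get_article(1)   # MISS: populates the cache
body, status = api_get_article(1)   # HIT: served from the cache
api_put_article(1, "Updated")
body, status = api_get_article(1)   # MISS again after invalidation
print(body, status)                 # Updated MISS
```

A production version would add the Step 6 stampede lock around the DB read and jitter the TTL so that a burst of writes does not expire a whole set of hot keys at once.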


Summary

Pattern            | Consistency | Write Latency           | When to Use
-------------------+-------------+-------------------------+------------------------------------
Cache-aside        | Eventual    | Fast (skip cache)       | General read-heavy workloads
Write-through      | Strong      | Slow (write both)       | Read-heavy, consistency critical
Write-behind       | Eventual    | Fast (write cache only) | Write-heavy, tolerant of data loss
Read-through       | Strong      | Slow on first read      | Transparent cache layer
Stampede lock (NX) | N/A         | N/A                     | Expensive cache rebuilds
LRU eviction       | N/A         | N/A                     | Memory pressure management
