Lab 12: Caching Strategies

Time: 60 minutes | Level: Architect | Docker: docker run -it --rm node:20-alpine sh

Caching is one of the highest-leverage performance optimizations available. This lab implements an LRU cache from scratch, covers Redis patterns with ioredis, and builds a consistent hashing ring for cache sharding.


Step 1: LRU Cache — Theory

LRU (Least Recently Used): evict the item that was accessed longest ago.

Data structure: Map + Doubly-Linked List

  • Map: O(1) key lookup

  • DLL: O(1) move-to-front (on access) and eviction (from tail)

HEAD ↔ [most recent] ↔ ... ↔ [least recent] ↔ TAIL
         ↑                                        ↑
     new access                               evict this

Step 2: LRU Cache Implementation

// file: lru-cache.js
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();

    // Sentinel nodes (dummy head/tail for simpler logic)
    this.head = { key: null, val: null, prev: null, next: null };
    this.tail = { key: null, val: null, prev: null, next: null };
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  _remove(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  _addToFront(node) {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    const node = this.map.get(key);
    this._remove(node);
    this._addToFront(node); // promote to most-recently-used
    return node.val;
  }

  put(key, val) {
    if (this.map.has(key)) {
      this._remove(this.map.get(key));
    } else if (this.map.size >= this.capacity) {
      // Evict LRU (tail.prev)
      const lru = this.tail.prev;
      this._remove(lru);
      this.map.delete(lru.key);
      console.log(`  Evicted: key="${lru.key}"`);
    }
    const node = { key, val, prev: null, next: null };
    this._addToFront(node);
    this.map.set(key, node);
  }

  size() { return this.map.size; }

  toArray() {
    const result = [];
    let cur = this.head.next;
    while (cur !== this.tail) { result.push(cur.key); cur = cur.next; }
    return result; // most → least recent
  }
}

// Demo
const cache = new LRUCache(3);
cache.put('a', 1);
cache.put('b', 2);
cache.put('c', 3);
cache.get('a');      // promote 'a' to front
cache.put('d', 4);   // evict 'b' (least recently used)

console.log('get b:', cache.get('b')); // -1 (evicted)
console.log('get a:', cache.get('a')); //  1
console.log('get d:', cache.get('d')); //  4
console.log('Order (most→least):', cache.toArray());

📸 Verified Output:

  Evicted: key="b"
get b: -1
get a: 1
get d: 4
Order (most→least): [ 'd', 'a', 'c' ]


Step 3: TTL-Aware LRU Cache
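As a starting point for this step, here is a minimal sketch of one possible TTL-aware LRU design (illustrative, not the lab's official solution; the class name `TTLCache` is an assumption). Instead of a hand-rolled doubly-linked list, it exploits the fact that JavaScript Maps iterate in insertion order: deleting and re-setting a key moves it to the "back" (most recent), so the first key in iteration order is the LRU candidate. Expiry is checked lazily on read. The `now` parameters default to `Date.now()` but can be injected for testing.

```javascript
// file: ttl-lru-cache.js (illustrative sketch)
class TTLCache {
  constructor(capacity, defaultTtlMs) {
    this.capacity = capacity;
    this.defaultTtlMs = defaultTtlMs;
    this.map = new Map(); // key -> { val, expiresAt }; insertion order = LRU order
  }

  get(key, now = Date.now()) {
    const entry = this.map.get(key);
    if (!entry) return -1;
    if (entry.expiresAt <= now) {  // lazy expiration on read
      this.map.delete(key);
      return -1;
    }
    this.map.delete(key);          // promote to most-recently-used
    this.map.set(key, entry);
    return entry.val;
  }

  put(key, val, ttlMs = this.defaultTtlMs, now = Date.now()) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least-recently-used key (first in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, { val, expiresAt: now + ttlMs });
  }
}

// Demo: an entry can expire even though capacity never forced an eviction
const ttlCache = new TTLCache(2, 50);
ttlCache.put('x', 1);
console.log(ttlCache.get('x'));                   // 1 (fresh)
console.log(ttlCache.get('x', Date.now() + 100)); // -1 (expired)
```

The Map-ordering trick trades the explicit O(1) linked-list surgery of Step 2 for much shorter code; `delete` + `set` on a Map is still amortized O(1).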


Step 4: Redis Patterns with ioredis
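One way to structure this step: write the caching helpers against the ioredis call shape (`await client.get(key)`, `await client.set(key, value, 'EX', ttlSeconds)`), then swap in a real client when a server is available. The sketch below is illustrative; `cacheSet`, `cacheGet`, and `makeStubClient` are names invented here, and the in-memory stub exists only so the file runs without a Redis server.

```javascript
// file: redis-patterns.js (illustrative sketch)
// With a real server you would use ioredis directly:
//   const Redis = require('ioredis');
//   const client = new Redis('redis://localhost:6379');

// Store a JSON-serializable value with a TTL (SET key val EX ttl)
async function cacheSet(client, key, value, ttlSeconds) {
  await client.set(key, JSON.stringify(value), 'EX', ttlSeconds);
}

// Redis returns nil (null in ioredis) on a miss; parse JSON on a hit
async function cacheGet(client, key) {
  const raw = await client.get(key);
  return raw === null ? null : JSON.parse(raw);
}

// In-memory stand-in with the same async get/set shape as ioredis
function makeStubClient(clock = () => Date.now()) {
  const store = new Map(); // key -> { value, expiresAt }
  return {
    async set(key, value, mode, ttlSeconds) {
      const expiresAt = mode === 'EX' ? clock() + ttlSeconds * 1000 : Infinity;
      store.set(key, { value, expiresAt });
      return 'OK';
    },
    async get(key) {
      const e = store.get(key);
      if (!e || e.expiresAt <= clock()) return null;
      return e.value;
    },
  };
}

(async () => {
  const client = makeStubClient();
  await cacheSet(client, 'user:1', { name: 'Ada' }, 60);
  console.log(await cacheGet(client, 'user:1')); // { name: 'Ada' }
  console.log(await cacheGet(client, 'user:2')); // null
})();
```

Keeping the helpers client-agnostic also makes the later pattern steps testable without network I/O.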


Step 5: Cache-Aside, Write-Through, Write-Behind Patterns
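The three write patterns can be sketched side by side with a plain Map standing in for both the cache and the database (all names below are illustrative): cache-aside loads lazily on a read miss; write-through pays a synchronous double write for read-after-write consistency; write-behind acknowledges immediately and flushes later, trading durability for throughput.

```javascript
// file: write-patterns.js (illustrative sketch; Maps stand in for cache/DB)
const cache = new Map();
const db = new Map(); // stand-in for the system of record

// Cache-aside (lazy loading): check cache, fall back to DB, populate cache
function readAside(key) {
  if (cache.has(key)) return cache.get(key); // hit
  const val = db.get(key);                   // miss -> load from DB
  if (val !== undefined) cache.set(key, val);
  return val;
}

// Write-through: DB and cache are written synchronously; subsequent reads
// are consistent, at the cost of both writes on the hot path
function writeThrough(key, val) {
  db.set(key, val);
  cache.set(key, val);
}

// Write-behind: write the cache now, queue the DB write for later.
// Fast, but queued writes are lost if the process dies before flush()
const dirty = [];
function writeBehind(key, val) {
  cache.set(key, val);
  dirty.push([key, val]);
}
function flush() {
  while (dirty.length) {
    const [key, val] = dirty.shift();
    db.set(key, val);
  }
}

// Demo
db.set('a', 1);
console.log(readAside('a')); // 1 (miss -> DB -> cache)
writeThrough('b', 2);
writeBehind('c', 3);
console.log(db.has('c'));    // false (not yet flushed)
flush();
console.log(db.get('c'));    // 3
```

In production the flush would run on a timer or queue consumer rather than an explicit call; the window between `writeBehind` and `flush` is exactly where the "eventual persistence" risk lives.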


Step 6: Stale-While-Revalidate
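A minimal stale-while-revalidate core can be sketched as follows (the `makeSWR` name and shape are assumptions, not a library API). Each entry carries a freshness deadline: a read within it is a plain hit; a read after it returns the stale value immediately and kicks off a background refresh so the next reader sees fresh data. The injectable `clock` exists purely to make expiry testable.

```javascript
// file: swr.js (illustrative sketch)
function makeSWR(fetcher, freshForMs, clock = () => Date.now()) {
  const entries = new Map(); // key -> { val, freshUntil, refreshing }

  async function get(key) {
    const e = entries.get(key);
    const now = clock();
    if (!e) { // cold miss: the caller must wait for the first fetch
      const val = await fetcher(key);
      entries.set(key, { val, freshUntil: now + freshForMs, refreshing: false });
      return val;
    }
    if (now >= e.freshUntil && !e.refreshing) {
      e.refreshing = true; // stale: refresh in the background, serve stale now
      fetcher(key).then((val) => {
        entries.set(key, { val, freshUntil: clock() + freshForMs, refreshing: false });
      });
    }
    return e.val; // fresh or stale, but always immediate
  }
  return { get };
}

// Demo: a fetcher whose result changes on every call
(async () => {
  let n = 0;
  const swr = makeSWR(async () => ++n, 1000, () => 0); // clock frozen -> always fresh
  console.log(await swr.get('k')); // 1 (cold fetch)
  console.log(await swr.get('k')); // 1 (fresh hit, no refetch)
})();
```

The `refreshing` flag is what prevents a stampede of concurrent refreshes for the same hot key; a production version would also handle fetcher errors by clearing it.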


Step 7: Consistent Hashing for Cache Sharding


Step 8: Capstone — Production Cache Layer
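One way the capstone pieces could compose is sketched below (illustrative; `CacheLayer`, `getOrLoad`, and the options shape are assumptions): a bounded, TTL-aware, cache-aside layer whose `getOrLoad` checks the local cache first and only calls the async loader (e.g. a DB query or Redis fetch) on a miss or expiry, while tracking hit-rate stats.

```javascript
// file: cache-layer.js (illustrative sketch)
class CacheLayer {
  constructor({ capacity = 1000, ttlMs = 30000, loader }) {
    this.capacity = capacity;
    this.ttlMs = ttlMs;
    this.loader = loader;  // async (key) => value, e.g. a DB query
    this.map = new Map();  // insertion order doubles as LRU order
    this.hits = 0;
    this.misses = 0;
  }

  async getOrLoad(key, now = Date.now()) {
    const e = this.map.get(key);
    if (e && e.expiresAt > now) {
      this.hits++;
      this.map.delete(key); // promote to most-recently-used
      this.map.set(key, e);
      return e.val;
    }
    this.misses++;
    const val = await this.loader(key); // cache-aside: load on miss/expiry
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      this.map.delete(this.map.keys().next().value); // evict LRU
    }
    this.map.set(key, { val, expiresAt: now + this.ttlMs });
    return val;
  }

  stats() {
    const total = this.hits + this.misses;
    return { hits: this.hits, misses: this.misses,
             hitRate: total ? this.hits / total : 0 };
  }
}
```

A production version would add the Step 6 stale-while-revalidate path and per-key request coalescing so concurrent misses on one key trigger a single loader call.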


Summary

Strategy           | Consistency | Performance | Use Case
-------------------|-------------|-------------|--------------------------------------------
LRU Cache          | Eventual    | Very high   | Hot data, session cache
Cache-Aside        | Eventual    | High        | General read cache
Write-Through      | Strong      | Medium      | Write-heavy, needs read consistency
Write-Behind       | Eventual    | Very high   | High write throughput, eventual persistence
SWR                | Eventual    | Very high   | APIs, dashboards, CDN content
Consistent Hashing | N/A         | High        | Distributed cache sharding
Redis pipeline     | N/A         | Very high   | Batch operations, reduce RTT
Redis MULTI/EXEC   | Atomic      | High        | Transactional updates
