Knowledge

Caching Patterns in Production: Redis, CDN and Application-level

Multi-level caching strategies can reduce latency, infrastructure costs, and database load. Learn patterns, trade-offs, and implementation practices.

3/14/2026 · 9 min read

Last updated: 3/14/2026

Introduction: Caching as Architecture, Not Optimization

Caching is often treated as a late-stage optimization — something to add when the system gets slow. In 2026, mature caching is first-class architecture. Systems at scale operate with multiple cache layers: CDN at the edge, application-level in-memory cache, Redis as distributed cache, and even database query cache.

The difference between systems that scale and those that collapse under load isn't just processing power — it's the ability to serve requests from memory, not disk or network.

Each cache layer serves a specific purpose: CDN for global static content, application cache for frequently accessed data on the same instance, Redis for sharing between instances, and query cache to optimize database access.
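These layers are typically chained: a lookup falls through from the fastest tier to the slowest, and a hit backfills the tiers that missed. A minimal sketch of that fall-through, assuming a generic `CacheTier` interface (the names here are illustrative, not from a specific library):

```typescript
// Minimal tier interface; an in-memory map or a Redis client adapter can both satisfy it
interface CacheTier {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

class MultiLevelCache {
  // Tiers ordered fastest to slowest, e.g. [inMemoryCache, redisCache]
  constructor(private tiers: CacheTier[]) {}

  async get(key: string, loader: () => Promise<string>, ttl = 3600): Promise<string> {
    for (let i = 0; i < this.tiers.length; i++) {
      const hit = await this.tiers[i].get(key);
      if (hit !== null) {
        // Backfill the faster tiers that missed
        for (let j = 0; j < i; j++) await this.tiers[j].set(key, hit, ttl);
        return hit;
      }
    }
    // Every tier missed: go to the source of truth and populate all tiers
    const value = await loader();
    for (const tier of this.tiers) await tier.set(key, value, ttl);
    return value;
  }
}
```

The key property is that the database loader runs only when every tier misses; repeated reads are absorbed by the fastest tier that has the value.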

Fundamental Caching Patterns

Cache-aside (Lazy loading)

The most common pattern, where the application loads data into cache on demand:

interface CacheAsideService {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, ttl?: number): Promise<void>;
  // Atomic "set if absent" with a TTL; returns true if the key was set (used for locking below)
  setIfNotExists(key: string, value: string, ttl: number): Promise<boolean>;
  delete(key: string): Promise<void>;
}

async function getUser(cache: CacheAsideService, db: Database, userId: string): Promise<User> {
  // 1. Try to get from cache
  const cached = await cache.get<User>(`user:${userId}`);

  if (cached) {
    return cached; // Cache hit
  }

  // 2. Cache miss: fetch from database
  const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);

  // 3. Store in cache for future requests
  await cache.set(`user:${userId}`, user, 3600); // 1 hour TTL

  return user;
}

Advantages:

  • Only accessed data gets cached
  • Simple implementation
  • Cache is self-populating

Disadvantages:

  • First request after cache invalidation is always a miss
  • Cache stampede possible when many clients try to populate the same item simultaneously

Cache stampede solution:

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function getUserWithLock(cache: CacheAsideService, db: Database, userId: string): Promise<User> {
  const cached = await cache.get<User>(`user:${userId}`);

  if (cached) return cached;

  // Acquire lock to avoid cache stampede
  const lockKey = `lock:user:${userId}`;
  const lockAcquired = await cache.setIfNotExists(lockKey, 'locked', 30);

  if (lockAcquired) {
    try {
      // Only lock holder populates cache
      const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
      await cache.set(`user:${userId}`, user, 3600);
      return user;
    } finally {
      await cache.delete(lockKey);
    }
  } else {
    // Wait and retry cache lookup
    await sleep(100);
    return getUserWithLock(cache, db, userId);
  }
}

Write-through

Data is written to both cache and persistent storage simultaneously:

async function updateUser(cache: CacheAsideService, db: Database, userId: string, data: Partial<User>): Promise<User> {
  // Update database (column list elided for brevity)
  const updated = await db.query(
    'UPDATE users SET ... WHERE id = $1 RETURNING *',
    [userId /* plus the values from `data` */]
  );

  // Update cache immediately
  await cache.set(`user:${userId}`, updated, 3600);

  return updated;
}

Advantages:

  • Cache always consistent with database
  • Subsequent cache hits always return updated data
  • Simple to implement

Disadvantages:

  • Write operations are slower (two writes)
  • Writes that fail in database but succeed in cache cause inconsistency

When to use: Frequently read, occasionally written data (profiles, configs, lookup tables).

Write-behind

Data is written to cache immediately and to database asynchronously:

class WriteBehindService {
  constructor(
    private cache: CacheAsideService,
    private db: Database,
    private writeQueue: Queue<WriteOperation>
  ) {
    // Drain the queue asynchronously, off the request path
    this.writeQueue.process(async (op) => {
      await this.db.query(
        'INSERT INTO data_store (key, value) VALUES ($1, $2)',
        [op.key, op.value]
      );
    });
  }

  async write(key: string, value: any): Promise<void> {
    // Synchronous cache write
    await this.cache.set(key, value);

    // Enqueue the database write for later persistence
    await this.writeQueue.add({ key, value, timestamp: Date.now() });
  }
}

Advantages:

  • Writes are extremely fast
  • High write throughput
  • Batches of writes can be consolidated

Disadvantages:

  • Risk of data loss if cache fails before persisting
  • Implementation complexity increases significantly
  • Need durable persistence mechanism

When to use: Logs, analytics, counters, non-critical data that can tolerate temporary loss.
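The consolidation advantage mentioned above can be sketched with a small in-process buffer that keeps only the latest write per key between flushes, so N overwrites of the same counter become a single database write (the `BatchedWriter` name and flush callback are illustrative assumptions):

```typescript
// Buffers writes and, on flush, hands over only the latest value per key
class BatchedWriter {
  private pending = new Map<string, any>();

  constructor(private flush: (batch: Map<string, any>) => Promise<void>) {}

  write(key: string, value: any): void {
    // Later writes to the same key overwrite earlier ones in the buffer
    this.pending.set(key, value);
  }

  async flushNow(): Promise<void> {
    if (this.pending.size === 0) return;
    const batch = this.pending;
    this.pending = new Map(); // new writes accumulate while the flush runs
    await this.flush(batch);  // one bulk statement instead of N single-row writes
  }
}
```

In production, `flushNow` would run on a timer or when the buffer reaches a size threshold, and the flush callback would issue a bulk upsert.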

Cache Invalidation Strategies

Time-based expiration (TTL)

The simplest form of invalidation:

// TTLs matched to how often each class of data changes
const CACHE_TTLS = {
  static: 86400,   // 24h: rarely changing content
  user: 3600,      // 1h: user profiles change occasionally
  session: 1800,   // 30m: sessions expire naturally
  realtime: 60,    // 1m: data that needs to be near real-time
  volatile: 10,    // 10s: frequently changing data
};

Disadvantage: stale data may be served until the TTL expires.
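A common refinement is to add random jitter to TTLs so that keys written at the same moment do not all expire at the same moment, which would send a burst of misses to the database (a small sketch; the helper name is an assumption):

```typescript
// Spreads expirations by up to ±spread (default ±10%) around the base TTL,
// so co-written keys expire at different times instead of all at once
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  const delta = baseSeconds * spread;
  return Math.round(baseSeconds + (Math.random() * 2 - 1) * delta);
}
```

Usage: `cache.set(key, value, jitteredTtl(CACHE_TTLS.user))` instead of passing the raw TTL.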

Event-based invalidation

Invalidate cache when data changes:

class CacheInvalidator {
  async invalidateUser(userId: string): Promise<void> {
    // Direct cache invalidation
    await this.cache.delete(`user:${userId}`);

    // Invalidate derived caches
    await this.cache.deletePattern(`user:${userId}:*`);

    // Emit event for other services
    await this.eventBus.publish('user.invalidated', { userId });
  }
}

Challenge: Operation order must be consistent:

// WRONG: a concurrent read can repopulate the cache with stale data
// between the invalidation and the database write
async function updateUserData(userId: string, data: any) {
  await cache.invalidate(userId);
  await db.update(userId, data);
}

// CORRECT: persist first, then invalidate
async function updateUserData(userId: string, data: any) {
  await db.update(userId, data);
  await cache.invalidate(userId);
}

Version-based caching

Use versions to avoid race conditions:

interface VersionedCache {
  getCurrentVersion(key: string): Promise<number>;
  get<T>(key: string, version: number): Promise<T | null>;
  set<T>(key: string, version: number, value: T): Promise<void>;
  incrementVersion(key: string): Promise<number>;
}

async function getUser(cache: VersionedCache, db: Database, userId: string): Promise<User> {
  const currentVersion = await cache.getCurrentVersion(`user:${userId}`);
  const cached = await cache.get(`user:${userId}`, currentVersion);

  if (cached) return cached;

  const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
  await cache.set(`user:${userId}`, currentVersion, user);

  return user;
}

async function invalidateUser(cache: VersionedCache, userId: string): Promise<void> {
  await cache.incrementVersion(`user:${userId}`);
}
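One way to realize this interface is to keep a version counter per logical key and embed the version in the physical cache key, so superseded entries simply age out via their TTL. A sketch against a minimal subset of Redis commands (the `RedisLike` interface and key naming scheme are assumptions, typed narrowly so the example stays self-contained):

```typescript
// Minimal subset of the Redis commands used (shapes match ioredis)
interface RedisLike {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<unknown>;
  incr(key: string): Promise<number>;
}

class RedisVersionedCache {
  constructor(private redis: RedisLike) {}

  async getCurrentVersion(key: string): Promise<number> {
    const v = await this.redis.get(`version:${key}`);
    return v ? parseInt(v, 10) : 0;
  }

  async get<T>(key: string, version: number): Promise<T | null> {
    const raw = await this.redis.get(`${key}:v${version}`);
    return raw ? JSON.parse(raw) : null;
  }

  async set<T>(key: string, version: number, value: T): Promise<void> {
    // Versioned entries keep a TTL so superseded versions expire on their own
    await this.redis.setex(`${key}:v${version}`, 3600, JSON.stringify(value));
  }

  async incrementVersion(key: string): Promise<number> {
    // INCR is atomic, so concurrent invalidations never produce the same version
    return this.redis.incr(`version:${key}`);
  }
}
```

Incrementing the version makes all readers look at a fresh key, sidestepping the delete-then-repopulate race entirely.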

Multi-level Caching Architecture

Level 1: CDN (Content Delivery Network)

For static and semi-static content:

┌──────────────────────────────────────────────────────────┐
│                        CDN LAYER                         │
├──────────────────────────────────────────────────────────┤
│                                                          │
│  Browser → Edge Location (CDN) → Origin                  │
│                                                          │
│  Cached: CSS, JS, Images, Fonts, API Responses           │
│  TTL: 1h - 24h (configurable per resource)               │
│  Invalidation: Manual purge or versioned URLs            │
│                                                          │
└──────────────────────────────────────────────────────────┘

Best practices:

  • Version asset URLs (app.v2.js instead of app.js)
  • Configure appropriate cache headers
  • Use strategic cache key purging for urgent updates

# Nginx config for API response caching
location /api/public/ {
  proxy_pass http://backend;
  proxy_cache api_cache;
  proxy_cache_valid 200 60m;  # Cache 200 responses for 60 minutes
  proxy_cache_bypass $http_cache_control;  # Respect client no-cache
  add_header X-Cache-Status $upstream_cache_status;
}
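On the origin side, the cache headers that drive both CDN and browser behavior can be kept in one place. A sketch of such a mapping, assuming three illustrative resource classes (the function name and class names are not from any framework):

```typescript
// Maps a resource class to the Cache-Control header shared caches and browsers will honor
function cacheControlFor(kind: 'immutable-asset' | 'public-api' | 'private'): string {
  switch (kind) {
    case 'immutable-asset':
      // Versioned URLs (app.v2.js) never change content, so cache aggressively
      return 'public, max-age=31536000, immutable';
    case 'public-api':
      // Short edge TTL; stale-while-revalidate hides the refresh latency
      return 'public, max-age=60, stale-while-revalidate=300';
    case 'private':
      // Per-user responses must never land in shared caches
      return 'private, no-store';
  }
}
```

A response handler would set this as `res.setHeader('Cache-Control', cacheControlFor('public-api'))` or the equivalent in its framework.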

Level 2: Application-level cache (in-memory)

Local cache on the same application instance:

class ApplicationCache {
  private cache = new Map<string, { value: any; expires: number }>();

  get<T>(key: string): T | null {
    const entry = this.cache.get(key);

    if (!entry) return null;
    if (Date.now() > entry.expires) {
      this.cache.delete(key);
      return null;
    }

    return entry.value as T;
  }

  set<T>(key: string, value: T, ttlSeconds: number): void {
    this.cache.set(key, {
      value,
      expires: Date.now() + ttlSeconds * 1000,
    });
  }

  delete(key: string): void {
    this.cache.delete(key);
  }
}

Advantages:

  • Extremely fast (no network latency)
  • No additional infrastructure
  • Ideal for frequently accessed data on same instance

Disadvantages:

  • Data not shared between instances
  • Each instance populates its cache, wasting resources
  • Cache doesn't persist between restarts

When to use: Configurations, small lookup tables, computationally expensive results.
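One caveat with the unbounded Map above: nothing evicts entries before they expire, so a large key space can exhaust instance memory. A common refinement is a size cap with least-recently-used eviction; this sketch exploits the fact that a JavaScript Map iterates in insertion order:

```typescript
class LruCache<T> {
  private cache = new Map<string, T>();

  constructor(private maxSize: number) {}

  get(key: string): T | undefined {
    const value = this.cache.get(key);
    if (value !== undefined) {
      // Re-insert to mark as most recently used (Map preserves insertion order)
      this.cache.delete(key);
      this.cache.set(key, value);
    }
    return value;
  }

  set(key: string, value: T): void {
    this.cache.delete(key);
    this.cache.set(key, value);
    if (this.cache.size > this.maxSize) {
      // The first key in iteration order is the least recently used
      const oldest = this.cache.keys().next().value as string;
      this.cache.delete(oldest);
    }
  }
}
```

Combining this with the TTL check from `ApplicationCache` gives a bounded local cache; libraries such as `lru-cache` provide a production-hardened version of the same idea.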

Level 3: Distributed cache (Redis)

Cache shared across all instances:

import { Redis } from 'ioredis';

class RedisCache {
  constructor(private redis: Redis) {}

  async get<T>(key: string): Promise<T | null> {
    const value = await this.redis.get(key);
    return value ? JSON.parse(value) : null;
  }

  async set<T>(key: string, value: T, ttl?: number): Promise<void> {
    const serialized = JSON.stringify(value);
    if (ttl) {
      await this.redis.setex(key, ttl, serialized);
    } else {
      await this.redis.set(key, serialized);
    }
  }

  async delete(key: string): Promise<void> {
    await this.redis.del(key);
  }

  async deletePattern(pattern: string): Promise<void> {
    // NOTE: KEYS blocks Redis while it walks the whole keyspace;
    // prefer SCAN (e.g. ioredis scanStream) on large production datasets
    const keys = await this.redis.keys(pattern);
    if (keys.length > 0) {
      await this.redis.del(...keys);
    }
  }
}

Advantages:

  • Shared cache across instances
  • Configurable persistence
  • Support for advanced data structures (sets, sorted sets, streams)
  • Pub/sub for distributed invalidation

Disadvantages:

  • Added network latency
  • SPOF if not configured for high availability
  • Additional operational cost

When to use: Frequently accessed data, user profiles, sessions, expensive query results.
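The pub/sub advantage listed above is what makes per-instance local caches viable alongside Redis: when one instance invalidates a key, it broadcasts the key so every other instance drops its local copy. A sketch against a minimal pub/sub surface (the `PubSub` interface, channel name, and class name are assumptions; with ioredis this takes two connections, one subscriber and one publisher):

```typescript
// Minimal pub/sub surface; a Redis client adapter can satisfy it
interface PubSub {
  publish(channel: string, message: string): Promise<unknown>;
  subscribe(channel: string, onMessage: (message: string) => void): void;
}

// Keeps per-instance local caches coherent by broadcasting invalidations
class DistributedInvalidator {
  constructor(
    private local: Map<string, unknown>,
    private bus: PubSub,
    private channel: string = 'cache-invalidation' // channel name is an assumption
  ) {
    // Drop the local copy whenever any instance announces an invalidation
    bus.subscribe(channel, (key) => this.local.delete(key));
  }

  async invalidate(key: string): Promise<void> {
    this.local.delete(key);                    // remove our own copy immediately
    await this.bus.publish(this.channel, key); // notify every other instance
  }
}
```

Pub/sub delivery is fire-and-forget, so instances that are briefly disconnected can miss a message; pairing this with short local TTLs bounds how long a missed invalidation can serve stale data.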

Level 4: Database query cache

Cache of queries in the database itself:

-- PostgreSQL buffer tuning (Postgres has no query-result cache; these
-- settings keep hot pages in memory. shared_buffers belongs in
-- postgresql.conf and requires a server restart)
SET work_mem = '256MB';
SET shared_buffers = '2GB';
SET effective_cache_size = '8GB';

-- Materialized views for complex queries
CREATE MATERIALIZED VIEW user_stats AS
SELECT user_id, COUNT(*) as total_orders, SUM(amount) as total_spent
FROM orders
GROUP BY user_id;

-- Refresh at regular intervals
REFRESH MATERIALIZED VIEW CONCURRENTLY user_stats;

Advanced Caching Patterns

Read-through cache

Cache that automatically populates on miss:

class ReadThroughCache {
  constructor(
    private cache: RedisCache,
    private loader: (key: string) => Promise<any>
  ) {}

  async get(key: string, ttl: number = 3600): Promise<any> {
    let value = await this.cache.get(key);

    if (!value) {
      value = await this.loader(key);
      await this.cache.set(key, value, ttl);
    }

    return value;
  }
}

const userCache = new ReadThroughCache(redisCache, async (userId) => {
  return await db.query('SELECT * FROM users WHERE id = $1', [userId]);
});

Refresh-ahead (proactive loading)

Refresh cache before expiration:

class RefreshAheadCache {
  constructor(
    // `ttl(key)` returns the remaining seconds (Redis TTL command)
    private cache: RedisCache & { ttl(key: string): Promise<number> },
    private loader: (key: string) => Promise<any>
  ) {}

  async get(key: string, ttl: number): Promise<any> {
    let value = await this.cache.get(key);

    if (!value) {
      // Cache miss: load from loader
      value = await this.loader(key);
      await this.cache.set(key, value, ttl);
    } else {
      // Cache hit: check if near expiration
      const ttlRemaining = await this.cache.ttl(key);
      const refreshThreshold = ttl * 0.1; // Refresh when less than 10% of the TTL remains

      if (ttlRemaining < refreshThreshold) {
        // Fire-and-forget async refresh; the caller still gets the cached value
        void this.refreshInBackground(key, ttl);
      }
    }

    return value;
  }

  private async refreshInBackground(key: string, ttl: number): Promise<void> {
    const value = await this.loader(key);
    await this.cache.set(key, value, ttl);
  }
}

Cache warming

Proactively populate cache during startup or low-demand periods:

class CacheWarmer {
  async warmUserCache(userIds: string[]): Promise<void> {
    for (const userId of userIds) {
      const user = await this.db.query('SELECT * FROM users WHERE id = $1', [userId]);
      await this.cache.set(`user:${userId}`, user, 3600);
    }
  }

  // Execute during application startup
  async onStartup(): Promise<void> {
    const activeUsers = await this.db.query(
      "SELECT id FROM users WHERE last_active > NOW() - INTERVAL '7 days'"
    );
    await this.warmUserCache(activeUsers.map(u => u.id));
  }
}

Metrics and Monitoring

Essential metrics

interface CacheMetrics {
  // Hit/miss rates
  hitRate: number;           // cache_hits / (cache_hits + cache_misses)
  missRate: number;          // cache_misses / (cache_hits + cache_misses)

  // Latency
  avgHitLatency: number;     // Average response time on cache hit
  avgMissLatency: number;    // Average response time on cache miss

  // Evictions and size
  evictions: number;         // Items evicted by TTL or memory
  memoryUsage: number;      // Memory used by cache
  itemCount: number;         // Number of items in cache
}

Metrics implementation

class InstrumentedCache {
  private metrics = {
    hits: 0,
    misses: 0,
    evictions: 0,
  };

  constructor(
    private cache: RedisCache,
    // e.g. a StatsD or Prometheus histogram recorder
    private histogram: (name: string, value: number) => void
  ) {}

  async get<T>(key: string): Promise<T | null> {
    const start = Date.now();
    const value = await this.cache.get<T>(key);
    const latency = Date.now() - start;

    if (value) {
      this.metrics.hits++;
      this.histogram('cache.hit.latency', latency);
    } else {
      this.metrics.misses++;
      this.histogram('cache.miss.latency', latency);
    }

    return value;
  }

  getMetrics(): CacheMetrics {
    const total = this.metrics.hits + this.metrics.misses;
    return {
      hitRate: this.metrics.hits / total,
      missRate: this.metrics.misses / total,
      evictions: this.metrics.evictions,
      // ...
    };
  }
}

Production targets:

  • Hit rate > 80% for frequently accessed data
  • Cache hit latency < 5ms
  • More than 90% of evictions caused by TTL expiry (frequent memory-pressure evictions signal an undersized cache)

30-day Implementation Plan

Week 1: Identify caching opportunities

  • Map most accessed endpoints
  • Identify slowest queries
  • Classify data by update profile

Week 2: Implement cache-aside on critical endpoints

  • Add Redis as distributed cache
  • Implement application cache for local data
  • Configure CDN for static content

Week 3: Refine invalidation strategies

  • Implement event-based invalidation
  • Add versioning to avoid race conditions
  • Configure cache warming for critical data

Week 4: Monitor and optimize

  • Implement cache metrics
  • Adjust TTLs based on access patterns
  • Adjust cache size based on utilization

Does your application suffer from high latency or infrastructure costs? Talk to Imperialis specialists about multi-level caching strategies, performance architecture, and cost optimization at scale.
