TL;DR
- Scenario: High concurrency read-heavy business, database can’t handle it, need to improve throughput and stability
- Conclusion: Local cache for ultimate read performance, distributed cache for sharing and scaling, multi-level cache balances consistency and cost
- Output: Comparable version matrix and error quick reference
Version Matrix
| Component/Capability | Version/Year | Verified | Description |
|---|---|---|---|
| Local cache: Guava Cache | 32.x (2025) | Yes | Suitable for high-frequency read, low-change data |
| Local cache: Ehcache | 3.x (2025) | No | Overview only |
| Distributed cache: Redis | 7.2 (2024-2025) | Yes | Covers cache-aside/write-back patterns and distributed locks |
| Session: Spring Session + Redis | 3.x (2025) | Partial | Solution mentioned |
| Distributed lock implementation | SETNX + expiry | Yes | For stronger guarantees, evaluate Redisson |
Cache Use Cases
Database Storage Optimization Solutions
- Database sharding: When a single table exceeds 5 million records, query performance significantly degrades
- Read-write separation: Master handles writes, multiple slaves share read requests
Key Role of Cache System
- Reduce database pressure: Hot data stored in Redis and other in-memory databases, TPS can increase from 2000 to 50000+
- Typical cache strategies: Cache Aside, Read/Write Through, Write Behind
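The Cache Aside strategy listed above can be sketched in a few lines. In this minimal, self-contained sketch two `ConcurrentHashMap`s stand in for Redis and the database (both stand-ins are hypothetical; a real implementation would call a Redis client and a DAO):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal Cache Aside sketch: read-through on miss, invalidate on write.
// The two maps are stand-ins for Redis and the database.
public class CacheAsideDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> db = new ConcurrentHashMap<>();

    static String read(String key) {
        String v = cache.get(key);            // 1. try the cache first
        if (v == null) {
            v = db.get(key);                  // 2. on miss, load from the database
            if (v != null) cache.put(key, v); // 3. populate the cache for later reads
        }
        return v;
    }

    static void write(String key, String value) {
        db.put(key, value);                   // 1. update the database
        cache.remove(key);                    // 2. invalidate (not update) the cache entry
    }

    public static void main(String[] args) {
        db.put("user:1", "Alice");
        System.out.println(read("user:1"));   // miss -> loaded from db
        write("user:1", "Bob");               // db updated, cache invalidated
        System.out.println(read("user:1"));   // miss again -> reads the new value
    }
}
```

Note the write path invalidates rather than updates the cached entry; updating it instead risks writing a stale value when two writers race.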
Performance Comparison Data
- MySQL single-machine QPS: ~2000-4000
- Redis single-machine QPS: ~100,000
- Cache hit rate recommendation: maintain between 80%-95%
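As a sanity check on the 80%-95% recommendation: hit rate is simply hits / (hits + misses). A sketch with hypothetical counter values:

```java
// Hit rate = hits / (hits + misses); counters here are hypothetical.
public class HitRateDemo {
    public static void main(String[] args) {
        long hits = 9_000, misses = 1_000;
        double hitRate = (double) hits / (hits + misses);
        System.out.printf("hit rate = %.0f%%%n", hitRate * 100); // within the 80%-95% band
    }
}
```

Guava's `recordStats()` (used in the example later in this article) exposes the same figure via `cache.stats().hitRate()`.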
Local Cache
Concept and Definition
Local cache refers to a caching mechanism that stores data in the application server's own memory, so reads are served directly from the process heap without any network hop.
Common Implementation Methods
- Basic Implementation:
ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
cache.put("key", "value");
- Professional Cache Frameworks:
- Guava Cache: Lightweight cache tool provided by Google, with auto-loading, expiration policies, cache eviction listeners
Performance Advantages
- Fast Access: Memory-level access speed (nanosecond scale), no network I/O overhead
- Reduced External Dependencies: Fewer calls to remote cache services
Limitations
- Capacity Limit: Constrained by JVM heap memory size
- Consistency Issues: Difficult data synchronization in cluster environments
- Feature Limitations: Lacks professional persistence mechanisms
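The multi-level cache mentioned in the TL;DR is the usual answer to these limitations: keep a small local tier for speed and fall through to a distributed tier for capacity and sharing. A minimal two-level read path sketch, with a map standing in for Redis (names are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Two-level read path: L1 = in-process map, L2 = stand-in for Redis.
// On an L2 hit the value is promoted into L1 so the next read is local.
public class TwoLevelCacheDemo {
    static final Map<String, String> l1 = new ConcurrentHashMap<>();
    static final Map<String, String> l2 = new ConcurrentHashMap<>(); // pretend Redis

    static String get(String key) {
        String v = l1.get(key);
        if (v != null) return v;       // L1 hit: no network hop
        v = l2.get(key);               // L1 miss: one "network" round trip
        if (v != null) l1.put(key, v); // promote into L1
        return v;                      // null means both levels missed -> go to DB
    }

    public static void main(String[] args) {
        l2.put("k", "v");
        System.out.println(get("k"));  // served from L2, promoted to L1
        l2.clear();
        System.out.println(get("k"));  // still served from the L1 copy
    }
}
```

The second read illustrates the consistency trade-off from the Limitations list: L1 can keep serving a value that no longer exists in L2, which is why the error table later recommends invalidation broadcasts and short TTLs.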
Distributed Cache
Mainstream Distributed Cache Systems
- Redis: Supports multiple data structures, provides persistence
- Memcached: Simple and efficient key-value storage
- Tair: Distributed KV storage system developed by Alibaba
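The SETNX-plus-expiry lock from the version matrix follows a simple pattern: set the key only if absent, store a unique owner token, and release by deleting only when the token still matches. This sketch replays the algorithm against an in-memory stand-in for Redis; in production you would issue `SET key token NX EX ttl` and release via a compare-and-delete Lua script (Redisson wraps all of this):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the SETNX + expiry lock pattern against an in-memory stand-in.
// Real code would call Redis (SET lockKey token NX EX ttl) and release with
// a Lua script that deletes the key only when the token still matches.
public class SetNxLockDemo {
    static final ConcurrentHashMap<String, String> redis = new ConcurrentHashMap<>();

    static String tryLock(String lockKey) {
        String token = UUID.randomUUID().toString();   // unique owner token
        // putIfAbsent models SET ... NX; a real lock also sets EX to avoid deadlock
        return redis.putIfAbsent(lockKey, token) == null ? token : null;
    }

    static boolean unlock(String lockKey, String token) {
        // delete only if we still own the lock (compare-and-delete)
        return redis.remove(lockKey, token);
    }

    public static void main(String[] args) {
        String token = tryLock("lock:order:42");
        System.out.println(token != null);                    // acquired
        System.out.println(tryLock("lock:order:42") != null); // second caller fails
        System.out.println(unlock("lock:order:42", token));   // released by owner
    }
}
```

The owner token matters: without it, a client whose lock expired could delete a lock that another client has since acquired.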
Guava Cache
Core Features
- Auto-loading Mechanism: Supports auto-loading values from data source on cache miss
- Multiple Cache Eviction Strategies:
- Based on capacity: Evict when cache item count exceeds specified value
- Based on time: Access expiration (expireAfterAccess) / Write expiration (expireAfterWrite)
- Based on reference: Use weak or soft references to store keys or values
- High-performance Concurrency Support: Uses concurrent design similar to ConcurrentHashMap
Basic Usage Example
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

// Key and Value are placeholder types for your own key/value classes
LoadingCache<Key, Value> cache = CacheBuilder.newBuilder()
    .maximumSize(1000)                      // capacity-based eviction
    .expireAfterWrite(10, TimeUnit.MINUTES) // write expiration
    .recordStats()                          // enable hit/miss statistics
    .build(new CacheLoader<Key, Value>() {
        @Override
        public Value load(Key key) throws Exception {
            // invoked on cache miss to load the value from the source
            return createExpensiveValue(key);
        }
    });
Error Quick Reference
| Symptom | Root Cause | Fix |
|---|---|---|
| DB/Redis jitter and RT spike after traffic surge | Cache breakdown (hot key expiry) | Hot key logical expiration + mutex rebuild + preheat |
| Low hit rate, backend overwhelmed | Cache penetration | Bloom filter/negative cache, parameter validation, rate limiting |
| Large-scale timeout concurrent errors | Cache avalanche | TTL with random jitter, batch expiration, hot data never expires |
| Some nodes data inconsistent | Local cache and distributed cache expiration out of sync | Invalidation broadcast, version stamp, shorten TTL |
| Frequent Full GC, application stalling | Local cache capacity too large | Limit maximumSize/weight |
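For the avalanche row, "TTL with random jitter" simply spreads expirations so that keys written in the same batch do not all expire at the same moment. A hypothetical sketch:

```java
import java.util.concurrent.ThreadLocalRandom;

// Avalanche mitigation: add random jitter to a base TTL so that keys cached
// at the same moment do not all expire at the same moment.
public class TtlJitterDemo {
    static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        long base = 600, jitter = 120;   // 10 min base, up to 2 min of jitter
        long ttl = ttlWithJitter(base, jitter);
        System.out.println(ttl >= base && ttl <= base + jitter); // always in [600, 720]
    }
}
```

Pass the resulting TTL to your `SET key value EX ttl` call (or the equivalent client API) instead of a fixed constant.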