TL;DR
- Scenario: Using Guava Cache for local caching in high-concurrency Java services, while needing to control refresh latency and memory usage
- Conclusion: Set concurrencyLevel and refreshAfterWrite appropriately, and understand LoadingCache's per-key loading/locking semantics
- Output: a concurrency parameter selection guide and a working mental model of refresh behavior
Version Matrix
| Component / Capability | Version / Range | Verified |
|---|---|---|
| JDK 8 | 1.8.x | Yes |
| JDK 11 | 11.x | Yes |
| JDK 17 | 17.x | Yes |
| Guava Cache | 23.x–32.x+ | Yes |
Concurrency Settings
Concurrency Level Parameter Details
- concurrencyLevel sets the number of internal segments (lock stripes) the cache is partitioned into (default: 4)
- Each segment independently manages a portion of the entries, so operations on different segments can proceed concurrently
- Tuning this value can noticeably reduce lock contention; a common rule of thumb is roughly 1.5x the estimated number of concurrently writing threads
- Caveat: maximumSize is divided among segments, so an overly high concurrencyLevel leaves each segment with a small quota and can cause premature eviction
Underlying Implementation Principle
Guava Cache uses lock striping: the cache is divided into multiple segments, each guarded by its own lock:
Cache<String, Object> cache = CacheBuilder.newBuilder()
.concurrencyLevel(8) // Set to 8 segments
.maximumSize(1000)
.build();
Performance Optimization Suggestions
- Low concurrency (<4 threads): keep the default
- Medium concurrency (4-16 threads): 8-16 is a reasonable starting point
- High concurrency (>16 threads): tune based on load testing rather than a formula
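The guidance above can be folded into a small sizing helper. The 1.5x multiplier and the 64-entries-per-segment floor below are illustrative heuristics, not values from Guava itself:

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class ConcurrencyLevelSizing {

    // Hypothetical heuristic: ~1.5x the expected concurrent threads,
    // capped so maximumSize is not split into uselessly small segments.
    static int pickConcurrencyLevel(int expectedThreads, long maximumSize) {
        int level = Math.max(4, (int) Math.ceil(expectedThreads * 1.5));
        // Each segment gets roughly maximumSize / concurrencyLevel entries;
        // keep at least ~64 entries per segment to avoid premature eviction.
        long maxUseful = Math.max(1, maximumSize / 64);
        return (int) Math.min(level, maxUseful);
    }

    public static void main(String[] args) {
        int level = pickConcurrencyLevel(16, 10_000); // 24 for these inputs
        Cache<String, Object> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(level)
                .maximumSize(10_000)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build();
        System.out.println(level);
    }
}
```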
Update Locking
Guava Cache provides the refreshAfterWrite option for periodically refreshing loaded values:
- Refresh is triggered lazily: the first read after the interval elapses initiates a reload; there is no background timer thread
- By default, CacheLoader.reload(key, oldValue) calls load() synchronously on the thread that triggered the refresh; all other readers keep receiving the stale value without blocking
- Only one thread performs the refresh for a given key at a time
- On an initial miss (no value cached yet), concurrent readers of the same key do block while a single thread runs load(); this is LoadingCache's per-key locking behavior
LoadingCache<String, Object> cache = CacheBuilder.newBuilder()
    .refreshAfterWrite(5, TimeUnit.MINUTES)
    .build(new CacheLoader<String, Object>() {
        @Override
        public Object load(String key) throws Exception {
            return fetchDataFromDB(key);
        }
    });
Note:
- Unlike expireAfterWrite, refreshAfterWrite never evicts entries; a stale value stays readable until a refresh succeeds. If staleness must be bounded, combine it with expireAfterWrite (typically with a longer expiry than the refresh interval).
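If the source fetch is expensive, reload() can be overridden to run asynchronously, so even the thread that triggers the refresh returns the old value immediately. A sketch, where fetchDataFromDB and the pool size are placeholders:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListenableFutureTask;

public class AsyncReloadSketch {
    private static final ExecutorService refreshPool = Executors.newFixedThreadPool(2);

    static LoadingCache<String, Object> buildCache() {
        return CacheBuilder.newBuilder()
                .refreshAfterWrite(5, TimeUnit.MINUTES)
                .build(new CacheLoader<String, Object>() {
                    @Override
                    public Object load(String key) {
                        // Initial miss: the calling thread blocks here.
                        return fetchDataFromDB(key);
                    }

                    @Override
                    public ListenableFuture<Object> reload(String key, Object oldValue) {
                        // Run the refresh on a background pool; readers keep
                        // receiving oldValue until this future completes.
                        ListenableFutureTask<Object> task =
                                ListenableFutureTask.create(() -> fetchDataFromDB(key));
                        refreshPool.execute(task);
                        return task;
                    }
                });
    }

    // Placeholder for the real source fetch from the example above.
    private static Object fetchDataFromDB(String key) {
        return "value-for-" + key;
    }
}
```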
Custom LRU
class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int limit;
    public LRUCache(int limit) {
        super(limit, 0.75f, true); // accessOrder = true: iteration order follows access recency
        this.limit = limit;
    }
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > limit; // evict the least recently used entry once over capacity
    }
}
Note: LinkedHashMap is not thread-safe; for concurrent use, wrap it with Collections.synchronizedMap or guard it with external locking.
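A quick check of the eviction order (the class below mirrors the LRUCache above so the demo is self-contained):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    // Same LRUCache as in the text, repeated for a runnable demo.
    static class LRUCache<K, V> extends LinkedHashMap<K, V> {
        private final int limit;
        LRUCache(int limit) {
            super(limit, 0.75f, true); // accessOrder = true
            this.limit = limit;
        }
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > limit;
        }
    }

    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");     // touch "a" so "b" becomes the eldest entry
        cache.put("c", 3);  // over capacity: evicts "b", not "a"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```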
Error Quick Reference
| Symptom | Root Cause | Fix |
|---|---|---|
| Hit rate drops as concurrent threads increase | concurrencyLevel too high: maximumSize is split per segment, so small segments evict early | Keep concurrencyLevel near the estimated concurrent thread count |
| Expired data not cleared despite refreshAfterWrite | refreshAfterWrite confused with expireAfterWrite; it never evicts | Also configure expireAfterWrite or invalidate explicitly |
| Reads block during peak hours | Slow load() on cache misses blocks all readers of that key | Optimize the source fetch; add a timeout and a degradation fallback |
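One way to implement the "timeout and degradation" fix from the table: run the source fetch on an executor with a hard deadline, and fall back to the last successfully loaded value when it is exceeded. The lastKnownGood store and "DEFAULT" sentinel are illustrative, not part of Guava:

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutLoad {
    private static final ExecutorService pool = Executors.newCachedThreadPool();
    // Hypothetical stale-value store used as a degradation fallback.
    private static final Map<String, String> lastKnownGood = new ConcurrentHashMap<>();

    // Fetch with a hard timeout; on timeout or failure, serve the last
    // known value (or a sentinel) instead of blocking readers indefinitely.
    static String fetchWithTimeout(String key, long timeoutMs, Callable<String> source) {
        Future<String> f = pool.submit(source);
        try {
            String value = f.get(timeoutMs, TimeUnit.MILLISECONDS);
            lastKnownGood.put(key, value);
            return value;
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            f.cancel(true); // stop the straggling fetch
            return lastKnownGood.getOrDefault(key, "DEFAULT");
        }
    }
}
```

This helper would be called from inside CacheLoader.load(), so a slow source degrades to stale data instead of stalling every reader of that key.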