TL;DR

  • Scenario: Using Guava Cache for local caching in high-concurrency Java services, while needing to control refresh latency and memory usage
  • Conclusion: Set concurrencyLevel and refreshAfterWrite appropriately, and understand LoadingCache’s per-key locking semantics
  • Output: Concurrency parameter selection guide, refresh blocking behavior understanding framework

Version Matrix

| Component / Capability | Version / Range | Verified |
|------------------------|-----------------|----------|
| JDK 8                  | 1.8.x           | Yes      |
| JDK 11                 | 11.x            | Yes      |
| JDK 17                 | 17.x            | Yes      |
| Guava Cache            | 23.x–32.x+      | Yes      |

Concurrency Settings

Concurrency Level Parameter Details

  1. concurrencyLevel hints at the number of segment locks the cache uses internally (default: 4)
  2. Each segment independently manages a portion of the cache entries, so different segments can be operated on concurrently
  3. Setting this value appropriately can significantly reduce lock contention (a common rule of thumb: ~1.5x the estimated number of concurrent threads)

Underlying Implementation Principle

Uses lock striping to divide the entire cache into multiple Segments, each guarded by its own lock:

Cache<String, Object> cache = CacheBuilder.newBuilder()
    .concurrencyLevel(8)  // Set to 8 segments
    .maximumSize(1000)
    .build();

Performance Optimization Suggestions

  • Low concurrency (<4 threads): keep the default
  • Medium concurrency (4-16 threads): 8-16 is a reasonable starting point
  • High concurrency (>16 threads): tune based on actual load testing
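The sizing guidance above can be sketched as follows. `suggestedConcurrencyLevel` is a hypothetical helper (not a Guava API) that applies the "~1.5x estimated concurrent threads" rule of thumb from the text; using `availableProcessors()` as the thread estimate is an assumption for the demo:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class ConcurrencyLevelSizing {
    // Hypothetical helper applying the rule of thumb from the text:
    // below 4 threads keep Guava's default of 4, otherwise ~1.5x the estimate.
    static int suggestedConcurrencyLevel(int estimatedThreads) {
        if (estimatedThreads < 4) {
            return 4; // low-concurrency scenario: keep the default
        }
        return (int) Math.ceil(estimatedThreads * 1.5);
    }

    public static void main(String[] args) {
        int threads = Runtime.getRuntime().availableProcessors();
        Cache<String, Object> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(suggestedConcurrencyLevel(threads))
                .maximumSize(1000)
                .build();
        cache.put("k", "v");
        System.out.println(cache.getIfPresent("k")); // v
    }
}
```

Treat the computed value as a starting point; the high-concurrency row above still calls for load testing.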

Update Locking

Guava Cache provides the refreshAfterWrite option for periodic, read-triggered refresh:

  1. The first read after the refresh interval triggers a reload; by default the reload runs synchronously on that reading thread (override CacheLoader.reload to make it asynchronous)
  2. While a reload is in progress, other reads of that key are not blocked: they return the stale value
  3. Only one thread per key actually executes the source fetch

LoadingCache<String, Object> cache = CacheBuilder.newBuilder()
    .refreshAfterWrite(5, TimeUnit.MINUTES)
    .build(new CacheLoader<String, Object>() {
        @Override
        public Object load(String key) throws Exception {
            // Also used by the default reload(), so this runs on the reading thread
            return fetchDataFromDB(key);
        }
    });
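To move the reload off the reading thread entirely, the loader can be wrapped with CacheLoader.asyncReloading and an executor. A minimal sketch; fetchDataFromDB is a stand-in for the real source call:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncRefreshExample {
    // Stand-in for the real data-source fetch.
    static Object fetchDataFromDB(String key) {
        return "db-value-for-" + key;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService refreshPool = Executors.newFixedThreadPool(2);
        LoadingCache<String, Object> cache = CacheBuilder.newBuilder()
                .refreshAfterWrite(5, TimeUnit.MINUTES)
                // asyncReloading makes reload() run on refreshPool
                // instead of the thread that triggered the refresh
                .build(CacheLoader.asyncReloading(
                        new CacheLoader<String, Object>() {
                            @Override
                            public Object load(String key) {
                                return fetchDataFromDB(key);
                            }
                        },
                        refreshPool));

        System.out.println(cache.get("user:42")); // db-value-for-user:42
        refreshPool.shutdown();
    }
}
```

With this wrapper, even the thread that triggers the refresh keeps returning the old value until the background reload completes.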

Note:

  • Unlike expireAfterWrite, refreshAfterWrite never evicts entries; a stale entry stays in the cache (and is served to readers) until the next read triggers a reload
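Because refreshAfterWrite alone never evicts, a common pattern is to pair it with a longer expireAfterWrite as a hard backstop for keys that stop being read. A sketch under that assumption; fetchDataFromDB and the 5/30-minute intervals are illustrative:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class RefreshPlusExpire {
    // Stand-in for the real data-source fetch.
    static Object fetchDataFromDB(String key) {
        return "v:" + key;
    }

    public static void main(String[] args) throws Exception {
        LoadingCache<String, Object> cache = CacheBuilder.newBuilder()
                .refreshAfterWrite(5, TimeUnit.MINUTES)  // lazy refresh on first read after 5 min
                .expireAfterWrite(30, TimeUnit.MINUTES)  // hard eviction for untouched entries
                .maximumSize(10_000)
                .build(new CacheLoader<String, Object>() {
                    @Override
                    public Object load(String key) {
                        return fetchDataFromDB(key);
                    }
                });

        System.out.println(cache.get("order:7")); // v:order:7
    }
}
```

Keeping the expire interval well above the refresh interval ensures hot keys are served from cache and refreshed in place, while cold keys are eventually dropped.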

Custom LRU

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int limit;

    public LRUCache(int limit) {
        // accessOrder = true: iteration order becomes least- to most-recently accessed
        super(limit, 0.75f, true);
        this.limit = limit;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put/putAll; evict the eldest entry once over capacity
        return size() > limit;
    }
}
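A quick usage sketch: with a limit of 2, touching "a" before inserting a third key makes "b" the eviction victim. The class is repeated inside the demo so the snippet compiles standalone. Note that LinkedHashMap is not thread-safe, so unlike Guava Cache this needs external synchronization under concurrency:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    // Same class as above, repeated so this snippet compiles on its own.
    static class LRUCache<K, V> extends LinkedHashMap<K, V> {
        private final int limit;

        LRUCache(int limit) {
            super(limit, 0.75f, true); // accessOrder = true -> LRU iteration order
            this.limit = limit;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > limit;
        }
    }

    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" -> "b" becomes least recently used
        cache.put("c", 3); // over capacity -> evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```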

Error Quick Reference

| Symptom | Root Cause | Fix |
|---------|------------|-----|
| Cache hit rate deteriorates as concurrent threads increase | concurrencyLevel set too high | Keep concurrencyLevel near the estimated concurrent thread count |
| Expecting expired data to be cleared automatically after configuring refreshAfterWrite | Confusing refreshAfterWrite with expireAfterWrite | Also configure expireAfterWrite or invalidate explicitly |
| Reads blocked during peak hours | Load/source logic too slow | Optimize the source fetch; add a timeout and a degradation path |