This is article 47 in the Big Data series. This article provides an in-depth explanation of how Redis controls memory usage through key expiration management and eviction policies, ensuring stable service operation.


Redis Performance Benchmark

Under ideal conditions, Redis performance reference values:

  • Read operations: ~110,000 operations/second
  • Write operations: ~81,000 operations/second

Actual performance is affected by network latency, data structure complexity, and other factors. Behind this high performance, memory management is key to ensuring system stability.

maxmemory Configuration

Not Setting maxmemory (default value 0)

Redis doesn’t limit memory usage. When system physical memory is exhausted:

  • The default policy, noeviction, rejects all write operations and returns an OOM error
  • Suitable for scenarios with predictable key count and data that must not be lost

Setting maxmemory

# Configure in redis.conf
maxmemory 1024mb

# Query at runtime
CONFIG GET maxmemory

When memory reaches the limit, Redis automatically evicts data according to maxmemory-policy.

Key Expiration Mechanism

Setting Expiration Time

EXPIRE key seconds       # Set second-level TTL
PEXPIRE key milliseconds # Set millisecond-level TTL
EXPIREAT key timestamp   # Expire at specified UNIX timestamp

TTL key   # Query remaining seconds (-1 means never expires, -2 means key doesn't exist)
PTTL key  # Query remaining milliseconds
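The TTL return conventions above (-1 for a key with no expiry, -2 for a missing key) can be illustrated with a minimal in-memory sketch. This is not the Redis implementation, just a toy `TTLStore` class written for this article:

```python
import time

# Toy store illustrating Redis's TTL return conventions:
# -1 = key exists with no expiry, -2 = key does not exist.
class TTLStore:
    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry time (epoch seconds)

    def set(self, key, value):
        self.data[key] = value
        self.expires.pop(key, None)   # SET clears any previous TTL

    def expire(self, key, seconds):
        if key in self.data:
            self.expires[key] = time.time() + seconds
            return 1
        return 0

    def ttl(self, key):
        if key not in self.data:
            return -2                  # key doesn't exist
        if key not in self.expires:
            return -1                  # exists, never expires
        return max(0, round(self.expires[key] - time.time()))
```

For example, after `set("code", "1234")` the TTL is -1 until `expire("code", 300)` attaches a 300-second expiry, and `ttl("missing")` returns -2.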

Typical Application Scenarios for Expiration

| Scenario | Recommended TTL |
| --- | --- |
| Verification code | 5–10 minutes |
| Login session | 30 minutes to several hours |
| Database query cache | Set based on business fluctuations |
| Distributed lock | Business processing time + buffer |
| API rate-limit counter | Time window size |
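The rate-limit row above usually maps to the common INCR-plus-EXPIRE pattern, where the counter's TTL equals the time window. Below is a hypothetical in-process simulation of that fixed-window scheme (the class name and `now` parameter are inventions for testability, not Redis APIs):

```python
import time

# Simulated fixed-window rate limiter: mirrors the Redis pattern of
# INCR on a counter key whose EXPIRE equals the window size.
class FixedWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # key -> (count, window_expiry)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, expiry = self.counters.get(key, (0, 0))
        if now >= expiry:                     # window elapsed: counter "expired"
            count, expiry = 0, now + self.window
        count += 1
        self.counters[key] = (count, expiry)
        return count <= self.limit
```

With a limit of 2 per 60 seconds, the third request inside a window is rejected, and the counter resets once the window's TTL elapses.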

Three Deletion Strategies

Redis uses a lazy deletion + active periodic deletion combined strategy, balancing CPU and memory efficiency.

1. Lazy Deletion

An expired key is not deleted immediately. On the next access, Redis checks whether it has expired; if so, it deletes the key and returns a nil value.

  • Advantages: Saves CPU, no unnecessary scanning
  • Disadvantages: Expired keys may occupy memory for a long time (especially keys never accessed)
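A minimal sketch of the lazy-deletion idea, assuming a plain dict-backed store (not Redis internals): the expired key stays in memory until a read touches it.

```python
import time

# Sketch of lazy (on-access) expiration: nothing is deleted when the
# TTL elapses; the check and deletion happen on the next GET.
class LazyStore:
    def __init__(self):
        self.data = {}
        self.expires = {}

    def set(self, key, value, ttl=None, now=None):
        now = time.time() if now is None else now
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = now + ttl

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if key in self.expires and now >= self.expires[key]:
            del self.data[key]            # deleted only because it was accessed
            del self.expires[key]
            return None                   # behaves as if the key is gone
        return self.data.get(key)
```

Note the disadvantage from the list above is visible here: a key that is never read again after expiring would sit in `self.data` forever, which is exactly what the periodic strategy below compensates for.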

2. Active Periodic Deletion

Redis periodically samples a random batch of keys that carry a TTL and deletes the expired ones. The frequency is controlled by the hz parameter (default: 10 runs per second).

  • Advantages: Can clean “zombie keys”, control memory growth
  • Disadvantages: Cannot guarantee all expired keys are cleaned in time

3. Scheduled Deletion

Create a timer for each key and delete the key the moment it expires. Because maintaining one timer per key would cost enormous CPU overhead, Redis does not actually use this strategy.

Memory Eviction Policies

When memory reaches maxmemory limit and cannot apply for more memory, Redis decides how to evict data based on maxmemory-policy.

maxmemory-policy allkeys-lru

8 Policy Comparisons

| Policy | Eviction Scope | Algorithm | Description |
| --- | --- | --- | --- |
| noeviction | — | — | Rejects write operations (default) |
| allkeys-lru | All keys | LRU | Evicts least recently used keys |
| volatile-lru | Keys with TTL | LRU | LRU among keys with a TTL |
| allkeys-lfu | All keys | LFU | Evicts keys with the lowest access frequency |
| volatile-lfu | Keys with TTL | LFU | LFU among keys with a TTL |
| allkeys-random | All keys | Random | Randomly evicts any key |
| volatile-random | Keys with TTL | Random | Randomly evicts keys with a TTL |
| volatile-ttl | Keys with TTL | TTL | Evicts keys with the shortest remaining TTL first |

LRU Implementation Principle

Redis does not maintain a strict LRU linked list. Instead, it records a last-access timestamp for each key (lru_clock, with second-level precision). At eviction time it randomly samples several keys (the sample size is controlled by maxmemory-samples) and deletes the one with the oldest access time, i.e., the key idle the longest. This approximate LRU saves memory while achieving results close to exact LRU.
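The sampling step can be sketched in a few lines, assuming a dict of values plus a dict of last-access clocks (a simplification of Redis's embedded lru_clock field):

```python
import random

# Sketch of approximate LRU eviction: sample `samples` keys
# (cf. maxmemory-samples) and evict the one idle the longest,
# i.e., the one with the oldest last-access clock.
def evict_approx_lru(store, access_clock, samples=5):
    candidates = random.sample(list(store), min(samples, len(store)))
    victim = min(candidates, key=lambda k: access_clock[k])  # oldest access wins
    del store[victim]
    del access_clock[victim]
    return victim
```

Because only a sample is inspected, the evicted key is usually, but not always, the globally least recently used one; raising the sample size narrows that gap at extra CPU cost.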

LFU Implementation Principle

LFU (Least Frequently Used) evicts the keys with the lowest access frequency and is more resistant than LRU to one-off batch scans polluting the hot-key set. Introduced in Redis 4.0, LFU uses a Morris-style probabilistic counter to track access frequency, plus a time-decay factor so that historically hot keys do not stay unevictable forever.
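The probabilistic counter can be sketched like this. The constants mirror Redis's defaults (new keys start at 5, lfu-log-factor defaults to 10), but the function is a simplified illustration, not the server's source:

```python
import random

LFU_INIT_VAL = 5       # counter value Redis gives a newly created key
LFU_LOG_FACTOR = 10    # default lfu-log-factor

# Sketch of the Morris-style LFU counter: the higher the current count,
# the lower the probability that an access increments it, so an 8-bit
# counter (max 255) can represent millions of hits.
def lfu_log_incr(counter, log_factor=LFU_LOG_FACTOR):
    if counter >= 255:
        return 255                         # counter saturates at 8 bits
    base = max(counter - LFU_INIT_VAL, 0)
    if random.random() < 1.0 / (base * log_factor + 1):
        counter += 1
    return counter
```

At the initial value the increment probability is 1, so the first access always bumps the counter; each further increment becomes progressively rarer. The time-decay side (lfu-decay-time) would periodically decrement this counter for idle keys, which is what lets stale hot keys eventually become evictable.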

Policy Selection Recommendations

| Scenario | Recommended Policy |
| --- | --- |
| Uncertain data access pattern | allkeys-lru (most general) |
| Clear hot/cold data split | allkeys-lru or allkeys-lfu |
| All data equally important | allkeys-random |
| Business sets reasonable TTLs for keys | volatile-ttl |
| Cached data must not be lost | noeviction (needs alerting and capacity expansion) |

Production Practice Recommendations

  • Production environment must set maxmemory to avoid Redis OOM affecting other services on the host
  • Before enabling eviction, monitor used_memory and maxmemory_human with INFO memory
  • Combine business characteristics to set reasonable TTL for keys, reducing pressure on eviction algorithm
  • Increasing maxmemory-samples (e.g., to 10) can improve LRU accuracy, but will increase CPU overhead