This is article 47 in the Big Data series. This article provides an in-depth explanation of how Redis controls memory usage through key expiration management and eviction policies, ensuring stable service operation.
Redis Performance Benchmark
Under ideal conditions, Redis performance reference values:
- Read operations: ~110,000 operations/second
- Write operations: ~81,000 operations/second
Actual performance is affected by network latency, data structure complexity, and other factors. Behind this high performance, memory management is key to ensuring system stability.
maxmemory Configuration
Not Setting maxmemory (default value 0)
Redis doesn’t limit memory usage. When system physical memory is exhausted:
- The default policy is `noeviction`, which rejects all write operations and returns an OOM error
- Suitable for scenarios with a predictable key count and data that must not be lost
Setting maxmemory
```shell
# Configure in redis.conf
maxmemory 1024mb

# Query at runtime
CONFIG GET maxmemory
```
When memory reaches the limit, Redis automatically evicts data according to maxmemory-policy.
Key Expiration Mechanism
Setting Expiration Time
```shell
EXPIRE key seconds        # Set second-level TTL
PEXPIRE key milliseconds  # Set millisecond-level TTL
EXPIREAT key timestamp    # Expire at the specified UNIX timestamp
TTL key                   # Remaining seconds (-1: never expires, -2: key doesn't exist)
PTTL key                  # Remaining milliseconds
```
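To illustrate the TTL return-code convention (-1 for a key without an expiry, -2 for a missing key), here is a minimal in-memory sketch in Python. The `TtlStore` class and its method names are hypothetical illustrations, not Redis APIs:

```python
import time

class TtlStore:
    """Toy key-value store mimicking Redis TTL return codes."""

    def __init__(self):
        self._data = {}       # key -> value
        self._expire_at = {}  # key -> absolute expiry time (seconds)

    def set(self, key, value):
        self._data[key] = value
        self._expire_at.pop(key, None)  # like Redis SET, clears any existing TTL

    def expire(self, key, seconds):
        if key not in self._data:
            return 0                    # EXPIRE returns 0 for a missing key
        self._expire_at[key] = time.monotonic() + seconds
        return 1

    def ttl(self, key):
        if key not in self._data:
            return -2                   # key doesn't exist
        if key not in self._expire_at:
            return -1                   # key exists but never expires
        return max(0, round(self._expire_at[key] - time.monotonic()))
```

A quick walk-through: `ttl` on a missing key gives -2, on a freshly set key -1, and only after `expire` does it return a remaining time.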
Typical Application Scenarios for Expiration
| Scenario | Recommended TTL |
|---|---|
| Verification code | 5-10 minutes |
| Login session | 30 minutes to several hours |
| Database query cache | Set based on business fluctuations |
| Distributed lock | Business processing time + buffer |
| API rate limit counter | Time window size |
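The rate-limit row above is typically implemented in Redis with INCR plus EXPIRE on the first hit of each window. Below is a pure-Python simulation of that fixed-window pattern (class name and the injectable `now` parameter are illustrative choices, not Redis APIs):

```python
import time

class FixedWindowLimiter:
    """Simulates the Redis INCR + EXPIRE rate-limit pattern in memory.

    In Redis, the first INCR in a window would also set EXPIRE to the
    window size; here we track the window expiry time directly.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._counts = {}  # key -> (count, window_expire_at)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        count, expire_at = self._counts.get(key, (0, 0.0))
        if now >= expire_at:                 # window expired: start a new one
            count, expire_at = 0, now + self.window
        count += 1
        self._counts[key] = (count, expire_at)
        return count <= self.limit
```

Setting the TTL to exactly the window size means stale counters clean themselves up, which is the appeal of the pattern.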
Three Deletion Strategies
Redis uses a lazy deletion + active periodic deletion combined strategy, balancing CPU and memory efficiency.
1. Lazy Deletion
An expired key is not deleted immediately. Instead, the expiry is checked on the next access: if the key has expired, it is deleted at that moment and an empty value is returned.
- Advantages: Saves CPU, no unnecessary scanning
- Disadvantages: Expired keys may occupy memory for a long time (especially keys never accessed)
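The check-on-access behavior can be sketched in a few lines of Python. This is a simplified model, not Redis source; the class name and the injectable `now` parameter are illustrative:

```python
import time

class LazyExpiringDict:
    """Lazy deletion: expired keys are only removed when accessed."""

    def __init__(self):
        self._data = {}       # key -> value
        self._expire_at = {}  # key -> absolute expiry time

    def set(self, key, value, ttl_seconds=None, now=None):
        now = time.monotonic() if now is None else now
        self._data[key] = value
        if ttl_seconds is not None:
            self._expire_at[key] = now + ttl_seconds

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        expire_at = self._expire_at.get(key)
        if expire_at is not None and now >= expire_at:
            del self._data[key]          # deleted on access, not at expiry time
            del self._expire_at[key]
            return None                  # behave as if the key is already gone
        return self._data.get(key)
```

Note the disadvantage from the list above is visible here: if `get` is never called for a key, its memory is never reclaimed by this path alone.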
2. Active Periodic Deletion
Redis periodically takes random samples from the keys that have a TTL and deletes the expired ones. The frequency is controlled by the hz parameter (10 cycles per second by default).
- Advantages: Can clean “zombie keys”, control memory growth
- Disadvantages: Cannot guarantee all expired keys are cleaned in time
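The sampling loop of active deletion can be modeled as follows. This is a simplified sketch of the idea, not the actual Redis implementation; the function name, the 20-key sample size, and the 25% repeat threshold mirror commonly described Redis defaults but are assumptions here:

```python
import random
import time

def active_expire_cycle(expire_at, data, now=None, sample_size=20,
                        repeat_threshold=0.25):
    """One pass of active expiration, modeled on Redis's approach:
    sample keys that have a TTL, delete the expired ones, and repeat
    while more than 25% of a sample turned out to be expired."""
    now = time.monotonic() if now is None else now
    while expire_at:
        keys = random.sample(list(expire_at), min(sample_size, len(expire_at)))
        expired = [k for k in keys if expire_at[k] <= now]
        for k in expired:
            del expire_at[k]
            data.pop(k, None)
        if len(expired) <= repeat_threshold * len(keys):
            break  # sample was mostly fresh; stop until the next cycle
```

The repeat-while-dirty rule is what lets the cycle adapt: a store full of expired keys gets scrubbed aggressively, while a mostly fresh store costs only one small sample per cycle.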
3. Scheduled Deletion
Create a timer for each key, delete immediately when expired. Since maintaining a timer for each key requires huge CPU overhead, Redis actually does not use this strategy.
Memory Eviction Policies
When memory usage reaches the maxmemory limit and a new write cannot be satisfied, Redis decides which data to evict based on maxmemory-policy.
```shell
maxmemory-policy allkeys-lru
```
Comparison of the 8 Policies
| Policy | Eviction Scope | Eviction Algorithm | Description |
|---|---|---|---|
| noeviction | — | — | Reject write operations (default) |
| allkeys-lru | All keys | LRU | Evict least recently used keys |
| volatile-lru | Keys with expiration | LRU | LRU among keys with TTL |
| allkeys-lfu | All keys | LFU | Evict keys with lowest access frequency |
| volatile-lfu | Keys with expiration | LFU | LFU among keys with TTL |
| allkeys-random | All keys | Random | Randomly evict any key |
| volatile-random | Keys with expiration | Random | Randomly evict keys with TTL |
| volatile-ttl | Keys with expiration | TTL | Prioritize evicting keys with the shortest remaining time |
LRU Implementation Principle
Redis doesn’t maintain a strict LRU linked list. Instead, it records the last access timestamp for each key (lru_clock, with second-level precision). During eviction, it randomly samples several keys (the count is controlled by maxmemory-samples) and deletes the one that was least recently accessed, i.e., with the longest idle time. This approximate LRU saves memory while achieving results close to exact LRU.
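The sample-and-evict idea can be shown in a few lines. This is a conceptual sketch of approximate LRU, not the Redis source; the function name and the `last_access` dict are illustrative:

```python
import random

def evict_one_approx_lru(last_access, samples=5):
    """Approximate LRU: sample a few keys and evict the one with the
    oldest last-access time, instead of maintaining a full LRU list.
    `last_access` maps key -> last access timestamp."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])  # oldest access wins
    del last_access[victim]
    return victim
```

With a larger `samples` value the choice approaches exact LRU, at the cost of more work per eviction, which is exactly the trade-off `maxmemory-samples` tunes.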
LFU Implementation Principle
LFU (Least Frequently Used) evicts the keys with the lowest access frequency and is more resistant than LRU to one-off batch accesses distorting hot-key detection. Redis introduced LFU in version 4.0; it uses a Morris-style probabilistic counter to track access frequency and applies a time decay factor so that historically hot keys do not stay hot forever.
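The two halves of that design, a probabilistic increment and a time-based decay, can be sketched like this. The constants mirror commonly cited Redis defaults (initial counter 5, log factor 10, 8-bit cap) but should be read as assumptions of this sketch, not a transcription of the Redis source:

```python
import random

LFU_INIT_VAL = 5      # assumed starting counter for new keys
LFU_LOG_FACTOR = 10   # higher factor -> slower counter growth

def lfu_incr(counter):
    """Probabilistic (logarithmic) counter increment: the larger the
    counter, the less likely it grows, so an 8-bit counter can
    represent very large access frequencies."""
    if counter >= 255:
        return 255                    # counter is stored in 8 bits
    baseval = max(0, counter - LFU_INIT_VAL)
    p = 1.0 / (baseval * LFU_LOG_FACTOR + 1)
    return counter + 1 if random.random() < p else counter

def lfu_decay(counter, idle_minutes, decay_minutes=1):
    """Time decay: subtract one from the counter for every
    `decay_minutes` of idle time, so old hot keys can cool down."""
    return max(0, counter - idle_minutes // decay_minutes)
```

Eviction then targets the keys whose decayed counters are lowest; the decay step is what prevents a key that was hot last week from outliving today's genuinely hot data.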
Policy Selection Recommendations
| Scenario | Recommended Policy |
|---|---|
| Uncertain data access pattern | allkeys-lru (most general) |
| Obvious hot/cold data partition | allkeys-lru or allkeys-lfu |
| All data has equal importance | allkeys-random |
| Business has set reasonable TTL for keys | volatile-ttl |
| Cache data must not be lost | noeviction (requires alerting and capacity expansion) |
Production Practice Recommendations
- Production environments must set `maxmemory` to avoid Redis OOM affecting other services on the host
- Before enabling eviction, monitor `used_memory` and `maxmemory_human` with `INFO memory`
- Set reasonable TTLs for keys based on business characteristics to reduce pressure on the eviction algorithm
- Increasing `maxmemory-samples` (e.g., to 10) improves LRU accuracy but increases CPU overhead
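For the monitoring recommendation above, `INFO memory` returns plain `field:value` lines, so a small parser is enough to feed an alert. Sketch below; the sample text is an abbreviated, illustrative INFO reply (the numbers are made up), and the function names are our own:

```python
def parse_info_section(info_text):
    """Parse the 'field:value' lines of a Redis INFO reply into a dict.
    INFO output is plain text, one metric per line, with sections
    headed by '# Name' comment lines."""
    metrics = {}
    for line in info_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        field, _, value = line.partition(":")
        metrics[field] = value
    return metrics

# Abbreviated, illustrative sample of an `INFO memory` reply:
SAMPLE = """\
# Memory
used_memory:104857600
used_memory_human:100.00M
maxmemory:1073741824
maxmemory_human:1.00G
maxmemory_policy:allkeys-lru
"""

def memory_usage_ratio(metrics):
    """used_memory / maxmemory, or None when maxmemory is 0 (unlimited)."""
    maxmemory = int(metrics["maxmemory"])
    if maxmemory == 0:
        return None
    return int(metrics["used_memory"]) / maxmemory
```

An alert at, say, 80% of `maxmemory` gives time to expand capacity before eviction (or, under noeviction, write failures) begins.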