This is article 50 in the Big Data series. This article introduces how to implement optimistic locking with WATCH/MULTI/EXEC in Redis, build distributed locks with SETNX, ensure atomic lock release with Lua scripts, and finally compares these approaches with the Redisson framework's production-grade encapsulation.
Full illustrated version (with screenshots): CSDN Original | Juejin
Optimistic Lock Principle
Optimistic lock is based on CAS (Compare And Swap) idea: don’t block concurrent reads, instead check if data was modified by other threads when submitting write operation. Common approach is to record version number when reading, verify version before submitting, if consistent then update and increment version, otherwise retry.
Compared to pessimistic lock, optimistic lock has no lock-waiting overhead in read-heavy, write-light scenarios; drawback is high retry cost when write conflicts are frequent. E-commerce inventory deduction, seckill limited quantity are typical use cases.
| Dimension | Optimistic Lock | Pessimistic Lock |
|---|---|---|
| Assumption | Low conflict probability | High conflict probability |
| Implementation | Version number/CAS | Mutual exclusion lock |
| Waiting | Retry on failure, no blocking | Block waiting for lock release |
| Suitable for | Read-heavy, write-light | Write-heavy, high contention |
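The version-number CAS loop can be sketched in plain Java, where `java.util.concurrent.atomic` exposes the same compare-and-swap primitive. In this minimal sketch (not tied to Redis; all names are illustrative), the stock value itself serves as the "version": the deduction only commits if nobody changed the value between read and write, otherwise the loop retries.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticStock {
    private final AtomicInteger stock = new AtomicInteger(20);

    /** Try to deduct one unit; retry on CAS conflict, fail when sold out. */
    public boolean deductOne() {
        while (true) {
            int current = stock.get();                    // read the "version"
            if (current <= 0) {
                return false;                             // sold out, give up
            }
            // commit only if the value is still what we read; otherwise retry
            if (stock.compareAndSet(current, current - 1)) {
                return true;
            }
        }
    }

    public int remaining() {
        return stock.get();
    }

    public static void main(String[] args) {
        OptimisticStock s = new OptimisticStock();
        int sold = 0;
        for (int i = 0; i < 30; i++) {                    // 30 attempts against 20 units
            if (s.deductOne()) sold++;
        }
        System.out.println(sold + " sold, " + s.remaining() + " left"); // prints "20 sold, 0 left"
    }
}
```

The failed attempts simply return `false` instead of blocking, which is exactly the "retry on failure, no blocking" row in the table above.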
Redis WATCH for Optimistic Lock
The WATCH command monitors one or more keys. If any monitored key is modified by another client before the MULTI/EXEC transaction commits, EXEC returns nil, indicating the transaction was aborted and must be retried.
Execution flow:
1. `WATCH key` — start monitoring
2. `GET key` — read the current value for client-side computation
3. `MULTI` — open the transaction queue
4. Enqueue the modification commands (such as `INCR key`)
5. `EXEC` — commit; if the key was not changed the transaction succeeds, otherwise it returns `nil` and must be retried
The following Java example uses a 20-thread pool to run 300 concurrent increment attempts, demonstrating how the optimistic lock caps the counter at 20 and prevents overselling:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Test02 {
    public static void main(String[] args) {
        String redisKey = "lock";
        ExecutorService executor = Executors.newFixedThreadPool(20);
        try (Jedis jedis = new Jedis("h121.wzk.icu", 6379)) {
            jedis.del(redisKey);
            jedis.set(redisKey, "0");
        }
        for (int i = 0; i < 300; i++) {
            executor.execute(() -> {
                try (Jedis j = new Jedis("h121.wzk.icu", 6379)) {
                    j.watch(redisKey);                 // start monitoring the key
                    int value = Integer.parseInt(j.get(redisKey));
                    if (value < 20) {                  // cap at 20 to prevent overselling
                        Transaction tx = j.multi();
                        tx.incr(redisKey);
                        List<Object> list = tx.exec(); // null/empty: key changed, aborted
                        if (list != null && !list.isEmpty()) {
                            System.out.println("Successfully grabbed: " + (value + 1));
                        }
                    } else {
                        j.unwatch();                   // release the watch when giving up
                    }
                }
            });
        }
        executor.shutdown();
    }
}
WATCH combined with MULTI/EXEC guarantees the consistency of the "read, judge, write" sequence, but it is optimistic: it does not block on conflict, so the business layer is responsible for the retry logic.
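That retry logic can be factored into a small helper. The sketch below is Redis-agnostic: `tryOnce` stands in for one WATCH → GET → MULTI → EXEC round and returns false when EXEC would return nil; the helper name and the bounded-retry policy are illustrative assumptions, not part of the Jedis API.

```java
import java.util.function.BooleanSupplier;

public class OptimisticRetry {
    /**
     * Run one optimistic attempt up to maxRetries times.
     * Returns true as soon as an attempt commits, false if every attempt conflicts.
     */
    public static boolean retry(BooleanSupplier tryOnce, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (tryOnce.getAsBoolean()) {
                return true;                 // EXEC succeeded
            }
            // EXEC returned nil: someone touched the watched key; loop and retry
        }
        return false;                        // conflict budget exhausted
    }

    public static void main(String[] args) {
        int[] failuresLeft = {2};            // simulate two conflicting rounds
        boolean ok = retry(() -> failuresLeft[0]-- <= 0, 5);
        System.out.println(ok);              // prints "true" on the third attempt
    }
}
```

Bounding the retries (rather than looping forever) keeps latency predictable when write conflicts spike, which is exactly the weak spot of optimistic locking noted earlier.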
SETNX Distributed Lock
SETNX (SET if Not eXists) is an atomic Redis operation: if the key does not exist, the value is written and 1 is returned; if it already exists, 0 is returned. This property can be used to implement a cross-process mutual-exclusion lock.
Recommended approach (atomic set with expiration):
// One command sets NX and EX together, avoiding the non-atomic setnx + expire problem
String result = jedis.set(lockKey, requestId, "NX", "EX", expireTime);
if ("OK".equals(result)) {
    // Lock acquired successfully
}
The early two-step approach (SETNX followed by EXPIRE) risked a lock that never expires if the client crashed between the two commands. The atomic SET key value NX EX seconds command (available since Redis 2.6.12) solves this problem.
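The behavior of the atomic variant can be observed directly in redis-cli: the second SET with NX fails because the key already exists, and the original holder's value is untouched.

```
127.0.0.1:6379> SET lock client-1 NX EX 30
OK
127.0.0.1:6379> SET lock client-2 NX EX 30
(nil)
127.0.0.1:6379> GET lock
"client-1"
```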
Releasing the lock must also be atomic, which calls for a Lua script:
// Verify requestId first, then delete, to avoid deleting another client's lock
String script = "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "return redis.call('del', KEYS[1]) else return 0 end";
jedis.eval(script,
        Collections.singletonList(lockKey),
        Collections.singletonList(requestId));
The problem with a plain DEL: if the lock has already expired and been acquired by another client, the current client's DEL would delete that client's lock, creating a race condition. The Lua script executes atomically inside Redis, eliminating this window entirely.
Four properties that distributed locks must satisfy:
| Property | Description |
|---|---|
| Mutual exclusion | Only one client holds lock at any time |
| Ownership | Only the lock holder can release it |
| Reentrancy | The holder can acquire the same lock again without deadlocking |
| Fault tolerance | Auto-expiration prevents permanent blocking |
Redisson Framework
A hand-written SETNX lock becomes cumbersome once you need lock renewal, reentrancy, or Redlock; Redisson is the production-grade solution.
Maven dependency:
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson</artifactId>
<version>2.7.0</version>
</dependency>
Connection configuration (cluster mode):
Config config = new Config();
config.useClusterServers()
        .addNodeAddress("redis://h121.wzk.icu:6379")
        .addNodeAddress("redis://h122.wzk.icu:6379");
Redisson redisson = Redisson.create(config);
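For a single Redis instance rather than a cluster, the equivalent configuration uses `useSingleServer`; the address below is a placeholder assumption.

```java
Config config = new Config();
config.useSingleServer()
        .setAddress("redis://127.0.0.1:6379");
RedissonClient redisson = Redisson.create(config);
```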
Acquire / Release:
public static boolean acquire(String lockName) {
    RLock rLock = redisson.getLock("redisLock_" + lockName);
    // Explicit lease: hold the lock for at most 3 seconds.
    // Note: passing a lease time disables the watchdog auto-renewal described below.
    rLock.lock(3, TimeUnit.SECONDS);
    return true;
}

public static void release(String lockName) {
    RLock rLock = redisson.getLock("redisLock_" + lockName);
    rLock.unlock();
}
Internally, Redisson implements acquire, renew, and release as atomic Lua scripts and supports watchdog auto-renewal: when lock() is called without an explicit lease time, a background task fires every lockWatchdogTimeout / 3 (10 seconds by default, since lockWatchdogTimeout defaults to 30 seconds), checks whether the business logic is still running, and if so extends the lock's expiration. This solves the "business not finished, lock already expired" problem.
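The watchdog mechanism can be sketched in plain Java to make the renewal cadence concrete. This is an illustrative simplification of the idea, not Redisson's implementation: a scheduled task extends a lease every timeout / 3 for as long as the holder is still running.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class WatchdogSketch {
    private static final long LEASE_MILLIS = 30_000;  // plays the role of lockWatchdogTimeout
    private final AtomicLong expireAt = new AtomicLong();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void acquire() {
        renew();                                      // take the initial lease
        // renew every LEASE_MILLIS / 3, mirroring Redisson's cadence
        scheduler.scheduleAtFixedRate(() -> {
            if (running.get()) renew();
        }, LEASE_MILLIS / 3, LEASE_MILLIS / 3, TimeUnit.MILLISECONDS);
    }

    void renew() {                                    // extend the lease from "now"
        expireAt.set(System.currentTimeMillis() + LEASE_MILLIS);
    }

    public void release() {
        running.set(false);                           // stop renewing
        scheduler.shutdownNow();
        expireAt.set(0);
    }

    public long millisUntilExpiry() {
        return expireAt.get() - System.currentTimeMillis();
    }
}
```

The key design point the sketch preserves: renewal only happens while the holder is alive, so a crashed process simply stops renewing and the lease expires on its own, which is the fault-tolerance property from the table above.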
Scheme Comparison and Selection
| Scheme | Performance | Reliability | Implementation Complexity |
|---|---|---|---|
| Redis WATCH Optimistic Lock | High | Medium | Low |
| Redis SETNX | Very High | Medium | Low |
| Redis Lua Atomic Release | High | High | Medium |
| Redisson | High | High | Low (framework encapsulated) |
| ZooKeeper | Lower | Very High | High |
Selection suggestions: for simple mutual-exclusion or deduplication scenarios, use SETNX with a Lua release; for high-concurrency production workloads that need reentrancy and auto-renewal, prefer Redisson; for read-heavy, write-light scenarios with low conflict probability, the WATCH optimistic lock is sufficient.