Preface
Mainstream distributed locks are generally implemented in one of three ways:
- Optimistic locking in a database
- Redis-based distributed lock
- ZooKeeper-based distributed lock
I previously wrote on my blog about the concrete schemes for implementing distributed locks with MySQL and Redis, mainly from the angle of implementation principles: https://www.cnblogs.com/wang-meng/p/10226618.html
This [distributed lock] series focuses on the implementation principles of the Redis client Redisson, read from its source code, and of ZooKeeper-based distributed locks.
Reliability
First, to guarantee that a distributed lock is usable, the implementation must satisfy at least the following four conditions at the same time:
- Mutual exclusion: only one client can hold the lock at any given time.
- No deadlock: even if a client crashes while holding the lock without actively releasing it, other clients must still be able to acquire the lock afterwards.
- Fault tolerance: as long as the majority of Redis nodes are running normally, clients can lock and unlock.
- Whoever ties the bell must untie it: locking and unlocking must be done by the same client; a client cannot release a lock held by someone else.
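The first and fourth conditions can be illustrated with a minimal in-memory sketch (the class and method names below are hypothetical, and this is not Redis-backed): only one owner may hold a given lock key, and only that owner may release it.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class OwnerLock {
    // lock key -> current owner
    private final ConcurrentMap<String, String> locks = new ConcurrentHashMap<>();

    // Succeeds only if nobody currently holds lockKey (mutual exclusion).
    public boolean tryLock(String lockKey, String owner) {
        return locks.putIfAbsent(lockKey, owner) == null;
    }

    // Succeeds only if the caller is the current owner
    // ("whoever ties the bell must untie it").
    public boolean unlock(String lockKey, String owner) {
        return locks.remove(lockKey, owner);
    }
}
```

The other two conditions (no deadlock, fault tolerance) are exactly what an in-memory map cannot give you; they are what the TTL and the Redis deployment provide in the real implementation.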
Redisson locking principle
Redisson is a very powerful open-source Redis client framework. It is easy to use: add the Maven dependency, configure the connection information, and you can go straight to the code:
```java
RLock lock = redisson.getLock("anyLock");
lock.lock();
lock.unlock();
```
The actual locking logic in Redisson is implemented with a Lua script, which guarantees atomicity.
First look at the RLock initialization code:
```java
public class Redisson implements RedissonClient {
    @Override
    public RLock getLock(String name) {
        return new RedissonLock(connectionManager.getCommandExecutor(), name);
    }
}

public class RedissonLock extends RedissonExpirable implements RLock {
    public RedissonLock(CommandAsyncExecutor commandExecutor, String name) {
        super(commandExecutor, name);
        this.commandExecutor = commandExecutor;
        this.id = commandExecutor.getConnectionManager().getId();
        this.internalLockLeaseTime = commandExecutor.getConnectionManager().getCfg().getLockWatchdogTimeout();
        this.entryName = id + ":" + name;
    }
}
```
First, look at the `id` field of RedissonLock: it is a UUID, and each client instance has its own `id`, with a value like "8743c9c0-0795-4907-87fd-6c719a6b4586".
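As a quick illustration, the hash field name that ends up in Redis is simply this UUID joined with the thread id. The helper below is a hypothetical stand-in for Redisson's `getLockName(threadId)`:

```java
import java.util.UUID;

public class LockNameDemo {
    // Hypothetical mirror of Redisson's getLockName(threadId):
    // the hash field is "<client UUID>:<thread id>".
    public static String getLockName(String id, long threadId) {
        return id + ":" + threadId;
    }

    public static void main(String[] args) {
        String id = UUID.randomUUID().toString(); // one per client instance
        long threadId = Thread.currentThread().getId();
        // e.g. "8743c9c0-0795-4907-87fd-6c719a6b4586:1"
        System.out.println(getLockName(id, threadId));
    }
}
```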
Next, look back at the code implementation of lock():
```java
public class RedissonLock extends RedissonExpirable implements RLock {
    @Override
    public void lock() {
        try {
            lockInterruptibly();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        lockInterruptibly(-1, null);
    }

    @Override
    public void lockInterruptibly(long leaseTime, TimeUnit unit) throws InterruptedException {
        // Get current thread id
        long threadId = Thread.currentThread().getId();
        Long ttl = tryAcquire(leaseTime, unit, threadId);
        // lock acquired
        if (ttl == null) {
            return;
        }
        RFuture<RedissonLockEntry> future = subscribe(threadId);
        commandExecutor.syncSubscription(future);
        try {
            while (true) {
                ttl = tryAcquire(leaseTime, unit, threadId);
                // lock acquired
                if (ttl == null) {
                    break;
                }
                // waiting for message
                if (ttl >= 0) {
                    getEntry(threadId).getLatch().tryAcquire(ttl, TimeUnit.MILLISECONDS);
                } else {
                    getEntry(threadId).getLatch().acquire();
                }
            }
        } finally {
            unsubscribe(future, threadId);
        }
    }

    <T> RFuture<T> tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
        internalLockLeaseTime = unit.toMillis(leaseTime);
        return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, command,
                "if (redis.call('exists', KEYS[1]) == 0) then " +
                    "redis.call('hset', KEYS[1], ARGV[2], 1); " +
                    "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                    "return nil; " +
                "end; " +
                "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
                    "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
                    "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                    "return nil; " +
                "end; " +
                "return redis.call('pttl', KEYS[1]);",
                Collections.<Object>singletonList(getName()), internalLockLeaseTime, getLockName(threadId));
    }
}
```
Some intermediate code is omitted here. The key method is tryAcquire(): the lease time passed in is -1, together with the current thread id, and the core work is done by the Lua script. Let's walk through how it executes step by step:
```lua
if (redis.call('exists', KEYS[1]) == 0) then
    redis.call('hset', KEYS[1], ARGV[2], 1);
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return nil;
end;
```
The KEYS[1] parameter is "anyLock", and ARGV[2] is the lock name built as `id + ":" + threadId`.
First, exists checks whether the key is already present in Redis. If it is not (the call returns 0), the hset command stores the field `id:threadId` with value 1 under the "anyLock" hash. The resulting data in Redis looks like:
{
    "8743c9c0-0795-4907-87fd-6c719a6b4586:1": 1
}
As an aside, the trailing 1 is the re-entry count, which will be explained later.
Next, pexpire sets the expiration time; internalLockLeaseTime defaults to 30 s. Finally the script returns nil, which means the lock was acquired successfully right away.
Redisson reentrant principle
Now let's look at how the same thread on the same machine acquires the lock again when the lock key already exists:
```lua
if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then
    redis.call('hincrby', KEYS[1], ARGV[2], 1);
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return nil;
end;
return redis.call('pttl', KEYS[1]);
```
Again, ARGV[2] is `id + ":" + threadId`. If the same machine and thread request the lock again, hexists returns 1, so hincrby increments the value set by hset from 1 to 2, and the expiration time is refreshed.
Symmetrically, after a re-entrant acquisition, each unlock decrements the value by 1.
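The three branches of the locking script can be mimicked in plain Java. This is an illustrative in-memory sketch with hypothetical names, not how Redisson actually talks to Redis; it returns null on success (like the script) or the remaining TTL in milliseconds on failure:

```java
import java.util.HashMap;
import java.util.Map;

public class LuaLockSketch {
    static class Entry {
        final Map<String, Long> hash = new HashMap<>(); // field -> re-entry count
        long expireAt;                                  // pexpire deadline (ms epoch)
    }

    private final Map<String, Entry> store = new HashMap<>();

    public Long tryLock(String key, String field, long leaseMillis) {
        long now = System.currentTimeMillis();
        Entry e = store.get(key);
        if (e == null || e.expireAt <= now) {      // exists == 0
            e = new Entry();
            e.hash.put(field, 1L);                 // hset key field 1
            e.expireAt = now + leaseMillis;        // pexpire
            store.put(key, e);
            return null;                           // lock acquired
        }
        if (e.hash.containsKey(field)) {           // hexists == 1: same client+thread
            e.hash.merge(field, 1L, Long::sum);    // hincrby +1 (re-entry)
            e.expireAt = now + leaseMillis;        // refresh pexpire
            return null;
        }
        return e.expireAt - now;                   // pttl: someone else holds it
    }
}
```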
Redisson watchDog principle
Consider a scenario: A and B both run business logic, and A holds the distributed lock. If A's lock expires while A's business logic is still running, B acquires the lock and starts executing its own logic. The distributed lock has then lost its meaning.
To solve this, Redisson introduces the watchdog concept: after A acquires the lock, as long as A still holds it, a background task automatically extends the lock's expiration time, preventing the lock from expiring before the business logic completes.
Let's take a look at the specific implementation:
```java
private <T> RFuture<Long> tryAcquireAsync(long leaseTime, TimeUnit unit, final long threadId) {
    if (leaseTime != -1) {
        return tryLockInnerAsync(leaseTime, unit, threadId, RedisCommands.EVAL_LONG);
    }
    RFuture<Long> ttlRemainingFuture = tryLockInnerAsync(
            commandExecutor.getConnectionManager().getCfg().getLockWatchdogTimeout(),
            TimeUnit.MILLISECONDS, threadId, RedisCommands.EVAL_LONG);
    ttlRemainingFuture.addListener(new FutureListener<Long>() {
        @Override
        public void operationComplete(Future<Long> future) throws Exception {
            if (!future.isSuccess()) {
                return;
            }
            Long ttlRemaining = future.getNow();
            // lock acquired
            if (ttlRemaining == null) {
                scheduleExpirationRenewal(threadId);
            }
        }
    });
    return ttlRemainingFuture;
}
```
After tryLockInnerAsync executes, a listener is added to the returned future: if the lock was acquired (ttlRemaining == null), scheduleExpirationRenewal(threadId) starts the renewal task, whose core is the Lua script in renewExpirationAsync():
```java
protected RFuture<Boolean> renewExpirationAsync(long threadId) {
    return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
            "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
                "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                "return 1; " +
            "end; " +
            "return 0;",
            Collections.<Object>singletonList(getName()),
            internalLockLeaseTime, getLockName(threadId));
}
```
This scheduled task runs every 10 seconds (one third of the default 30 s lease), and the Lua script extends the expiration time, so a lock held by the current thread does not expire just because its original TTL has elapsed.
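The renewal rhythm can be sketched with a plain ScheduledExecutorService standing in for Redisson's internal timer (all names here are hypothetical; treat this purely as an illustration of the leaseTime/3 schedule):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class WatchdogSketch {
    private final AtomicLong expireAt = new AtomicLong();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long leaseMillis;

    public WatchdogSketch(long leaseMillis) {
        this.leaseMillis = leaseMillis;
    }

    // Push the deadline forward, like the pexpire in renewExpirationAsync().
    public long renew() {
        long deadline = System.currentTimeMillis() + leaseMillis;
        expireAt.set(deadline);
        return deadline;
    }

    // Renew every leaseMillis/3, mirroring the 30s lease / 10s renewal rhythm.
    public void start() {
        renew();
        long period = leaseMillis / 3;
        scheduler.scheduleAtFixedRate(this::renew, period, period, TimeUnit.MILLISECONDS);
    }

    // On unlock the renewal task must be cancelled, or the key never expires.
    public void stop() {
        scheduler.shutdownNow();
    }

    public long expireAt() {
        return expireAt.get();
    }
}
```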
Redisson's principle of mutual exclusion
Look again at the locking Lua script: when another client already holds the lock, execution falls through to the last line:
```lua
return redis.call('pttl', KEYS[1]);
```
This returns how many milliseconds remain before the lock expires. Let's continue with the code:
```java
@Override
public void lockInterruptibly(long leaseTime, TimeUnit unit) throws InterruptedException {
    long threadId = Thread.currentThread().getId();
    Long ttl = tryAcquire(leaseTime, unit, threadId);
    // ttl == null means the lock was acquired; a non-null ttl means locking failed
    if (ttl == null) {
        return;
    }
    RFuture<RedissonLockEntry> future = subscribe(threadId);
    commandExecutor.syncSubscription(future);
    try {
        // Loop, trying to acquire the lock
        while (true) {
            // Try locking again
            ttl = tryAcquire(leaseTime, unit, threadId);
            // If ttl == null, the lock was acquired
            if (ttl == null) {
                break;
            }
            // If ttl >= 0, acquisition failed; wait on a Semaphore latch
            // for at most ttl milliseconds (explained later)
            if (ttl >= 0) {
                getEntry(threadId).getLatch().tryAcquire(ttl, TimeUnit.MILLISECONDS);
            } else {
                getEntry(threadId).getLatch().acquire();
            }
        }
    } finally {
        unsubscribe(future, threadId);
    }
}
```
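The latch mechanics in that retry loop can be sketched with a java.util.concurrent.Semaphore. The names below are hypothetical; in Redisson the Semaphore lives inside RedissonLockEntry and a permit is released when the pub/sub unlock message arrives:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class LockWaiterSketch {
    private final Semaphore latch = new Semaphore(0);

    // Called when the pub/sub unlock message arrives.
    public void onUnlockMessage() {
        latch.release();
    }

    // Block until either an unlock notification arrives or the lock's
    // remaining TTL elapses; the caller then loops back and retries tryAcquire.
    public boolean awaitRetry(long ttlMillis) {
        try {
            if (ttlMillis >= 0) {
                return latch.tryAcquire(ttlMillis, TimeUnit.MILLISECONDS);
            }
            latch.acquire();
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

Waking up on the TTL deadline, not only on the unlock message, is what lets a waiter proceed even if the holder crashed and the key expired on its own.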
Redisson lock release principle
Look directly at the Lua code:
```java
protected RFuture<Boolean> unlockInnerAsync(long threadId) {
    return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
            // Check whether the lock key still exists
            "if (redis.call('exists', KEYS[1]) == 0) then " +
                "redis.call('publish', KEYS[2], ARGV[1]); " +
                "return 1; " +
            "end;" +
            // Check whether the field for the current machine and thread id exists
            "if (redis.call('hexists', KEYS[1], ARGV[3]) == 0) then " +
                "return nil;" +
            "end; " +
            // Decrement the re-entry counter
            "local counter = redis.call('hincrby', KEYS[1], ARGV[3], -1); " +
            // If the counter is greater than 0, the lock is still held (re-entrant)
            "if (counter > 0) then " +
                "redis.call('pexpire', KEYS[1], ARGV[2]); " +
                "return 0; " +
            "else " +
                // Delete the key with the del command and publish the unlock message
                "redis.call('del', KEYS[1]); " +
                "redis.call('publish', KEYS[2], ARGV[1]); " +
                "return 1; " +
            "end; " +
            "return nil;",
            Arrays.<Object>asList(getName(), getChannelName()), LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(threadId));
}
```
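The unlock script's branches can likewise be mimicked in plain Java (an illustrative in-memory sketch with hypothetical names). Mirroring the script's return values: null if the caller does not hold the lock, 0 if the lock is still held after a re-entrant release, and 1 if it was fully released:

```java
import java.util.HashMap;
import java.util.Map;

public class LuaUnlockSketch {
    final Map<String, Map<String, Long>> store = new HashMap<>();
    boolean published = false; // stands in for redis.call('publish', ...)

    public Integer unlock(String key, String field) {
        Map<String, Long> hash = store.get(key);
        if (hash == null) {                               // exists == 0
            published = true;
            return 1;
        }
        if (!hash.containsKey(field)) {                   // hexists == 0: not the owner
            return null;
        }
        long counter = hash.merge(field, -1L, Long::sum); // hincrby -1
        if (counter > 0) {                                // still re-entrantly held
            return 0;                                     // (pexpire refresh omitted)
        }
        store.remove(key);                                // del key
        published = true;                                 // publish unlock message
        return 1;
    }
}
```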