VI. Explicit Lock and AQS

Keywords: Java, JVM, JDK


I. Explicit Lock

The synchronized keyword, working together with the object's monitor, gives us a kind of "built-in lock" semantics provided by the JVM. This kind of lock is very simple to use: we don't need to care about the process of acquiring and releasing the lock; we just tell the virtual machine which code blocks need to be locked, and the compiler and the virtual machine handle the other details themselves.

Our "built-in lock" can be understood as a built-in feature of JVM, so a significant problem is that it does not support customization of some advanced functions, for example, I want this lock to support fair competition, I want to block threads on different queues according to different conditions, I want to support timed competition lock, timeout return, and I want to be blocked. The thread can respond to interrupt requests and so on.

These special requirements cannot be met by the built-in lock. Therefore, the concept of an "explicit lock" is introduced at the JDK level. The JVM is no longer responsible for acquiring and releasing the lock; these two actions are handed over to our program. Programming this way is inevitably more complex, but the lock becomes more flexible and supports more customization, although it requires a deeper understanding of locks.

[1] Lock

The Lock interface is located in the java.util.concurrent.locks package. Its basic definition is as follows:

public interface Lock {
    // Acquire the lock, blocking until it is available
    void lock();
    // Acquire the lock unless the current thread is interrupted while waiting
    void lockInterruptibly() throws InterruptedException;
    // Try to acquire the lock once without blocking; true on success, false on failure
    boolean tryLock();
    // Timed attempt: try to acquire the lock within the given waiting time
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
    // Release the lock
    void unlock();
    // Create a condition queue bound to this lock
    Condition newCondition();
}
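
The typical usage pattern is to acquire the lock before the critical section and release it in a finally block, so the lock is released even if an exception is thrown. A minimal sketch (the Counter class and increment() method are only illustrative, not part of the JDK):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();           // block until the lock is acquired
        try {
            count++;           // critical section
        } finally {
            lock.unlock();     // always release, even if an exception was thrown
        }
    }
}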

There are three main implementation classes of the explicit Lock interface: ReentrantLock, which is the primary implementation, and ReadLock and WriteLock, two inner classes defined by ReentrantReadWriteLock. All three implement the Lock interface and all the methods it defines; the latter two refine it by separating reads from writes, so ReentrantReadWriteLock provides a read lock and a write lock.

[2] ReentrantLock

ReentrantLock, as the most basic implementation of the explicit Lock interface, is also the most frequently used lock implementation class. Reentrant means that the same thread can acquire the lock again while already holding it, for example during recursive calls.

It provides two constructors, the second of which allows you to request a fair lock (a brief sketch follows the constructor signatures).

public ReentrantLock()
//The fair parameter specifies whether a fair acquisition policy is used
public ReentrantLock(boolean fair)
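
As a minimal sketch (assuming nothing beyond ReentrantLock itself), the following shows a fair lock and a reentrant, nested acquisition; every lock() must be matched by an unlock():

import java.util.concurrent.locks.ReentrantLock;

class ReentrantDemo {
    // true -> fair lock; the no-argument constructor creates an unfair lock
    private final ReentrantLock lock = new ReentrantLock(true);

    public void outer() {
        lock.lock();
        try {
            inner();                 // the same thread re-acquires the lock it already holds
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock();                 // hold count becomes 2
        try {
            System.out.println("hold count = " + lock.getHoldCount());
        } finally {
            lock.unlock();           // hold count drops back to 1
        }
    }
}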

The difference between a fair lock and an unfair lock lies in how the next thread to own the lock is selected: a fair lock follows the first-come-first-served principle, so the thread that has waited longest has the highest priority, while an unfair lock ignores this principle.

Suppose thread A holds the lock, thread B fails to acquire it and is blocked, and then thread C also attempts to acquire the lock. Under a fair policy, although C only needs a short running time, it still has to wait until B has run before it gets a chance to acquire the lock.

Under an unfair lock, when A finishes, thread B at the head of the queue is selected and a context switch to B begins. If C comes to compete for the lock at this moment, the unfair policy lets C acquire the lock immediately; assuming C executes quickly, it will already have finished by the time B is switched back in, so B can still acquire the lock without any problem. In effect, C gets its work done during the context switch to B. Clearly, the unfair policy improves CPU throughput.

However, an unfair lock may cause some threads to starve and never run. Each policy has its own advantages and disadvantages, so choose according to the situation.

[3] ReadWriteLock

① Like Lock, ReadWriteLock is an interface. It provides two lock operations, readLock() and writeLock(): one is a read lock and the other is a write lock.

The read lock can be held by multiple threads at the same time, as long as no write lock is held. The write lock is exclusive: at most one writing thread can hold it at a time, while multiple threads can read data concurrently.

All read-write lock implementations must guarantee that the memory effects of write operations are visible to read operations. In other words, a thread that acquires the read lock must see everything that was updated under a previously released write lock.

Theoretically, read-write locks allow a greater degree of concurrency on shared data than mutexes. Whether a read-write lock actually improves performance compared with a mutex depends on how frequently the data is read versus written, the duration of the read and write operations, and the contention between reader and writer threads.

② Principle of mutual exclusion:

  • Read and read can coexist.
  • Read and write cannot coexist.
  • Write and write cannot coexist.

③ ReentrantReadWriteLock

ReentrantReadWriteLock is the implementation class of ReadWriteLock.

This lock allows both reading and writing threads to reacquire the read or write lock in the style of ReentrantLock. Non-reentrant readers are not allowed in until all write locks held by the writing thread have been released.

In addition, a thread holding the write lock can acquire the read lock, but a thread holding only the read lock cannot acquire the write lock. Among other applications, this reentrancy is useful when a write lock is held during a method call or callback that performs reads under the read lock.
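
Because downgrading (write lock -> read lock) is allowed while upgrading is not, a common pattern is to take the read lock before releasing the write lock. A minimal sketch, with CachedValue and refreshAndRead() being illustrative names only:

import java.util.concurrent.locks.ReentrantReadWriteLock;

class CachedValue {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private int value;

    public int refreshAndRead(int newValue) {
        rwl.writeLock().lock();        // exclusive access for the update
        try {
            value = newValue;
            rwl.readLock().lock();     // downgrade: acquire the read lock while still holding the write lock
        } finally {
            rwl.writeLock().unlock();  // release the write lock, keep the read lock
        }
        try {
            return value;              // read under the read lock only
        } finally {
            rwl.readLock().unlock();
        }
    }
}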

A complete example with many readers and one writer is as follows:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TestReadWriteLock {

    public static void main(String[] args){
        final ReadWriteLockDemo rwd = new ReadWriteLockDemo();
        //Start 100 read threads
        for (int i = 0; i < 100; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rwd.get();
                }
            }).start();
        }
        //Writing thread
        new Thread(new Runnable() {
            @Override
            public void run() {
                rwd.set((int)(Math.random()*101));
            }
        },"Write").start();
    }
}

class ReadWriteLockDemo{
    //Simulated shared resource -- number
    private int number = 0;
    // Actual implementation class -- ReentrantReadWriteLock, default unfair mode
    private ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    //read
    public void get(){
        //Using read lock
        readWriteLock.readLock().lock();
        try {
            System.out.println(Thread.currentThread().getName()+" : "+number);
        }finally {
            readWriteLock.readLock().unlock();
        }
    }
    //write
    public void set(int number){
        readWriteLock.writeLock().lock();
        try {
            this.number = number;
            System.out.println(Thread.currentThread().getName()+" : "+number);
        }finally {
            readWriteLock.writeLock().unlock();
        }
    }
}
/**
Thread-50 : 0
Thread-19 : 0
Thread-54 : 0
Thread-57 : 0
Thread-31 : 0
Write : 40
Thread-61 : 40
Thread-62 : 40
Thread-35 : 40
Thread-32 : 40
    
*/

First the read threads run and print number as 0; then, at some point, the write thread modifies the shared number, and the read threads that run afterwards read the latest value.

II. In-depth Analysis of AQS

[1] What is AQS

AQS is the abbreviation of AbstractQueuedSynchronizer. It is a low-level synchronization utility class provided by Java. It uses an int variable to represent the synchronization state and provides a series of CAS operations to manage that state. The main function of AQS is to provide unified low-level support for Java's concurrent synchronization components; for example, ReentrantLock and CountDownLatch are implemented on top of AQS. The usual way to use AQS is to subclass it, implement its template methods, and make the subclass an internal class of the synchronization component.

[2] Synchronization queue

The synchronization queue is an important part of AQS. It is a double-ended queue that follows the FIFO principle, and its main function is to store the threads blocked on the lock. When a thread attempts to acquire the lock and the lock is already held, the current thread is wrapped into a Node and appended to the tail of the synchronization queue. The head node of the queue is the node that has successfully acquired the lock; when the head node's thread releases the lock, it wakes up its successor node, and the successor releases the reference to the old head node when it becomes the new head.

[3] State

AbstractQueuedSynchronizer maintains a volatile int variable that is used to represent the current synchronization state. Although volatile does not guarantee the atomicity of compound operations, it does guarantee the visibility of the current state. For the detailed semantics of volatile, please refer to my related articles. There are three methods for accessing state:

  • getState()
  • setState()
  • compareAndSetState()

All three operations are atomic; the implementation of compareAndSetState() relies on Unsafe's compareAndSwapInt() method. The code is as follows:

    /**
     * The synchronization state.
     */
    private volatile int state;

    /**
     * Returns the current value of synchronization state.
     * This operation has memory semantics of a {@code volatile} read.
     * @return current state value
     */
    protected final int getState() {
        return state;
    }

    /**
     * Sets the value of synchronization state.
     * This operation has memory semantics of a {@code volatile} write.
     * @param newState the new state value
     */
    protected final void setState(int newState) {
        state = newState;
    }

    /**
     * Atomically sets synchronization state to the given updated
     * value if the current state value equals the expected value.
     * This operation has memory semantics of a {@code volatile} read
     * and write.
     *
     * @param expect the expected value
     * @param update the new value
     * @return {@code true} if successful. False return indicates that the actual
     *         value was not equal to the expected value.
     */
    protected final boolean compareAndSetState(int expect, int update) {
        // See below for intrinsics setup to support this
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
    }

[4] Resource sharing

AQS defines two resource-sharing modes: Exclusive (only one thread can execute at a time, e.g. ReentrantLock) and Share (multiple threads can execute at the same time, e.g. Semaphore/CountDownLatch).

Different custom synchronizers compete for the shared resource in different ways. A custom synchronizer only needs to implement how the state of the shared resource is acquired and released; the maintenance of the waiting queue of threads (enqueueing threads that fail to acquire the resource, waking them up and dequeueing them, and so on) is already implemented by AQS at the top level. When implementing a custom synchronizer, the following methods are the main ones to override (a shared-mode sketch follows the list):

  • isHeldExclusively(): whether the current thread holds the resource exclusively. It only needs to be implemented if Condition is used.
  • tryAcquire(int): exclusive mode. Try to acquire the resource; return true on success, false on failure.
  • tryRelease(int): exclusive mode. Try to release the resource; return true on success, false on failure.
  • tryAcquireShared(int): shared mode. Try to acquire the resource. A negative number indicates failure; 0 indicates success but no remaining resources; a positive number indicates success with remaining resources.
  • tryReleaseShared(int): shared mode. Try to release the resource; return true if waiting nodes are allowed to wake up after the release, otherwise false.
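
For the shared mode, here is a minimal sketch modeled on the BooleanLatch example in the AbstractQueuedSynchronizer Javadoc; it only implements tryAcquireShared and tryReleaseShared (an exclusive-mode counterpart, Mutex, is shown later in this article):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// One-shot latch: once signal() is called, every await() returns immediately.
class BooleanLatch {
    private static class Sync extends AbstractQueuedSynchronizer {
        boolean isSignalled() { return getState() != 0; }

        // A shared acquire succeeds (returns >= 0) only after the latch has been signalled
        protected int tryAcquireShared(int ignore) {
            return isSignalled() ? 1 : -1;
        }

        // A shared release sets the state and lets waiting threads wake up
        protected boolean tryReleaseShared(int ignore) {
            setState(1);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public boolean isSignalled()                    { return sync.isSignalled(); }
    public void signal()                            { sync.releaseShared(1); }
    public void await() throws InterruptedException { sync.acquireSharedInterruptibly(1); }
}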

[5] Process of acquiring and releasing the lock

The following explains the process of acquiring and releasing locks in AQS based on exclusive locks:

Acquire:

  1. Call tryAcquire() of the custom synchronizer to try to acquire the resource directly; if it succeeds, return immediately.
  2. If it fails, addWaiter() adds the thread to the tail of the waiting queue and marks it as exclusive mode.
  3. acquireQueued() keeps the thread resting in the waiting queue; whenever it gets a chance (it is its turn and it is unpark()ed), it tries to acquire the resource, and it only returns after the resource is obtained. If the thread was interrupted at any point while waiting, acquireQueued() returns true, otherwise false.
  4. If the thread is interrupted while waiting, it does not respond immediately; selfInterrupt() is called after the resource is obtained to make up for the missed interrupt.

Source code analysis:

acquire() obtains the resource in exclusive mode. If the resource is acquired, the thread returns directly; otherwise it enters the waiting queue until the resource is acquired, and the whole process ignores interruption. This method is the top-level entry for a thread to obtain the shared resource in exclusive mode; once the resource is obtained, the thread can execute its critical-section code. Here is the source code of acquire():

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

tryAcquire() attempts to acquire the resource exclusively. If the acquisition succeeds it returns true, otherwise false. This method can be used to implement the tryLock() method of Lock. Its default implementation simply throws UnsupportedOperationException; the concrete behavior is provided by the custom synchronization class that extends AQS. AQS only defines the common method framework here. The reason the method is not declared abstract is that an exclusive-mode synchronizer only needs to implement tryAcquire/tryRelease, while a shared-mode synchronizer only needs to implement tryAcquireShared/tryReleaseShared; if all of them were abstract, each mode would also have to implement the methods of the other mode.

protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}

addWaiter(Node mode) wraps the current thread into a Node with the given mode. If the queue is not empty, the node is appended to the tail of the waiting queue with a CAS via the compareAndSetTail method; otherwise the waiting queue is initialized through the enq(node) method. In both cases the current node is returned. The source code is as follows:

private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode);
    // Try the fast path of enq; backup to full enq on failure
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    enq(node);
    return node;
}

acquireQueued() is used by a thread already in the queue to acquire the synchronization state in an exclusive, uninterruptible way; it spins until the lock is obtained. The implementation has two parts: if the current node's predecessor is the head node, it tries to acquire the lock, and on success sets itself as the new head and returns; otherwise it checks whether the current node should park, parks the thread if so, and records whether the thread was interrupted while parked.

final boolean acquireQueued(final Node node, int arg) {
    //Marks whether the acquisition ultimately failed; assume failure until the lock is obtained
    boolean failed = true;
    try {
        boolean interrupted = false;//Mark whether the waiting process has been interrupted
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

Release:

The release method is the top-level entry for a thread to release the shared resource in exclusive mode. It releases the specified amount of resources, and if the resource is completely released (i.e. state becomes 0), it wakes up a waiting thread in the queue so it can acquire the resource. This is essentially what unlock() means, although its use is not limited to unlock(). Here is the source code of release():

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

    /**
     * Attempts to set the state to reflect a release in exclusive
     * mode.
     *
     * <p>This method is always invoked by the thread performing release.
     *
     * <p>The default implementation throws
     * {@link UnsupportedOperationException}.
     *
     * @param arg the release argument. This value is always the one
     *        passed to a release method, or the current state value upon
     *        entry to a condition wait.  The value is otherwise
     *        uninterpreted and can represent anything you like.
     * @return {@code true} if this object is now in a fully released
     *         state, so that any waiting threads may attempt to acquire;
     *         and {@code false} otherwise.
     * @throws IllegalMonitorStateException if releasing would place this
     *         synchronizer in an illegal state. This exception must be
     *         thrown in a consistent fashion for synchronization to work
     *         correctly.
     * @throws UnsupportedOperationException if exclusive mode is not supported
     */
protected boolean tryRelease(int arg) {
    throw new UnsupportedOperationException();
}

/**
     * Wakes up node's successor, if one exists.
     *
     * @param node the node
     */
private void unparkSuccessor(Node node) {
    /*
         * If status is negative (i.e., possibly needing signal) try
         * to clear in anticipation of signalling.  It is OK if this
         * fails or if status is changed by waiting thread.
         */
    int ws = node.waitStatus;
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);

    /*
         * Thread to unpark is held in successor, which is normally
         * just the next node.  But if cancelled or apparently null,
         * traverse backwards from tail to find the actual
         * non-cancelled successor.
         */
    Node s = node.next;
    if (s == null || s.waitStatus > 0) {
        s = null;
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null)
        LockSupport.unpark(s.thread);
}

Like tryAcquire() in acquire(), the tryRelease() method must be implemented by a custom exclusive-mode synchronizer. Normally tryRelease() will succeed, because in exclusive mode the thread that releases the resource must already hold it, so it can simply decrease the state by the corresponding amount (state -= arg) without worrying about thread safety. But pay attention to the return value: as mentioned above, release() uses the return value of tryRelease() to decide whether the thread has completely released the resource. So when implementing a custom synchronizer, return true if the resource has been completely released (state == 0), otherwise return false.

The unparkSuccessor(Node) method wakes up the next thread in the waiting queue. Note that "next" is not necessarily the next node of the current node, but the first non-cancelled successor that can be woken up; if such a node exists, unpark() is called on its thread.

In a word, release() is the top-level entry for a thread to release the shared resource in exclusive mode. It releases the specified amount of resources, and if the resource is completely released (i.e. state == 0), it wakes up a waiting thread in the queue so it can acquire the resource.

III. Custom Exclusive Lock

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

public class Mutex implements Lock {
    // Static inner class, custom synchronizer
    private static class Sync extends AbstractQueuedSynchronizer {
        // Occupied or not
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }

        // Acquire lock when status is 0
        public boolean tryAcquire(int acquires) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        // Release lock, set status to 0
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new
                    IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        // Returns a condition, each of which contains a condition queue
        Condition newCondition() {
            return new ConditionObject();
        }
    }

    // Just proxy the operation to Sync
    private final Sync sync = new Sync();

    public void lock() {
        sync.acquire(1);
    }

    public boolean tryLock() {
        return sync.tryAcquire(1);
    }

    public void unlock() {
        sync.release(1);
    }

    public Condition newCondition() {
        return sync.newCondition();
    }

    public boolean isLocked() {
        return sync.isHeldExclusively();
    }

    public boolean hasQueuedThreads() {
        return sync.hasQueuedThreads();
    }

    public void lockInterruptibly() throws InterruptedException {
        sync.acquireInterruptibly(1);
    }

    public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(timeout));
    }
}
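
A short usage sketch of the Mutex defined above (nothing beyond that class is assumed): it behaves like a non-reentrant exclusive lock, so a second lock() by the same thread would block.

public class MutexDemo {
    private static final Mutex mutex = new Mutex();
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                mutex.lock();
                try {
                    counter++;                 // protected by the Mutex
                } finally {
                    mutex.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("counter = " + counter); // always 20000
    }
}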
