An Analysis of the ReentrantLock Source Code in Java Concurrency

This article analyzes the source code of ReentrantLock from the Java concurrency package in detail. It should serve as a useful reference; interested readers are encouraged to read on.

Before Java 5.0, the only mechanisms available for coordinating access to shared objects were synchronized and volatile. The synchronized keyword implements built-in locks, while the volatile keyword guarantees memory visibility across threads. In most cases these mechanisms work well, but they cannot provide some more advanced features: there is no way to interrupt a thread waiting to acquire a lock, no way to acquire a lock with a time limit, and no way to implement non-block-structured locking, among others. More flexible locking mechanisms can usually offer better liveness or performance. Java 5.0 therefore added a new mechanism: ReentrantLock. The ReentrantLock class implements the Lock interface and provides the same mutual exclusion and memory visibility guarantees as synchronized; its underlying implementation achieves multithreaded synchronization through AQS (AbstractQueuedSynchronizer). ReentrantLock not only offers a richer locking mechanism than built-in locks, but its performance is also on par with them (and was even better in earlier JDK versions). Having listed so many advantages of ReentrantLock, let's open up its source code and take a look at how it is implemented.

1. Introduction to the synchronized keyword

Java provides built-in locks to support multithreaded synchronization. The JVM identifies synchronized code according to the synchronized keyword: a thread automatically acquires the lock when entering a synchronized block and automatically releases it on exit, and while one thread holds the lock, other threads are blocked. Every Java object can serve as a lock, and the synchronized keyword can modify instance methods, static methods, and code blocks. When it modifies an instance method or a static method, the lock is the object the method belongs to or the Class object, respectively; when it modifies a code block, an additional object must be supplied as the lock. Every Java object can act as a lock because the object header references an associated monitor object: a thread automatically takes hold of the monitor when entering the synchronized code and releases it on exit, and while the monitor is held, other threads are blocked.

Although all of these synchronization operations are implemented inside the JVM, methods and code blocks modified by synchronized differ slightly in their underlying implementation. A synchronized method is implicitly synchronized: no extra bytecode instructions are needed, and the JVM recognizes a synchronized method by the ACC_SYNCHRONIZED access flag in the method table. A synchronized code block is explicitly synchronized: the monitorenter and monitorexit bytecode instructions control the thread's acquisition and release of the monitor. The monitor object maintains a _count field internally: if _count is 0 the monitor is not held, and if _count is greater than 0 it is held. Every time the holding thread re-enters, _count is incremented by 1, and every time it exits, _count is decremented by 1; this is how built-in lock reentrancy is implemented. In addition, the monitor object contains two queues, _EntryList and _WaitSet, corresponding to the synchronization queue and condition queue of AQS. When a thread fails to acquire the lock, it blocks in _EntryList; when the wait method of the lock object is called, the thread enters _WaitSet to wait. This is how built-in locks implement thread synchronization and condition waiting.
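To make the two forms concrete, here is a minimal sketch (the Counter class and its fields are invented for illustration) showing a synchronized instance method, which the JVM recognizes through the ACC_SYNCHRONIZED flag, and a synchronized block, which is compiled into monitorenter/monitorexit instructions around the block:

public class Counter {
    // Extra object supplied as the lock for the synchronized block (illustrative)
    private final Object blockLock = new Object();
    private int count;

    // Synchronized instance method: the lock is this Counter instance;
    // the method is marked with ACC_SYNCHRONIZED, no explicit locking bytecode is emitted
    public synchronized void incrementViaMethod() {
        count++;
    }

    // Synchronized block: the lock is blockLock; the compiler emits
    // monitorenter/monitorexit instructions around the block
    public void incrementViaBlock() {
        synchronized (blockLock) {
            count++;
        }
    }
}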

2. Comparison between ReentrantLock and synchronized

The synchronized keyword is the built-in locking mechanism provided by Java, and its synchronization is implemented by the underlying JVM, while ReentrantLock is an explicit lock provided by the java.util.concurrent package, and its synchronization is supported by the AQS synchronizer. ReentrantLock provides the same locking and memory semantics as built-in locks, and it also offers some other features, including timed lock waits, interruptible lock waits, fair locking, and non-block-structured locking. In addition, ReentrantLock had a performance advantage in early JDK versions. So why still use the synchronized keyword when ReentrantLock has so many advantages? In fact, many people do use ReentrantLock as a replacement for synchronized. However, the built-in lock still has its own advantages. It is familiar to many developers and more concise and compact to use, and because an explicit lock must be unlocked manually in a finally block, the built-in lock is also somewhat safer to use. At the same time, future performance improvements are more likely to favor synchronized than ReentrantLock: because synchronized is a built-in property of the JVM, it can perform optimizations such as lock elision for thread-confined lock objects and lock coarsening to eliminate built-in lock synchronization, which is hard to achieve with library-based locks. Therefore, ReentrantLock should be used only when advanced features are needed, such as timed, polled, or interruptible lock acquisition, fair queuing, or non-block-structured locking; otherwise, synchronized should be preferred.
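For a concrete feel of the "advanced features" mentioned above, here is a small sketch (the class and method names are invented for illustration) of timed and interruptible lock acquisition; tryLock(long, TimeUnit) and lockInterruptibly() are standard methods of the Lock interface implemented by ReentrantLock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AdvancedLocking {
    private final ReentrantLock lock = new ReentrantLock();

    // Timed acquisition: give up if the lock cannot be obtained within one second
    public boolean updateWithTimeout() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // perform the guarded operation
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // could not acquire the lock in time
    }

    // Interruptible acquisition: a thread blocked here responds to interruption
    public void updateInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // perform the guarded operation
        } finally {
            lock.unlock();
        }
    }
}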

3. Acquiring and releasing the lock

Let's first look at sample code that uses ReentrantLock for locking.

public void doSomething() {
    // Defaults to a non-fair lock
    ReentrantLock lock = new ReentrantLock();
    // Lock before performing the operation
    lock.lock();
    try {
        // perform the operation
    } finally {
        // Release the lock at the end
        lock.unlock();
    }
}

The following are the APIs for acquiring and releasing the lock.

// Acquire the lock
public void lock() {
    sync.lock();
}

// Release the lock
public void unlock() {
    sync.release(1);
}

You can see that the operation of acquiring and releasing the lock is delegated to the lock and release methods of the Sync object, respectively.

public class ReentrantLock implements Lock, java.io.Serializable {
    private final Sync sync;

    abstract static class Sync extends AbstractQueuedSynchronizer {
        abstract void lock();
    }

    // Synchronizer implementing the non-fair lock
    static final class NonfairSync extends Sync {
        final void lock() { ... }
    }

    // Synchronizer implementing the fair lock
    static final class FairSync extends Sync {
        final void lock() { ... }
    }
}

Each ReentrantLock object holds a reference to the Sync type. The Sync class is an abstract inner class that inherits from AbstractQueuedSynchronizer, and the lock method in it is an abstract method. The member variable sync of ReentrantLock is assigned at construction time. Let's take a look at what the two constructors of ReentrantLock do.

// Default no-argument constructor
public ReentrantLock() {
    sync = new NonfairSync();
}

// Constructor with a fairness parameter
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

Calling the default no-argument constructor assigns a NonfairSync instance to sync, in which case the lock is a non-fair lock. The parameterized constructor lets you choose, via the fair parameter, whether a FairSync or a NonfairSync instance is assigned to sync. Both NonfairSync and FairSync inherit from the Sync class and override the lock() method, so fair and non-fair locks differ in how the lock is acquired, which we will discuss below. Let's look at the lock release operation first. Each call to the unlock() method simply performs sync.release(1), which invokes the release() method of the AbstractQueuedSynchronizer class. Let's review that method.
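As a quick illustration of the two constructors (the class name below is invented for illustration), the fairness chosen at construction time can be observed through the standard isFair() method:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // Default constructor: sync points to a NonfairSync instance (non-fair lock)
        ReentrantLock nonFairLock = new ReentrantLock();

        // Passing true makes sync point to a FairSync instance (fair lock)
        ReentrantLock fairLock = new ReentrantLock(true);

        // isFair() reports which synchronizer was chosen
        System.out.println(nonFairLock.isFair()); // false
        System.out.println(fairLock.isFair());    // true
    }
}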

// Release the lock (exclusive mode)
public final boolean release(int arg) {
    // Try to release the lock
    if (tryRelease(arg)) {
        // Get the head node
        Node h = head;
        // If the head node is not null and its wait status is not 0, wake up the successor node
        if (h != null && h.waitStatus != 0) {
            // Wake up the successor node
            unparkSuccessor(h);
        }
        return true;
    }
    return false;
}

This release method is the API that AQS provides for releasing the lock. It first calls the tryRelease method to try to release the lock. tryRelease is left for subclasses to implement, and its implementation logic lives in the Sync subclass.

// Attempt to release the lock
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // If the thread holding the lock is not the current thread, throw an exception
    if (Thread.currentThread() != getExclusiveOwnerThread()) {
        throw new IllegalMonitorStateException();
    }
    boolean free = false;
    // If the new synchronization state is 0, the lock is fully released
    if (c == 0) {
        // Mark the lock as released
        free = true;
        // Clear the owning thread
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

The tryRelease method first gets the current synchronization state and subtracts the passed-in parameter value from it to obtain the new synchronization state. It then checks whether the new synchronization state equals 0; if so, the lock has been fully released, so the release flag is set to true and the thread recorded as owning the lock is cleared. Finally, the setState method is called to store the new synchronization state, and the release flag is returned.
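This state arithmetic is exactly what makes the lock reentrant: each lock() call adds 1 to the state, each unlock() call subtracts 1, and the lock is only fully released when the state drops back to 0. Below is a small sketch of this behavior (the class name is invented for illustration; getHoldCount() and isLocked() are standard ReentrantLock methods):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();                             // state: 0 -> 1
        lock.lock();                             // reentrant acquisition, state: 1 -> 2
        System.out.println(lock.getHoldCount()); // 2

        lock.unlock();                           // tryRelease: state 2 -> 1, lock still held
        System.out.println(lock.isLocked());     // true

        lock.unlock();                           // tryRelease: state 1 -> 0, lock fully released
        System.out.println(lock.isLocked());     // false
    }
}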

4. Fair locks and non-fair locks

Whether ReentrantLock behaves as a fair or a non-fair lock depends on which concrete instance sync points to; the member variable sync is assigned during construction. If it is assigned a NonfairSync instance, the lock is non-fair; if it is assigned a FairSync instance, the lock is fair. With a fair lock, threads acquire the lock in the order in which they requested it. With a non-fair lock, queue jumping is allowed: when a thread requests a non-fair lock and the lock happens to become available at that moment, the thread skips all waiting threads in the queue and acquires the lock directly. Let's first look at how the non-fair lock is acquired.

// Non-fair synchronizer
static final class NonfairSync extends Sync {
    // Implements the parent class's abstract lock method
    final void lock() {
        // Try to set the synchronization state with CAS
        if (compareAndSetState(0, 1)) {
            // The lock was not held, so record the current thread as its owner
            setExclusiveOwnerThread(Thread.currentThread());
        } else {
            // Otherwise the lock is already held; call acquire to let the thread
            // obtain the lock through the synchronization queue
            acquire(1);
        }
    }

    // Attempt to acquire the lock
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

// Acquire the lock (exclusive mode), defined in AbstractQueuedSynchronizer
public final void acquire(int arg) {
    if (!tryAcquire(arg) && acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
        selfInterrupt();
    }
}

You can see that in the lock method of the non-fair lock, the thread first tries to change the synchronization state from 0 to 1 via CAS. This step is effectively an attempt to acquire the lock: if the change succeeds, the thread acquires the lock immediately on arrival, without ever queuing in the synchronization queue. If the change fails, the lock was still held when the thread arrived, so the acquire method is called next. This acquire method is inherited from AbstractQueuedSynchronizer, so let's review it. After entering acquire, the thread first calls tryAcquire to try to acquire the lock. Because NonfairSync overrides tryAcquire and delegates to the nonfairTryAcquire method of its parent class Sync, it is nonfairTryAcquire that actually attempts the acquisition. Let's look at exactly what this method does.

// Non-fair lock acquisition
final boolean nonfairTryAcquire(int acquires) {
    // Get the current thread
    final Thread current = Thread.currentThread();
    // Get the current synchronization state
    int c = getState();
    // If the synchronization state is 0, the lock is not held
    if (c == 0) {
        // Use CAS to update the synchronization state
        if (compareAndSetState(0, acquires)) {
            // Record the current thread as the owner of the lock
            setExclusiveOwnerThread(current);
            return true;
        }
    // Otherwise check whether the lock is held by the current thread
    } else if (current == getExclusiveOwnerThread()) {
        // If so, simply increase the synchronization state (reentrant acquisition)
        int nextc = c + acquires;
        if (nextc < 0) {
            throw new Error("Maximum lock count exceeded");
        }
        setState(nextc);
        return true;
    }
    // The lock is held by another thread; return failure
    return false;
}

The nonfairTryAcquire method belongs to Sync. After entering this method, the thread first reads the synchronization state. If the state is 0, it uses a CAS operation to update it, which is in effect another attempt to acquire the lock. If the state is not 0, the lock is held, so the method checks whether the holding thread is the current thread; if so, it increases the synchronization state by 1 (a reentrant acquisition), otherwise the attempt fails. When the attempt fails, acquire goes on to call the addWaiter method to add the thread to the synchronization queue. To sum up, in non-fair mode a thread tries to acquire the lock twice before entering the synchronization queue; if either attempt succeeds it never enters the queue, otherwise it is enqueued. Next, let's look at how the fair lock is acquired.

// Fair synchronizer
static final class FairSync extends Sync {
    // Implements the parent class's abstract lock method
    final void lock() {
        // Call acquire so the thread obtains the lock through the synchronization queue
        acquire(1);
    }

    // Attempt to acquire the lock
    protected final boolean tryAcquire(int acquires) {
        // Get the current thread
        final Thread current = Thread.currentThread();
        // Get the current synchronization state
        int c = getState();
        // If the synchronization state is 0, the lock is not held
        if (c == 0) {
            // Check whether the synchronization queue has a predecessor node
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                // No predecessor and the state was set successfully, so the lock is acquired
                setExclusiveOwnerThread(current);
                return true;
            }
        // Otherwise check whether the current thread already holds the lock
        } else if (current == getExclusiveOwnerThread()) {
            // If so, simply increase the synchronization state (reentrant acquisition)
            int nextc = c + acquires;
            if (nextc < 0) {
                throw new Error("Maximum lock count exceeded");
            }
            setState(nextc);
            return true;
        }
        // The current thread does not hold the lock; the attempt fails
        return false;
    }
}

When the lock method of the fair lock is called, acquire is invoked directly. As before, acquire first calls the tryAcquire method, here the version overridden by FairSync. Everything else proceeds exactly as in the non-fair case; the only difference is in this tryAcquire step, which additionally checks hasQueuedPredecessors() before attempting the CAS.
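For reference, the hasQueuedPredecessors() method of AQS looks roughly like the sketch below (simplified from the JDK 8 sources): it returns true when the synchronization queue already contains a thread that enqueued before the current one, which is what forces the fair lock to queue up instead of jumping ahead.

// Sketch of AbstractQueuedSynchronizer.hasQueuedPredecessors(), simplified from the JDK 8 sources
public final boolean hasQueuedPredecessors() {
    Node t = tail; // read fields in reverse initialization order
    Node h = head;
    Node s;
    // true if the queue is non-empty and the first waiting thread is not the current thread
    return h != t &&
        ((s = h.next) == null || s.thread != Thread.currentThread());
}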

So why not make every lock fair? After all, fairness sounds like good behavior and unfairness like bad behavior. The reason is that thread suspension and wake-up are expensive operations, and fair locks cause them to happen frequently, especially under heavy contention, whereas non-fair locks reduce such operations and therefore perform better. Moreover, because most threads hold the lock only very briefly and there is a delay before a woken thread actually runs, it is quite possible for thread B to acquire and release the lock in the interval while thread A is being woken up. This is a win-win: thread A acquires the lock no later than it otherwise would, thread B gets to use the lock earlier, and throughput improves.

5. Implementation mechanism of the condition queue

Built-in condition queues have some drawbacks. Each built-in lock can have only one associated condition queue, so multiple threads may wait on the same condition queue for different condition predicates, and every notifyAll call wakes all of the waiting threads. When a thread wakes up, it may find that the condition predicate it is waiting for still does not hold and simply suspend itself again. This causes many useless wake-up and suspend operations, which wastes system resources and reduces performance. If you want to write a concurrent object with multiple condition predicates, or want more control over the condition queue than built-in locks provide, you need to use an explicit Lock and Condition instead of built-in locks and condition queues. A Condition is associated with a Lock, just as a built-in condition queue is associated with a built-in lock; to create a Condition, call the newCondition method on the associated Lock. Let's first look at an example that uses Condition.

public class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();   // condition predicate: notFull
    final Condition notEmpty = lock.newCondition();  // condition predicate: notEmpty
    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    // Producer method
    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();        // the buffer is full, wait on the notFull queue
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();          // wake up a node on the notEmpty queue
        } finally {
            lock.unlock();
        }
    }

    // Consumer method
    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();       // the buffer is empty, wait on the notEmpty queue
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();           // consumption succeeded, wake up a node on the notFull queue
            return x;
        } finally {
            lock.unlock();
        }
    }
}

A single lock object can create multiple condition queues; here two are created, notFull and notEmpty. A thread that calls put when the container is full must block and wait until its condition predicate becomes true (the container is not full) before it is woken to continue. Likewise, a thread that calls take when the container is empty must block until its condition predicate becomes true (the container is not empty). These two kinds of threads wait on different condition predicates, so they block in two different condition queues and are woken at the appropriate time by calling the corresponding Condition object's API. The following is the implementation of the newCondition method.
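Here is a short usage sketch of the BoundedBuffer above (the class name BoundedBufferDemo and the values are invented for illustration): the consumer thread blocks on the notEmpty queue until the producer puts an element and signals it.

public class BoundedBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer buffer = new BoundedBuffer();

        // Consumer: blocks in take() on the notEmpty condition queue until an element is available
        Thread consumer = new Thread(() -> {
            try {
                Object item = buffer.take();
                System.out.println("took: " + item);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        Thread.sleep(100);      // give the consumer time to block (for demonstration only)
        buffer.put("hello");    // put() signals notEmpty, waking the blocked consumer
        consumer.join();
    }
}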

// Create a condition queue
public Condition newCondition() {
    return sync.newCondition();
}

abstract static class Sync extends AbstractQueuedSynchronizer {
    // Create a new ConditionObject
    final ConditionObject newCondition() {
        return new ConditionObject();
    }
}

The condition queues of ReentrantLock are implemented on top of AbstractQueuedSynchronizer: the Condition object returned by newCondition is an instance of ConditionObject, an inner class of AQS, and all operations on a condition queue go through the APIs provided by ConditionObject. For the details of how ConditionObject is implemented, see my article "Java concurrency Series [4]-conditional queue for AbstractQueuedSynchronizer source code analysis"; I won't repeat them here. With that, our analysis of the ReentrantLock source code comes to an end. I hope reading this article helps readers understand and master ReentrantLock.
