What are the reasons for recommending ReentrantLock instead of Synchronized under high concurrency?

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This article introduces why ReentrantLock is recommended over Synchronized under high concurrency. Many people run into this question in practice, so let me walk you through how to think about it. I hope you read it carefully and get something out of it!

Foreword

Synchronized and ReentrantLock should be familiar to everyone as the two most commonly used local locks in Java. In early versions, ReentrantLock's performance was far better than Synchronized's. Java then optimized Synchronized over several releases, and since JDK 1.6 the two perform almost identically; in fact, Synchronized's automatic lock release makes it even more convenient.

When asked in interviews to choose between Synchronized and ReentrantLock, many candidates blurt out Synchronized. Even when I have asked this as an interviewer, few people could explain the choice. Moon's answer is: it depends. (Readers only interested in the title question can skip straight to the end; this is not clickbait.)

Synchronized usage

Using synchronized in Java code is very simple:

1. Put it directly on a method.

2. Put it on a code block.
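The two forms above can be sketched with a minimal example (the class, field, and method names here are illustrative, not from the original article):

```java
public class SyncDemo {
    private int count = 0;
    private final Object mutex = new Object();

    // Form 1: synchronized on the method. For an instance method this
    // locks `this`; for a static method it locks the Class object.
    public synchronized void incrementMethod() {
        count++;
    }

    // Form 2: synchronized on a code block. This locks whatever object
    // you name, and compiles to the monitorenter/monitorexit bytecode pair.
    public void incrementBlock() {
        synchronized (mutex) {
            count++;
        }
    }

    public int get() {
        return count;
    }
}
```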

So what happens to Synchronized code while the program runs?


Under multithreading, a thread first competes for the object's monitor. The monitor belongs uniquely to that object and is effectively a key: the thread that grabs it gains the right to execute the code block.

Threads that fail to grab it block and wait in the monitor's entry queue until the current thread finishes execution and releases the lock.

When the current thread finishes, a waiting thread is notified, leaves the queue, and the whole process repeats.

From the JVM's perspective, the monitorenter and monitorexit instructions mark the entry and exit of the synchronized code.

SynchronousQueue:

(A naming note: what this article originally called SynchronizedQueue matches the behavior of java.util.concurrent.SynchronousQueue, so that name is used below.) SynchronousQueue is a special queue with no storage capability. Its job is to hand elements directly between threads: every insert operation must wait for another thread's remove operation, and every remove operation must wait for another thread's insert. The queue therefore never contains a single element; its capacity is 0, so strictly speaking it is not a container. Because it has no capacity, peek is useless and always returns null: an element exists only at the instant it is handed over.

For example:

When drinking, if the wine is first poured into a decanter and then into the glass, that is a normal queue.

If the wine is poured directly into the glass, that is SynchronousQueue.

This example should be easy to picture: the advantage is direct hand-off, eliminating the third-party intermediate step.
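The hand-off behavior can be observed directly with java.util.concurrent.SynchronousQueue (a small sketch; the class name HandoffDemo and the "wine" string are just for illustration):

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();

        // offer() fails immediately when no consumer is waiting:
        // the queue holds nothing, so there is nowhere to put the element.
        System.out.println("accepted without a taker: " + q.offer("wine")); // false

        // peek() always returns null: a zero-capacity queue has no head.
        System.out.println("peek: " + q.peek()); // null

        // put() blocks until a taker arrives: the direct hand-off.
        Thread taker = new Thread(() -> {
            try {
                System.out.println("took: " + q.take());
            } catch (InterruptedException ignored) { }
        });
        taker.start();
        q.put("wine");   // succeeds once the taker is waiting
        taker.join();
    }
}
```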

Now for the details: the lock upgrade process.

Before JDK 1.6, Synchronized was always a heavyweight lock.

It was heavyweight because every lock acquisition went through the operating system's mutex, which means switching from user mode to kernel mode; when the lock was contended, threads had to block and be woken up by the OS, a very time-consuming operation.

But JDK 1.6 introduced many optimizations at the JVM level, which is what we usually call the lock upgrade process.

Here is the lock upgrade process in brief:

No lock: the object starts out unlocked.

Biased lock: equivalent to putting a label on the object (the owning thread's id is written into the object header). The next time the same thread comes in and finds its own tag, it can proceed without further synchronization.

Spin (lightweight) lock: imagine a toilet with one stall that is occupied; you want to use it, so you pace around and wait until the occupant comes out. The spinning retries use CAS to guarantee atomicity (CAS itself is not covered here).

Heavyweight lock: falls back to an operating-system mutex; other threads block and wait in a queue.

When does each lock upgrade happen?

Biased lock: when a single thread acquires the lock, it upgrades from no lock to a biased lock.

Spin lock: when thread contention appears, the biased lock upgrades to a spin (lightweight) lock. Think of a while (true) loop retrying a CAS.

Heavyweight lock: when contention reaches a certain level (the spin count or spin time exceeds a threshold), the lock is promoted to a heavyweight lock.
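The while (true) spinning in the middle stage can be sketched with a toy CAS loop. This is an illustration of the spinning technique only, not HotSpot's actual lightweight-lock implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spin lock: shows the CAS retry loop, not HotSpot internals.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Spin: keep retrying the atomic compare-and-set until it succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU that we are busy-waiting
        }
    }

    void unlock() {
        locked.set(false);
    }
}
```

Note that spinning burns CPU while it waits, which is exactly why the JVM gives up and inflates to a heavyweight (blocking) lock when contention lasts too long.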

Where is the lock information recorded?

The lock information is recorded in the mark word of the object header, and its contents change as the lock upgrades: the object is marked with binary flag bits, and each value represents a lock state.

Since synchronized has lock upgrade, is there lock downgrade?

This question has a lot to do with the title of this article.

Lock degradation does exist in the HotSpot virtual machine, but only during a stop-the-world (STW) pause, where only the garbage collection thread can observe it. In other words, degradation never happens during normal execution, only during GC.

Keep that answer in mind; we will come back to it. Haha, let's keep going.

The use of ReentrantLock

Using ReentrantLock is also very simple. Unlike Synchronized, you must release the lock manually; to guarantee that the release always happens, it is usually paired with try~finally.
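A minimal sketch of the usual pattern (the Counter class is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();            // acquire outside the try, per the usual idiom
        try {
            count++;            // critical section
        } finally {
            lock.unlock();      // always release, even if the body throws
        }
    }

    public int get() {
        return count;
    }
}
```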

The principle of ReentrantLock

ReentrantLock means a reentrant lock. Whenever we talk about ReentrantLock we have to talk about AQS, because the underlying implementation is built on AQS.

There are two modes of ReentrantLock, one is fair lock and the other is unfair lock.

In fair mode, waiting threads are executed in strict queue order after they are queued.

In unfair mode, a newly arriving thread may jump the queue (barge) ahead of threads that are already waiting.
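The mode is chosen at construction time via ReentrantLock's boolean constructor parameter, which is part of the real API:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // default: unfair mode
        ReentrantLock fair = new ReentrantLock(true);    // true => fair mode

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

Unfair is the default because barging usually gives better throughput: a thread that arrives just as the lock is freed can take it without the cost of waking a queued thread.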

ReentrantLock's own structure is actually very simple, because the main implementation work is delegated to AQS, so let's focus on AQS.

AQS

AQS (AbstractQueuedSynchronizer) can be understood as a framework for building locks.

Simple process understanding:

Fair Lock:

Step 1: read the value of the state field.

If state = 0, the lock is not occupied by any thread; go to step 2.

If state != 0, the lock is occupied by some thread; go to step 3.

Step 2: check whether any threads are waiting in the queue.

If none are waiting, set the owner of the lock to the current thread directly and update state.

If there are, join the queue.

Step 3: check whether the owner of the lock is the current thread.

If so, update the value of state (a reentrant acquisition).

If not, the thread enters the queue and waits.
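The three steps above can be modeled with a deliberately simplified sketch. This is not the real AQS code: it only mirrors the decision logic, using a plain deque for the queue and synchronized for the sketch's own atomicity, and it never parks threads (a failed caller would have to retry tryAcquire itself):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified model of the fair-acquire steps; not the real AQS.
class FairAcquireSketch {
    private int state = 0;          // 0 = free, >0 = reentrant hold count
    private Thread owner;
    private final Deque<Thread> queue = new ArrayDeque<>();

    synchronized boolean tryAcquire() {
        Thread current = Thread.currentThread();
        if (state == 0) {                                   // step 1: lock free
            // step 2: fair check, acquire only if nobody is queued ahead of us
            if (queue.isEmpty() || queue.peek() == current) {
                queue.remove(current);
                state = 1;
                owner = current;
                return true;
            }
            if (!queue.contains(current)) queue.addLast(current);
            return false;
        }
        if (owner == current) {                             // step 3: reentrant
            state++;
            return true;
        }
        if (!queue.contains(current)) queue.addLast(current);
        return false;
    }

    synchronized void release() {
        if (owner != Thread.currentThread())
            throw new IllegalMonitorStateException();
        if (--state == 0) owner = null;
    }
}
```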

Unfair lock:

Step 1: read the value of the state field.

If state = 0, the lock is not held, so the holder is immediately set to the current thread via a CAS operation (this is where queue jumping happens).

If state is not 0, or the CAS fails, the lock is occupied; go to the next step.

Step 2: read the value of state again.

If it is now 0, the previous holder has just released the lock; try to set the holder to the current thread.

If it is not 0, check whether the holder of the lock is the current thread.

If so, do state + 1 and acquire the lock (reentrancy).

If not, enter the queue and wait.

Having read the above, you should now have a clear picture of AQS, so let's talk about the finer details.

AQS uses an int synchronization state (0 means unlocked; for ReentrantLock a positive value is the hold count) and exposes getState, setState, and compareAndSetState operations to read and update it, so that the state is set to a new value only if it currently has the expected value.

When a thread fails to acquire the lock, AQS manages it through a doubly linked synchronization queue: the thread is wrapped in a node and added to the tail of the queue.

The head and tail node fields are declared volatile so that updates to them are visible to other threads; AQS completes enqueue and dequeue operations by modifying the head and tail.
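Paraphrasing the relevant declarations (Node is heavily simplified here; see the JDK's AbstractQueuedSynchronizer source for the real thing):

```java
// Paraphrased sketch of the AQS queue fields; Node is heavily simplified.
class AqsShapeSketch {
    static final class Node {
        volatile Thread thread;       // the waiting thread
        volatile Node prev, next;     // doubly linked queue pointers
    }
    transient volatile Node head;     // volatile => writes visible to all threads
    transient volatile Node tail;     // enqueue = CAS on the tail pointer
    volatile int state;               // the synchronization state itself
}
```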

When acquiring through AQS, it is not always the case that only one thread may hold the lock, so there is a distinction between exclusive mode and shared mode. The ReentrantLock in this article uses exclusive mode: under multithreading, only one thread acquires the lock at a time.

The exclusive-mode flow is relatively simple: whether state is 0 determines whether some thread already holds the lock. If so, the acquiring thread blocks; otherwise it takes the lock and continues with the subsequent code logic.

The shared-mode flow instead checks whether state is greater than 0: if not, the thread blocks; if so, it decrements state with a CAS operation and then continues with the subsequent code logic.
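Shared mode is exactly what Semaphore (also built on AQS) uses, with state counting the remaining permits:

```java
import java.util.concurrent.Semaphore;

public class SharedModeDemo {
    public static void main(String[] args) {
        // A semaphore with 2 permits: the AQS state starts at 2, and each
        // successful acquire CASes it down by one (shared mode).
        Semaphore permits = new Semaphore(2);

        System.out.println(permits.tryAcquire()); // true  (2 -> 1)
        System.out.println(permits.tryAcquire()); // true  (1 -> 0)
        System.out.println(permits.tryAcquire()); // false (no permits left)

        permits.release();                        // 0 -> 1
        System.out.println(permits.tryAcquire()); // true again
    }
}
```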

The difference between ReentrantLock and Synchronized

In fact, the core difference between ReentrantLock and Synchronized is this: Synchronized suits situations with low lock contention, because once its lock upgrade reaches the heavyweight stage there is no way back, and every acquisition then goes through the operating system's mutex. ReentrantLock, on the other hand, copes with high concurrency mainly by parking waiting threads (blocking), which reduces contention and improves throughput. With that, the answer to the question in the title is already clear.

Synchronized is a keyword implemented at the JVM level, while ReentrantLock is implemented in the Java API (java.util.concurrent.locks).

Synchronized is an implicit lock, which releases the lock automatically, and ReentrantLock is an explicit lock, which requires manual release.

ReentrantLock allows the thread waiting for the lock to respond to the interrupt, but synchronized does not. When using synchronized, the waiting thread waits forever and cannot respond to the interrupt.
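The interrupt-response difference can be demonstrated with lockInterruptibly (a sketch; the class and method names are illustrative, but lockInterruptibly is the real API):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptDemo {
    // Returns true if the waiting thread's lockInterruptibly() call was
    // interrupted while blocked. A thread blocked on synchronized has no
    // equivalent: it cannot be made to give up waiting.
    static boolean waiterGetsInterrupted() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        AtomicBoolean interrupted = new AtomicBoolean(false);
        lock.lock();                             // this thread holds the lock
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();        // blocks, but interruptibly
                lock.unlock();
            } catch (InterruptedException e) {
                interrupted.set(true);           // the wait was cancelled
            }
        });
        waiter.start();
        Thread.sleep(100);                       // let the waiter block
        waiter.interrupt();                      // cancel its wait
        waiter.join();
        lock.unlock();
        return interrupted.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waiterGetsInterrupted()); // true
    }
}
```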

ReentrantLock can get the lock state, but synchronized cannot.
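Querying the lock state uses real ReentrantLock methods (isLocked, isHeldByCurrentThread, getHoldCount):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockStateDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();   // reentrant: acquired twice by the same thread

        System.out.println(lock.isLocked());              // true
        System.out.println(lock.isHeldByCurrentThread()); // true
        System.out.println(lock.getHoldCount());          // 2

        lock.unlock();
        lock.unlock();
        System.out.println(lock.isLocked());              // false
    }
}
```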

Now, the answer to the title question.

It lies in the first item of the list above, which is also the core difference: under normal circumstances Synchronized cannot be downgraded once it has been upgraded to a heavyweight lock, whereas ReentrantLock improves performance under high concurrency by blocking (parking) waiting threads, a design that better supports heavy multithreaded contention.

This is the end of "what are the reasons for recommending ReentrantLock instead of Synchronized under high concurrency". Thank you for reading!
