What are the classic JVM locks in java

2025-01-15 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 report --

This article introduces the classic JVM locks in Java. Many people run into exactly these situations in real-world cases, so let this piece walk you through how to handle them. I hope you read it carefully and get something out of it!

synchronized

The synchronized keyword is the classic lock, and the one we use most often. Before JDK 1.6, synchronized was a heavyweight lock, but with successive JDK upgrades and continuous optimization it has become much lighter; in some scenarios its performance is now even better than that of a lightweight lock. In a method or code block marked with the synchronized keyword, only one thread at a time is allowed to enter the protected code segment, which prevents multiple threads from modifying the same data simultaneously. The synchronized lock has the following characteristics:
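As a minimal sketch of this behavior (the class and names below are ours, not from the JDK), a synchronized block serializing updates to a shared counter might look like this:

```java
// Minimal sketch: only one thread at a time may enter the synchronized block,
// so concurrent increments to the shared counter cannot be lost.
public class SyncCounter {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {   // one thread at a time past this point
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // always 40000 with the lock in place
    }
}
```

Without the synchronized block, `count++` is a read-modify-write that two threads can interleave, and the final total would usually come out below 40000.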

A. Lock upgrading

Up to and including JDK 1.5, the underlying implementation of synchronized was heavyweight, which is why it has long been called a "heavyweight lock". Since then synchronized has been optimized: it has become lighter, and the implementation principle is a process of lock upgrading. Let's first look at how synchronized is implemented after 1.5. To explain the locking principle of synchronized, we first have to look at the layout of Java objects in memory, which is as follows:

As shown in the figure above, after an object is created, its storage layout in memory in the JVM (HotSpot) can be divided into three parts:

(1) Object header area

The information stored here consists of two parts:

Object run-time data (Mark Word)

Stores the hashCode, GC generational age, lock type flag, biased-lock thread ID, CAS pointer to a thread's LockRecord, and so on. The synchronized lock mechanism is closely tied to this part (the Mark Word). The lock state is represented by the lowest three bits of the Mark Word: one is the biased-lock bit, and the other two are the ordinary lock bits.

Object type pointer (Class Pointer)

The object's pointer to its class metadata, through which the JVM determines which Class this object is an instance of.

(2) Instance data area

Stores the really useful information of the object, such as the contents of all the object's fields.

(3) Alignment padding area

The HotSpot JVM stipulates that the starting address of an object must be an integer multiple of 8 bytes. In other words, a 64-bit OS reads data in multiples of 64 bits, i.e., 8 bytes, so to read objects efficiently HotSpot performs "alignment": if the actual memory size of an object is not an integer multiple of 8 bytes, it is "padded" up to the next multiple of 8. The size of the alignment padding area is therefore not fixed.
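The padding rule above is simple round-up arithmetic. A tiny sketch (the class and method names are ours, purely for illustration; this is not a JVM API):

```java
// Sketch of HotSpot's 8-byte alignment rule: any raw object size is padded
// up to the next multiple of 8 bytes.
public class AlignmentDemo {
    static final int ALIGNMENT = 8;

    static int alignUp(int rawSize) {
        // Round rawSize up to the nearest multiple of 8 using bit masking
        return (rawSize + ALIGNMENT - 1) & ~(ALIGNMENT - 1);
    }

    public static void main(String[] args) {
        System.out.println(alignUp(17)); // 17 bytes are padded to 24
        System.out.println(alignUp(24)); // already aligned, no padding added
    }
}
```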

When a thread enters synchronized and tries to acquire the lock, the synchronized lock upgrade process is as follows:

As shown in the figure above, synchronized locks upgrade in the order: biased lock -> lightweight lock -> heavyweight lock. Each upgrade step is triggered as follows:

Biased lock. In JDK 1.8 a lightweight lock is actually used by default, but if -XX:BiasedLockingStartupDelay=0 is set, a biased lock is applied immediately when an Object is synchronized on. In the biased-lock state, the Mark Word records the ID of the current thread.

Upgrade to lightweight lock. When the next thread competes for the biased lock, it first checks whether the thread ID saved in the Mark Word equals its own thread ID. If not, the biased lock is revoked immediately and upgraded to a lightweight lock. Each thread generates a LockRecord (LR) in its own thread stack, and then each thread tries, via a CAS (spin) operation, to set the Mark Word in the lock object's header to a pointer to its own LR. The CAS operation performed here by synchronized is implemented natively by the C++ code of HotSpot's bytecodeInterpreter.cpp file; if you are interested, you can dig deeper.

Upgrade to heavyweight lock. If lock competition intensifies (for example, a thread's spin count, or the number of spinning threads, exceeds a certain threshold; since JDK 1.6 the JVM controls and adapts these rules), the lock upgrades to a heavyweight lock. At this point the JVM applies to the operating system for resources and suspends the thread; the thread enters a waiting queue in the OS kernel, waits to be scheduled by the OS, and is then mapped back to user mode. With a heavyweight lock, the transitions between kernel mode and user mode take considerable time, which is one of the reasons it is "heavy".

B. Reentrancy

synchronized has an internal lock-counting mechanism that enforces atomicity, so it is a reentrant lock. When a thread inside a synchronized method calls another synchronized method of the same object, that is, when a thread that has obtained an object lock requests that object lock again, it can always get the lock. In Java, threads acquire object locks on a per-thread basis, not a per-call basis. The lock's holding thread and counter are recorded in the Mark Word of the lock object's header: when a thread's request succeeds, the JVM notes the thread holding the lock and sets the counter to 1. If another thread requests the lock at that point, it must wait; if the thread already holding the lock requests it again, it gets the lock again and the counter is incremented. When a thread exits a synchronized method/block, the counter is decremented, and when the counter reaches 0 the lock is released.
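A small sketch of this reentrancy (class and method names are ours): outer() holds the object's monitor and calls inner(), which requests the same monitor; the holding thread re-enters freely while the counter goes from 1 to 2 and back.

```java
// Sketch of synchronized reentrancy: a thread holding an object's lock can
// enter another synchronized method of the same object without blocking.
public class ReentrantDemo {
    public synchronized String outer() {
        return "outer->" + inner();   // already holds this object's lock
    }

    public synchronized String inner() {
        return "inner";               // re-acquires the same lock, counter 1 -> 2
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints "outer->inner"
    }
}
```

If synchronized were not reentrant, the call to inner() would deadlock against the lock the thread itself already holds.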

C. Pessimistic lock (mutex, exclusive lock)

synchronized is a pessimistic (exclusive) lock: once the current thread acquires it, all other threads that need the lock must wait, and only when the holder releases the lock can they resume competing for it.

ReentrantLock

As its name suggests, ReentrantLock is a reentrant lock. In this respect it is the same as synchronized, but its implementation principle is very different: it is based on the classic AQS (AbstractQueuedSynchronizer). AQS is built on volatile and CAS: it maintains a volatile variable named state that counts lock reentries, and locking and unlocking both revolve around this variable. ReentrantLock also provides some features that synchronized does not have, so it is easier to use than synchronized. The AQS model is shown below:
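To make the AQS model concrete, here is a minimal non-reentrant mutex built on AbstractQueuedSynchronizer (the Mutex class itself is our sketch, not JDK code; the overridden methods and state accessors are the real AQS API). All lock state lives in AQS's volatile state field, updated by CAS:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Sketch of the AQS model: a minimal non-reentrant mutex. state == 0 means
// unlocked, state == 1 means locked; acquisition is a CAS on state.
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state 0 -> 1: succeed only if the lock is currently free
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); // volatile write: visible to the next acquirer
            return true;
        }

        boolean isLocked() { return getState() != 0; }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isLocked(); }
}
```

AQS supplies the queueing, blocking, and wake-up machinery; a concrete lock such as ReentrantLock only has to define what state means and how tryAcquire/tryRelease manipulate it.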

ReentrantLock has the following characteristics:

a. Reentrancy. ReentrantLock, like the synchronized keyword, is a reentrant lock, but the two implementations differ slightly. ReentrantLock uses the AQS state to determine whether the resource is locked: when the same thread re-locks, state is incremented by 1; when the same thread re-unlocks, state is decremented by 1 (unlock must be performed by the current exclusive owner thread, otherwise an exception is thrown); when state reaches 0, the lock is fully released.

b. Manual locking and unlocking. The synchronized keyword locks and unlocks automatically, while ReentrantLock requires the lock() and unlock() methods, used with try/finally blocks, to lock and unlock manually.

c. Lock timeout support. The synchronized keyword cannot set a lock timeout: if a deadlock occurs inside a locked region, the other threads block forever. ReentrantLock provides the tryLock method, which lets a thread set a timeout for acquiring the lock; if it times out, the thread skips the critical section and does nothing, which helps avoid deadlock.

d. Fair/unfair lock support. The synchronized keyword is an unfair lock: whichever thread grabs the lock first executes first. ReentrantLock's constructor allows you to pass true/false to get a fair or unfair lock. If set to true, threads follow a "first come, first served" rule: each acquisition constructs a thread Node and queues it at the tail of a doubly linked list, waiting for the preceding Node to release the lock resource.

e. Interruptible lock. The lockInterruptibly() method of ReentrantLock enables a thread to respond to interruption while it is blocked. For example, if thread T1 acquires a reentrant lock through lockInterruptibly() and executes a long-running task, another thread can immediately interrupt T1's wait through the interrupt() method and obtain the reentrant lock held by T1. By contrast, a thread blocked on ReentrantLock's lock() method or on synchronized does not respond to other threads' interrupt() calls until the lock is actively released.
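A short sketch of points b, c, and d above (the wrapper class and its method names are ours): manual lock()/unlock() in try/finally, tryLock() with a timeout, and a fair lock chosen via the constructor flag.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: manual locking with try/finally, plus a timed tryLock that gives up
// instead of blocking forever.
public class ReentrantLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair lock (point d)
    private int value = 0;

    public void increment() {
        lock.lock();            // manual locking (point b)
        try {
            value++;
        } finally {
            lock.unlock();      // always unlock in finally
        }
    }

    public boolean tryIncrement() throws InterruptedException {
        // Give up after 100 ms instead of blocking forever (point c)
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                value++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // timed out; skip the critical section
    }

    public int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

The try/finally shape is the essential discipline: unlike synchronized, ReentrantLock will stay locked forever if an exception skips the unlock() call.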

ReentrantReadWriteLock

ReentrantReadWriteLock (a read-write lock) is actually two locks: one is the WriteLock (write lock) and the other is the ReadLock (read lock). The rules of a read-write lock are: read-read is not mutually exclusive, read-write is mutually exclusive, and write-write is mutually exclusive. In many practical scenarios, read operations are far more frequent than write operations. If an ordinary exclusive lock is used for concurrency control, reads exclude reads, reads exclude writes, and writes exclude writes, which is inefficient. The purpose of the read-write lock is to optimize such scenarios. In general, the inefficiency of an exclusive lock comes from the thread context switching caused by fierce competition for the critical section under high concurrency. Therefore, when concurrency is not very high, a read-write lock may actually be less efficient than an exclusive lock, because it must maintain the extra read-lock state; choose according to the actual situation. ReentrantReadWriteLock is also based on AQS. What distinguishes it from ReentrantLock is that it has both shared-lock and exclusive-lock properties. Acquiring and releasing in a read-write lock are likewise based on Sync (which inherits from AQS) and are implemented mainly with the state field in AQS and the waitStatus variable in each queue node. The main difference between a read-write lock and an ordinary mutex is that the read-lock and write-lock states must be recorded separately, and the two kinds of acquisition must be handled differently in the waiting queue. In ReentrantReadWriteLock, the int state in AQS is split into the high 16 bits and the low 16 bits, recording the read-lock and write-lock states respectively, as shown in the following figure:
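A minimal usage sketch of the read-mostly scenario described above (the cache class and its names are ours): many threads may hold the ReadLock concurrently, while the WriteLock is exclusive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: a read-mostly cache. Readers share the ReadLock; a writer takes the
// exclusive WriteLock, blocking all readers and other writers.
public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();       // read-read: not mutually exclusive
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();      // write: excludes all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```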

A. WriteLock (write lock) is a pessimistic lock (exclusive lock, mutex). Its reentrant count is obtained by computing state & ((1 << 16) - 1), i.e., masking off the high 16 bits, so the low 16 bits of state record the reentrant count of the write lock.

B. ReadLock (read lock) is a shared lock. Its count is obtained by computing state >>> 16, an unsigned right shift by 16 bits that fills with zeros, so the high 16 bits of state record the total hold count of the read lock. The process of acquiring the read lock is a little more complicated than for the write lock. First, if the write-lock count is not 0 and the current thread does not own the exclusive lock, fail and return directly. Otherwise, determine whether the reading thread needs to block and whether the read-lock count is below the maximum, and try to CAS the state up by one read unit. If there were no read locks before, set the first reading thread firstReader and its firstReaderHoldCount; if the current thread is the first reading thread, increment firstReaderHoldCount; otherwise update the value of the HoldCounter object corresponding to the current thread, so that after the update succeeds the current thread's reentry count is recorded in its own copy of readHolds (a ThreadLocal). This exists to implement the getReadHoldCount() method added in JDK 1.6, which returns how many times the current thread has re-entered the shared lock (state only records the total across all threads), and it complicates the code considerably. The principle is still simple: if there is only one reader thread, there is no need for ThreadLocal, and reentries go directly into the member variable firstReaderHoldCount; only when a second reader thread appears is the ThreadLocal variable readHolds needed, with each thread keeping its own copy holding its own reentry count.
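The 16/16 split can be sketched with the same constants the JDK source uses (SHARED_SHIFT, SHARED_UNIT, MAX_COUNT, EXCLUSIVE_MASK); the demo class itself is ours, and the combined state below is built only to illustrate the arithmetic, not to represent a reachable lock state:

```java
// Sketch of how ReentrantReadWriteLock packs both counts into one int state.
// Constants mirror those in java.util.concurrent.locks.ReentrantReadWriteLock.Sync.
public class RwStateDemo {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);     // +1 read hold
    static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1; // 65535
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    // Read-lock holds: the high 16 bits of state
    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
    // Write-lock reentrancy: the low 16 bits of state
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    public static void main(String[] args) {
        int state = 0;
        state += SHARED_UNIT;   // one read hold
        state += SHARED_UNIT;   // a second read hold
        state += 1;             // one write hold (arithmetic illustration only)
        System.out.println(sharedCount(state));    // 2
        System.out.println(exclusiveCount(state)); // 1
    }
}
```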

Source code for acquiring the read lock (JDK 8):

```java
/**
 * Acquires the read lock.
 *
 * Acquires the read lock if the write lock is not held by
 * another thread and returns immediately.
 *
 * If the write lock is held by another thread then
 * the current thread becomes disabled for thread scheduling
 * purposes and lies dormant until the read lock has been acquired.
 */
public void lock() {
    sync.acquireShared(1);
}

/**
 * Acquires in shared mode, ignoring interrupts. If a single call to
 * tryAcquireShared updates the state successfully, this returns directly,
 * meaning the lock grab succeeded. Otherwise the thread enters the
 * synchronization queue and waits, possibly repeatedly blocking and
 * unblocking, invoking tryAcquireShared (whose reader-blocking policy
 * differs between NonfairSync, the unfair lock, and FairSync, the fair
 * lock) until it succeeds in CASing the state.
 */
public final void acquireShared(int arg) {
    if (tryAcquireShared(arg) < 0)
        doAcquireShared(arg);
}

protected final int tryAcquireShared(int unused) {
    /*
     * Walkthrough:
     * 1. If another thread already holds the write lock, then by the
     *    "read-write mutual exclusion" rule the grab fails; return -1.
     * 2. Otherwise, this thread is eligible for the lock with respect to
     *    state, so check whether it should block because of the queue
     *    policy. If not, try to grant the lock by CASing state and
     *    updating the hold count. Note that this step does not check for
     *    reentrant acquires (a read lock is a shared lock and supports
     *    reentrancy by nature); that is postponed to the full version to
     *    avoid checking hold counts in the more typical non-reentrant case.
     * 3. If step 2 fails, either because the CAS on state failed or the
     *    reentrant count saturated, fall through to fullTryAcquireShared
     *    and loop there until the lock is grabbed.
     */
    // The thread currently trying to acquire the read lock
    Thread current = Thread.currentThread();
    // Read the read-write lock state
    int c = getState();
    // If some thread holds the write lock and it is not the current thread, fail
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current)
        return -1;
    // Current read-lock hold count (high 16 bits of state)
    int r = sharedCount(c);
    // If the reader should not block, the count is below the maximum, and the
    // CAS incrementing the read count succeeds, record this thread's holds
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        if (r == 0) {
            // No thread held the read lock yet: record this thread as the
            // first reader with a hold count of 1
            firstReader = current;
            firstReaderHoldCount = 1;
        } else if (firstReader == current) {
            // The first reader re-enters: bump its hold count
            firstReaderHoldCount++;
        } else {
            // At least two threads share the read lock: track this thread's
            // reentrancy in its HoldCounter, incrementing its count by 1
            HoldCounter rh = cachedHoldCounter;
            if (rh == null || rh.tid != getThreadId(current))
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0)
                readHolds.set(rh);
            rh.count++;
        }
        return 1;
    }
    // None of the fast-path conditions held: retry in a loop
    return fullTryAcquireShared(current);
}

/**
 * Full version of acquire for reads, that handles CAS misses
 * and reentrant reads not dealt with in tryAcquireShared.
 */
final int fullTryAcquireShared(Thread current) {
    /*
     * This code is in part redundant with that in tryAcquireShared but is
     * simpler overall by not complicating tryAcquireShared with
     * interactions between retries and lazily reading hold counts.
     */
    HoldCounter rh = null;
    for (;;) {
        // Read the read-write lock state
        int c = getState();
        if (exclusiveCount(c) != 0) {
            // Another thread holds the write lock: fail
            if (getExclusiveOwnerThread() != current)
                return -1;
            // else we hold the exclusive lock; blocking here
            // would cause deadlock.
        } else if (readerShouldBlock()) {
            // No thread holds the write lock, but the queue policy says
            // readers should block. Make sure we're not acquiring the read
            // lock reentrantly.
            if (firstReader == current) {
                // assert firstReaderHoldCount > 0;
            } else {
                if (rh == null) {
                    rh = cachedHoldCounter;
                    if (rh == null || rh.tid != getThreadId(current)) {
                        rh = readHolds.get();
                        if (rh.count == 0)
                            readHolds.remove();
                    }
                }
                if (rh.count == 0)
                    return -1;
            }
        }
        // Below: no thread holds the write lock and this thread need not block.
        // If the reentrant count already equals the maximum, throw an error
        if (sharedCount(c) == MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // If the CAS incrementing the read count by 1 succeeds, increment the
        // current thread's shared hold count by 1 and return success
        if (compareAndSetState(c, c + SHARED_UNIT)) {
            if (sharedCount(c) == 0) {
                firstReader = current;
                firstReaderHoldCount = 1;
            } else if (firstReader == current) {
                firstReaderHoldCount++;
            } else {
                if (rh == null)
                    rh = cachedHoldCounter;
                if (rh == null || rh.tid != getThreadId(current))
                    rh = readHolds.get();
                else if (rh.count == 0)
                    readHolds.set(rh);
                rh.count++;
                cachedHoldCounter = rh; // cache for release
            }
            return 1;
        }
    }
}
```

Source code for releasing the read lock (JDK 8):

```java
/**
 * Releases in shared mode. Implemented by unblocking one or more
 * threads if tryReleaseShared returns true.
 */
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) { // try to decrement the shared hold count
        doReleaseShared();       // actually release: wake up successors
        return true;
    }
    return false;
}
```

The tryReleaseShared method is the reader thread releasing the lock. It first determines whether the current thread is the first reader thread, firstReader. If so, and the first reader's hold count firstReaderHoldCount is 1, firstReader is set to null; otherwise firstReaderHoldCount is decremented by 1. If the current thread is not the first reader thread, the cached counter (the counter of the last reader thread to act) is fetched first; if that counter is null or its tid does not equal the current thread's tid, the current thread's own counter is fetched. If the counter's count is less than or equal to 1, the current thread's counter is removed, and if the count is less than or equal to 0 an exception is thrown; the count is then decremented. In any case the method finally enters a retry loop that CASes state down by one read unit, which guarantees the state update succeeds.

```java
protected final boolean tryReleaseShared(int unused) {
    // Get the current thread
    Thread current = Thread.currentThread();
    if (firstReader == current) {
        // The current thread is the first reader thread
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1)
            firstReader = null;         // its last hold: clear the first reader
        else
            firstReaderHoldCount--;     // otherwise just drop one hold
    } else {
        // The current thread is not the first reader: get the cached counter
        HoldCounter rh = cachedHoldCounter;
        if (rh == null || rh.tid != getThreadId(current))
            // Cache miss: get the counter for the current thread
            rh = readHolds.get();
        int count = rh.count;
        if (count <= 1) {
            readHolds.remove();
            if (count <= 0)
                throw unmatchedUnlockException();
        }
        --rh.count;
    }
    for (;;) {
        int c = getState();
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Releasing the read lock has no effect on other readers, but
            // if both read and write locks are now free, waiting writers
            // may proceed.
            return nextc == 0;
    }
}
```
