This article explains how AQS works, starting from the underlying concurrency concepts and then walking through the ReentrantLock source code. The explanation aims to be simple, clear, and easy to follow.
Lock principles: semaphore vs. monitor
In concurrent programming there are two core problems: mutual exclusion and synchronization. Mutual exclusion means that only one thread is allowed to access a shared resource at a time; synchronization is about how threads communicate and cooperate with each other. In general, both problems can be solved with semaphores and monitors.
Semaphore
A semaphore (Semaphore) is a common inter-process communication mechanism provided by the operating system. It is mainly used to coordinate concurrent access to shared resources, and the operating system guarantees that semaphore operations are atomic. How does it work?
A semaphore consists of a shared integer variable S and two atomic operations, P and V. S can only be changed through the P and V operations.
P operation: request a resource. S is decremented by 1; if S < 0, no resource is available, and the thread must enter the waiting queue (synchronization queue) and wait.

V operation: release a resource. S is incremented by 1; if S <= 0 afterwards, there are threads in the waiting queue, and one of them needs to be woken up.

The schematic is shown below.

The introduction of the semaphore mechanism solves process synchronization and mutual exclusion, but the many semaphore operations scattered across the processes are hard to manage and may even lead to deadlock. For example, in the producer-consumer problem, swapping the P and V operations can deadlock (see the reference links at the end). Moreover, the more conditions there are, the more semaphores are needed, and the ordering between them must be handled ever more carefully, otherwise deadlock is easy to cause. Because of the pitfalls semaphores bring to programming, a more developer-friendly concurrent programming model was proposed: the monitor.

Monitor

Dijkstra proposed in 1971 that all synchronization operations of processes on a certain critical resource be centralized into a so-called "secretary" process. Any process that wants to access the critical resource must first report to the secretary, and the secretary enforces mutually exclusive use of the resource among the processes. This mechanism is the monitor.

The monitor is a concurrent programming model that improves on the semaphore mechanism. It removes the trouble of pairing the P and V operations around the critical section by gathering the paired P/V operations in one place, which greatly lowers the cost of use and understanding.

A monitor consists of four parts:

Shared variables inside the monitor.
Condition variables inside the monitor.
Processes executing in parallel inside the monitor.
Statements that set initial values for the shared data local to the monitor.

In other words, a monitor is an object monitor: any thread that wants to access the resource (the shared variables) has to queue up to enter the monitored scope. Once inside, it is checked; if the condition is not met, it must keep waiting until it is notified, and then re-enter the monitor.

Note that semaphores and monitors are equivalent: a semaphore can implement a monitor and a monitor can implement a semaphore; they only differ in form, and the monitor is friendlier to developers. The difference is that the monitor, to remove the pairing trouble of P/V operations around the critical section, gathers the paired P/V operations together and introduces the concept of condition variables, which makes thread synchronization under multiple conditions much simpler.

How should we understand the monitor's entry waiting queue, shared variables and condition variables? Technical concepts are sometimes easier to grasp through a real-life analogy, so take a hospital visit as an example. A normal visit goes like this:

The patient registers and then waits in the waiting room to be called.
When the patient's number is called, they enter the consultation room.
During the consultation there are two cases: either the doctor quickly determines the illness, makes the diagnosis, and then calls in the next patient; or the doctor cannot determine the cause and the patient needs a blood test / CT scan first, so the patient goes off to get tested.
After the blood test / CT scan, the patient takes a new number and waits to be called again (enters the entry waiting queue).
When the number comes up, the patient takes the blood test / CT report back to the doctor.

The whole flow is shown below.

So how does the monitor solve mutual exclusion and synchronization?

First, mutual exclusion. In the story above, the doctor is the shared resource (the shared variable), the consultation room is the critical section, and the patients are threads. Any patient who wants to enter the critical section must first acquire the shared resource (the doctor); the entrance lets only one thread through at a time. While the shared resource is held, any other thread that wants it has to go to the waiting queue; once the thread holding the shared resource releases it, the threads in the waiting queue can compete for it. This solves mutual exclusion, so essentially the monitor guarantees mutual exclusion by encapsulating the shared resource together with the thread-safe operations that acquire and release it.

Now synchronization, which is implemented through condition variables and their waiting queues. There are two cases:

If the patient needs no blood test / CT scan, the doctor finishes the diagnosis, releases the shared resource (unlocks), and notifies (notify, notifyAll) the next patient in the entry waiting queue, who can then see the doctor.

If the patient does need a blood test / CT scan, they go to the test queue (the condition queue) and at the same time release the shared variable (the doctor), notifying the other patients (threads) in the entry waiting queue to acquire it. A thread that has obtained the permit and finished its critical-section work wakes up a thread in the condition variable's waiting queue and moves it to the entry waiting queue; once that thread acquires the shared variable (the doctor), it can enter the entrance (the critical section).

In Java, most locks are implemented on top of the monitor. Take the familiar built-in synchronized lock as an example; its implementation principle is shown below.

You can see that the synchronized lock is also based on the monitor, except that it has one and only one condition variable (the lock object itself). This is one of the reasons the JDK introduced the Lock interface: Lock supports multiple condition variables.

With this analogy you should have a fairly clear picture of how a monitor works. Why spend so much effort on monitors? First, the monitor is a master key for solving concurrency problems; second, AQS is an implementation of the monitor model in the Java concurrency package, so understanding monitors helps a lot in understanding AQS. Let's now see how AQS works.

AQS implementation principle

AQS is short for AbstractQueuedSynchronizer, a framework for building locks and synchronizers. It maintains a shared resource, state, and a FIFO waiting queue (the entry waiting queue of the monitor above), and uses CAS underneath to guarantee that operations are atomic.

The main idea of implementing a lock with AQS, taking an exclusive lock as an example (the resource can be held by only one thread at a time), is as follows: state is initialized to 0. Under contention, a thread must first acquire state before executing critical-section code; when a thread acquires it successfully, state becomes 1, and any other thread that then tries to acquire fails because the shared resource is already held, so it goes to the FIFO waiting queue. When the thread holding state finishes the critical section and releases the resource (state is decremented back to 0), it wakes up the next waiting thread in the FIFO queue (the node after head) so that it can acquire state.

Because state is shared between threads, it must be declared volatile to guarantee visibility. volatile guarantees visibility but not atomicity, so AQS also provides atomic operations on state to guarantee thread safety. In addition, the FIFO queue in AQS (a CLH-style queue) is implemented as a doubly linked list represented by the head and tail nodes: the head node stands for the thread currently holding the resource, while the other nodes, unable to get the lock for the moment, queue up in order waiting for it to be released.

With that, the following definition of AQS is easy to understand:

public abstract class AbstractQueuedSynchronizer
    extends AbstractOwnableSynchronizer
    implements java.io.Serializable {

    // head and tail of the doubly linked list, i.e. the entry waiting queue
    private transient volatile Node head;
    private transient volatile Node tail;

    // the shared variable state
    private volatile int state;

    // CAS acquire/release of state, so the lock can be acquired thread-safely
    protected final boolean compareAndSetState(int expect, int update) {
        // See below for intrinsics setup to support this
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
    }

    // ...
}
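To make the idea concrete, here is a minimal sketch (not from this article) of a non-reentrant exclusive lock built on AQS, along the lines of the example in the AbstractQueuedSynchronizer Javadoc; the class name Mutex is purely illustrative:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant exclusive lock: state 0 means unlocked, 1 means locked.
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // try to CAS state from 0 to 1; on success, record the owner thread
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); // no CAS needed: only the owner thread can reach this point
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // on failure, enqueue and park
    public void unlock() { sync.release(1); }   // wake up the next waiting node
}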
AQS source code analysis

ReentrantLock is one of the locks we use most often, and it is also built on AQS, so next we will analyze the implementation of ReentrantLock to get to the bottom of AQS. The article uses diagrams along with the text; while reading, keep comparing with the hospital example above, which should help understanding.

First of all, we need to know that ReentrantLock is an exclusive lock and that it has both a fair and an unfair mode. So what are exclusive and shared modes, and what are fair and unfair locks?

The counterpart of the exclusive lock is the shared lock. What is the difference between the two?

Exclusive lock: other threads can compete for the lock only after the holding thread releases it, and one and only one thread can win the competition (there is only one doctor, who sees only one patient at a time).

Shared lock: the shared resource can be held by several threads at the same time, until the shared resource is used up (several doctors can see several patients). Common examples are the read lock of ReadWriteLock and CountDownLatch. The differences between the two are shown below.

What are fair and unfair locks? Sticking with the hospital example: with a fair lock, everyone takes a number and honestly waits in the waiting room to be called in first-come-first-served order. With an unfair lock, a newly arrived patient (thread) is pushy: instead of taking a number and queuing, it goes straight for the doctor and tries to grab the slot (not necessarily successfully).
In this article we will focus on the source code of the exclusive, unfair mode, and we will not analyze the shared mode or Condition: the principles are similar, so once you understand the exclusive lock, analyzing the shared mode and Condition is not difficult.
First, let's take a look at how to use ReentrantLock.
// 1. Initialize the reentrant lock
private ReentrantLock lock = new ReentrantLock();

public void run() {
    // lock
    lock.lock();
    try {
        // 2. Execute the critical-section code
    } finally {
        // 3. Unlock
        lock.unlock();
    }
}
The first step is to initialize the reentrant lock. Looking at the constructor, you can see that the unfair mode is used by default:
public ReentrantLock() {
    sync = new NonfairSync();
}
Of course, you can also ask for a fair lock explicitly with the following constructor:
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
Note: FairSync and NonfairSync are inner classes of ReentrantLock that implement the fair and unfair modes respectively. ReentrantLock's lock and unlock ultimately delegate to the lock and unlock methods of FairSync or NonfairSync.
The relationships of several classes are as follows:
Let's first analyze the implementation of the unfair lock (NonfairSync). Back to the second step of the sample code above: locking. Since the unfair mode is the default, let's see how an unfair lock is acquired.
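For reference, NonfairSync's lock method in the JDK 8 source looks roughly like this:

final void lock() {
    // fast path: try to CAS state from 0 to 1 straight away (this is what makes it unfair)
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        // slow path: fall back to the AQS acquire template method
        acquire(1);
}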
You can see that the lock method has two main steps
Use CAS to try to grab the state resource: if CAS successfully sets state to 1, the lock has been acquired, and the thread that now owns state is recorded with setExclusiveOwnerThread(Thread.currentThread()).
If the CAS from 0 to 1 fails (meaning the lock could not be acquired), execute acquire(1), a method provided by AQS, shown below:
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
tryAcquire analysis
acquire first calls tryAcquire to try to grab state; if that succeeds, the rest is skipped. If it fails, acquireQueued is executed to add the thread to the CLH waiting queue.
Let's look at tryAcquire first. It is a template method hook of AQS that is ultimately implemented by the AQS subclass (Sync). Since the unfair locking logic is being executed here, the code that runs is the following:
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // c == 0 means the resource is free (the lock has been released), so try CAS to grab it
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        // the lock is already held by this very thread, which is acquiring it again:
        // bump the acquisition count, which also proves that ReentrantLock is reentrant
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

From this code we can see that acquiring the lock falls into two cases:

When state is 0, the lock has been released and is up for grabs, so CAS is used to try to acquire it again. If the CAS succeeds, the thread has won the lock, and setExclusiveOwnerThread(current) records the thread that now holds it. Seeing the CAS here, it should not be hard to understand why the current implementation is an unfair lock: both the threads already in the queue and a brand-new thread can CAS for the lock, and the newcomer does not have to queue at all.

If state is not 0, some thread already holds the lock. If the current thread is still the thread that holds it (current == getExclusiveOwnerThread() is true), the thread has acquired the lock once more (a reentrant lock), so state is updated to record how many times the lock has been re-entered. The setState call here does not need CAS, because the lock is held by the current thread and no other thread has a chance to reach this code, so updating state here is thread-safe.

Suppose state = 0, i.e. the lock is free, and three threads T1, T2, T3 compete for it. Assuming T1 wins, there are two situations:

1. T1 acquires the lock for the first time.
2. T1 acquires the lock again, and state is incremented to indicate the lock has been entered twice. If T1 keeps successfully re-acquiring the lock, state keeps accumulating.
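For contrast (the fair path is not analyzed further in this article), the fair version of tryAcquire differs only in that, when state is 0, it first checks hasQueuedPredecessors() so that a newcomer cannot jump ahead of threads already waiting in the queue; roughly, in the JDK 8 source:

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // the only difference from the unfair version: give way to queued predecessors
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}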
acquireQueued analysis
If tryAcquire(arg) fails, meaning the lock could not be acquired, the acquireQueued method is executed to add the thread to the FIFO waiting queue.
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
So let's look at the execution logic of acquireQueued. First, addWaiter(Node.EXCLUSIVE) is called to enqueue a Node wrapping the current thread; Node.EXCLUSIVE marks the node as being in exclusive mode.
Let's take a look at how addWaiter is implemented.
private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode);
    // if the tail node is not null, enqueue the thread that failed to get the lock with CAS
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    // if the tail is null (or the CAS above failed), fall back to the enq method
    enq(node);
    return node;
}
This logic is fairly clear. The first step is to read the tail of the FIFO queue; if the tail exists, CAS is used to append the waiting thread to the queue. If the tail is null (or the CAS fails), the enq method is executed.
private Node enq(final Node node) {
    for (;;) {
        Node t = tail;
        if (t == null) {
            // the tail is null, so the FIFO queue has not been initialized yet:
            // initialize the head node first, then let tail point to head
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            // the tail is not null: append the waiting thread to the queue
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}
It first checks whether tail is null. If it is, the head and tail of the FIFO queue have not been built yet, so the head node is created first, and then the thread's node is appended to the queue with CAS.
Notice that when the head node is created with CAS, it is simply new Node(): unlike the other nodes, it does not record a thread. Why is that? Because the head node is a dummy (virtual) node: it only indicates that some thread currently occupies state. Which thread that actually is was recorded earlier by the call to setExclusiveOwnerThread(current), i.e. in the exclusiveOwnerThread attribute.
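For reference, the relevant Node constructors in the JDK 8 source look roughly like this, which shows why the dummy head carries no thread:

Node() {    // used to create the dummy head node (and the SHARED marker)
}

Node(Thread thread, Node mode) {    // used by addWaiter
    this.nextWaiter = mode;
    this.thread = thread;
}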
After the execution of addWaiter, the thread successfully joined the queue. Now it's time to look at the last and most critical method, acquireQueued. This method is a bit difficult to understand. Let's first use three threads to simulate the corresponding steps of the previous code.
1. Suppose T1 acquires the lock successfully, and since FIFO is not initialized at this time, create the head node first.
2. T2 and T3 fail to grab state and join the queue, as shown below:
Okay, now the question is: what should be done with T2 and T3 after they join the queue? Should they block immediately? Blocking immediately means the thread goes from running to blocked, which involves a switch from user mode to kernel mode, and after being woken up it has to switch back from kernel mode to user mode, which is relatively expensive. So for threads that have just been queued, AQS lets them spin and compete for the lock, as shown in the figure below.
But you may spot a problem: since this is an exclusive lock, if the lock is held by T1, letting T2 and T3 spin forever does not help much; it only burns CPU and hurts performance. It is more appropriate to let them spin once or twice to compete for the lock and, if that fails, block, waking them up only after the node in front of them releases the lock.
In addition, if a thread is interrupted while spinning, or its spin times out, the node should go into the "cancelled" state.
To capture the possible states of each Node, AQS defines a waitStatus variable on it and performs the corresponding operations on a node according to this variable's value. Let's look at the possible values in the attribute definitions of the Node class:
static final class Node {
    // marker indicating that the node is waiting in shared mode
    static final Node SHARED = new Node();
    // marker indicating that the node is waiting in exclusive mode
    static final Node EXCLUSIVE = null;

    // the node has been cancelled because of a timeout or an interrupt
    static final int CANCELLED =  1;
    // a node may block (park) only if its predecessor is SIGNAL; when a SIGNAL node
    // releases the lock or is cancelled, it must unpark its successor
    static final int SIGNAL    = -1;
    // the thread is waiting on a condition variable (it acquires the lock, joins the condition
    // waiting queue, releases the lock, and can only return after being signalled and
    // re-acquiring the lock)
    static final int CONDITION = -2;
    // in shared mode, the wake-up should be propagated to subsequent nodes
    static final int PROPAGATE = -3;

    // wait status: initialized to CONDITION for condition nodes, otherwise 0 by default;
    // updated atomically via CAS
    volatile int waitStatus;
    // ...
}
From the definition of waitStatus we can already guess how AQS handles the spinning threads: if the predecessor of the current node is not head and its status is SIGNAL, the node goes into the blocked state. Let's look at the code to confirm the guess:
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            // if the predecessor is head, try to acquire the lock in the spin
            if (p == head && tryAcquire(arg)) {
                // point head at the current node; the original head node is dequeued
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            // if the predecessor is not head, or competing for the lock failed, maybe block
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            // if acquiring the lock fails because of an exception or similar during the spin,
            // cancel the node
            cancelAcquire(node);
    }
}
Let's look at the first case: the predecessor of the current node is the head node and tryAcquire succeeds.
You can see that the main work is to point head at the current node and take the original head node out of the queue; since the original head is then unreachable, it will be garbage collected.
Pay attention to the processing of setHead
private void setHead(Node node) {
    head = node;
    node.thread = null;
    node.prev = null;
}
After head is set to the current node, the node's thread and prev are set to null. As analyzed before, head is a dummy node and keeps no information other than waitStatus (the node state), so thread and prev are cleared here. The thread holding the lock is already recorded by exclusiveOwnerThread; having head record the thread again would not only be unnecessary, it would also force the release path to clear head's thread as well.
If the predecessor is not head, or the competition for the lock fails, shouldParkAfterFailedAcquire is called first to decide whether the thread should stop spinning and go into the blocked state:
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        // 1. the predecessor's status is SIGNAL, so the current node may block
        return true;
    if (ws > 0) {
        // 2. skip over (remove) cancelled predecessors
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        // 3. the predecessor's waitStatus is 0 or PROPAGATE: set it to SIGNAL
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}
This piece of code is a little roundabout, so you need to use your head a little bit and follow the above steps step by step.
1. First, recall from the comments on the Node class above that if the predecessor is SIGNAL, the current node may enter the blocked state.
As shown in the figure, the waitStatus of the predecessors of T2 and T3 are both SIGNAL, so T2 and T3 may block at this point.
2. If the predecessor has been cancelled, it needs to be removed. The code uses a rather clever trick to remove every node before the current one whose waitStatus is cancelled. Suppose there are four threads as below and T2 and T3 have been cancelled; after this logic runs, as shown in the following figure, the T2 and T3 nodes become unreachable and will be garbage collected.
3. If the predecessor's waitStatus is less than or equal to 0 (and not SIGNAL), it is first set to SIGNAL, because, as analyzed above, one of the conditions for the current node to block is that its predecessor is SIGNAL. That way, if the next spin finds the predecessor to be SIGNAL, true is returned (step 1).
If shouldParkAfterFailedAcquire returns true, the thread may enter the blocked state, so the next step, parkAndCheckInterrupt, actually blocks the thread.
private final boolean parkAndCheckInterrupt() {
    // block the thread
    LockSupport.park(this);
    // return whether the thread has been interrupted, and clear the interrupt status
    // (the interrupt is replayed after the lock is finally acquired)
    return Thread.interrupted();
}
Blocking the thread here is easy to understand, but why check whether the thread has been interrupted? Because if the thread receives an interrupt while blocked, the interrupt has to be re-issued after it wakes up (goes back to running) and acquires the lock (acquireQueued returns true), as shown below:
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        // if the thread was woken up because of an interrupt, replay the interrupt after acquiring the lock
        selfInterrupt();
}
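selfInterrupt itself is trivial; roughly, in the JDK source it just re-sets the interrupt flag on the current thread:

static void selfInterrupt() {
    Thread.currentThread().interrupt();
}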
At this point the lock-acquisition path has been analyzed, but one question is still open: we said a Node can end up in the cancelled state, but when exactly is a Node set to cancelled?
Looking back at acquireQueued
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        // the spinning lock-acquisition code is omitted here
    } finally {
        if (failed)
            // if acquiring the lock fails because of an exception or similar during the spin,
            // cancel the node
            cancelAcquire(node);
    }
}
Look at the last cancelAcquire method, which will be called if the lock acquisition fails due to exceptions and other reasons in the thread spin
private void cancelAcquire(Node node) {
    // if the node is null, return directly
    if (node == null)
        return;

    // the thread is about to be cancelled, so clear the node's thread
    node.thread = null;

    // the next step points node.prev at the first predecessor that is not cancelled
    // (i.e. all cancelled predecessors are skipped); waitStatus > 0 means cancelled
    Node pred = node.prev;
    while (pred.waitStatus > 0)
        node.prev = pred = pred.prev;

    // take the next node of the filtered pred; it is used below as the expected value
    // when CAS-setting pred's next pointer
    Node predNext = pred.next;

    // set the current node to the cancelled state
    node.waitStatus = Node.CANCELLED;

    // if the node being cancelled is the tail, CAS the tail to its predecessor;
    // if that succeeds, clear the (new) tail's next pointer
    if (node == tail && compareAndSetTail(node, pred)) {
        compareAndSetNext(pred, predNext, null);
    } else {
        // this part is a bit roundabout. Think about it: if the current node is cancelled,
        // shouldn't its predecessor simply be pointed at its successor? But as discussed before,
        // a node can only be woken up or blocked when its predecessor's status is SIGNAL, so
        // before setting pred's next we must make sure pred's status is (or can be set to) SIGNAL.
        // With that in mind, the following code is not hard to understand.
        int ws;
        if (pred != head &&
            ((ws = pred.waitStatus) == Node.SIGNAL ||
             (ws <= 0 && compareAndSetWaitStatus(pred, ws, Node.SIGNAL))) &&
            pred.thread != null) {
            Node next = node.next;
            if (next != null && next.waitStatus <= 0)
                compareAndSetNext(pred, predNext, next);
        } else {
            // otherwise (pred is head, or its status cannot be made SIGNAL), wake up the successor
            unparkSuccessor(node);
        }

        node.next = node; // help GC
    }
}
This code points node.prev at a predecessor whose waitStatus is not CANCELLED.
So when T4 executes this code, it becomes the following
You can see that the two CANCELLED nodes in the middle become unreachable and will be garbage collected.
3. If the current node is the tail node, the result is as follows; in this case the current node itself becomes unreachable and will be garbage collected.
4. If the current node is the successor of head, the result is as follows:
The CANCELLED node in this result will also become unreachable after the tail node, during its spin, calls shouldParkAfterFailedAcquire, as follows:
With that, the lock-acquisition process is fully analyzed. Now let's look at how the lock is released.
Lock release
Regardless of whether it is a fair lock or an unfair lock, the lock is eventually released by calling the following template method of AQS
// java.util.concurrent.locks.AbstractQueuedSynchronizer
public final boolean release(int arg) {
    // whether the lock was released successfully
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}
The tryRelease method is implemented in Sync, ReentrantLock's subclass of AQS:
// java.util.concurrent.locks.ReentrantLock.Sync
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // only the thread holding the lock may release it; if the current thread is not the holder,
    // throw an exception
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    // c == 0 means every hold of the lock has been released, so the exclusiveOwnerThread
    // can be cleared as well
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}
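As a quick illustration of the reentrancy counting that tryRelease undoes, here is a small usage sketch (not part of the JDK or of this article's source; the class name ReentrancyDemo is illustrative, and the printed counts are what ReentrantLock.getHoldCount() reports):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                                  // state: 0 -> 1
        lock.lock();                                  // reentrant acquire, state: 1 -> 2
        System.out.println(lock.getHoldCount());      // 2
        lock.unlock();                                // state: 2 -> 1, lock still held
        System.out.println(lock.isLocked());          // true
        lock.unlock();                                // state: 1 -> 0, fully released
        System.out.println(lock.isLocked());          // false
    }
}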
What to do after the lock is released successfully is obviously to wake up the node after head and let it compete for the lock.
// java.util.concurrent.locks.AbstractQueuedSynchronizer
public final boolean release(int arg) {
    // whether the lock was released successfully
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            // after a successful release, wake up the node after head so it can compete for the lock
            unparkSuccessor(h);
        return true;
    }
    return false;
}
Why is the condition for waking the successor h != null && h.waitStatus != 0?
If h == null, there are two possibilities. One is that a single thread took the lock without any contention and is now releasing it, so the queue was never created and there is no successor to wake. The other is that other threads are contending for the lock but the head node has not been initialized yet; since those threads are still running, no wake-up is needed either.
If h != null and h.waitStatus == 0, the successor of head is still spinning for the lock, i.e. its thread is running and does not need to be woken up.
If h != null and h.waitStatus < 0, the waitStatus may be SIGNAL or PROPAGATE; in either case the successor is blocked and needs to be woken up.

Let's look at the wake-up method unparkSuccessor:

private void unparkSuccessor(Node node) {
    // read head's waitStatus (assume it is SIGNAL) and CAS it to 0. Why? As analyzed several
    // times already, SIGNAL (-1) or PROPAGATE (-3) is only a flag meaning that the successor
    // may be woken up; since the successor is being woken up right now, the flag can be reset
    // to 0 (and even if the CAS fails, waking the successor is unaffected)
    int ws = node.waitStatus;
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);

    // the following finds the first non-cancelled node in the queue and wakes it up
    Node s = node.next;
    // if s is null, or it is in the cancelled state, s is an invalid node and the logic
    // inside the if has to run
    if (s == null || s.waitStatus > 0) {
        s = null;
        // scan backwards from the tail for the non-cancelled node closest to head
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null)
        LockSupport.unpark(s.thread);
}

The scan goes backwards from the tail because, in addWaiter/enq, a node's prev pointer is set before the CAS on tail, while its predecessor's next pointer is only filled in afterwards; the prev chain is therefore always complete, whereas the next chain may momentarily be missing links.
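To tie the acquire and release paths together, here is a small runnable sketch (illustrative only, not from this article; the class name ContentionDemo is an assumption) in which a second thread has to park in the AQS queue until the first thread releases the ReentrantLock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ContentionDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();   // unfair mode by default

        lock.lock();                                // main thread: CAS 0 -> 1 succeeds
        Thread t2 = new Thread(() -> {
            lock.lock();                            // fails, enqueues, parks until woken up
            try {
                System.out.println("t2 acquired the lock");
            } finally {
                lock.unlock();
            }
        });
        t2.start();

        TimeUnit.SECONDS.sleep(1);                  // t2 should be parked in the CLH queue by now
        System.out.println("queued threads: " + lock.getQueueLength());  // typically 1
        lock.unlock();                              // state 1 -> 0, unparks t2's node
        t2.join();
    }
}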