SLTechnology News & Howtos > Development | Shulou (shulou.com) | Updated 2025-01-18
In this article the editor shares what the thread concurrency coordination tools CountDownLatch and CyclicBarrier in the JDK are used for. I hope you get something out of it after reading; let's work through it together.
CountDownLatch. As we all know, one mark of good code is that the names of classes, variables, and so on are self-explanatory: when you see the name, you can roughly tell what business semantics it expresses. CountDownLatch is a vivid example. Count means counting, Down means the counter decreases, and Latch is the gate that the decreasing counter eventually opens. Taken together, the name literally means "open the latch once the count has been counted down". After reading the rest of this article and looking back, you will likely agree that the name is very apt.
So, from the class name alone we can guess that its job is to control threads through a decrementing counter. Let's check whether the official description says the same thing.
/**
 * A synchronization aid that allows one or more threads to wait until
 * a set of operations being performed in other threads completes.
 *
 * <p>A {@code CountDownLatch} is initialized with a given count.
 * The {@link #await await} methods block until the current count reaches
 * zero due to invocations of the {@link #countDown} method, after which
 * all waiting threads are released and any subsequent invocations of
 * {@link #await await} return immediately. This is a one-shot phenomenon
 * -- the count cannot be reset. If you need a version that resets the
 * count, consider using a {@link CyclicBarrier}.
 */
The comment means, roughly: CountDownLatch is a thread synchronizer that lets one or more threads block until the work in other threads has completed. It is initialized with a count, and it blocks waiting threads until that count reaches 0, at which point the blocked threads are released. It is also a one-shot synchronizer: the count cannot be reset.
From the JDK's official description, we can identify three core features of CountDownLatch:
1. It is a thread synchronizer that coordinates when threads are allowed to proceed.
2. It is essentially a counter, used as a starting gun for threads.
3. It is one-shot: once the count reaches zero, it cannot be reset or reused.
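These three properties can be seen in a minimal, runnable sketch (the class name and thread count here are illustrative, not from the original scenario):

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    // Starts two worker threads and waits for both; returns the final count.
    static long run() {
        CountDownLatch latch = new CountDownLatch(2); // counter initialized to 2

        Runnable worker = () -> {
            // ... do some work, then signal completion
            latch.countDown();
        };
        new Thread(worker).start();
        new Thread(worker).start();

        try {
            latch.await();               // blocks until the count reaches 0
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return latch.getCount();         // 0: the latch is spent and cannot be reset
    }

    public static void main(String[] args) {
        System.out.println(run());       // prints 0
    }
}
```

Note that after the latch trips, further calls to await() return immediately; there is no way to wind the count back up.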
Now that we know what CountDownLatch is, let's look at its usage scenarios and the circumstances in which it can help us solve problems in our code.
Usage scenario. As described above, CountDownLatch is like the starting gun fired by the referee at a track meet: once all the contestants are ready, the gun goes off and all the athletes start at once. In a Java multithreading scenario, CountDownLatch plays the coordinator, blocking threads until its counter has been reduced to 0. Suppose a monitoring and alerting platform has a business scenario where you need to query alarm information from the alarm service and work order information from the work order service, and then analyze which alarms have no corresponding work order. The old system does this sequentially; see the simplified pseudocode below:
List alarmList = alarmService.getAlarm();
List workOrderList = workOrderService.getWorkOrder();
List notTransferToWorkOrder = analysis(alarmList, workOrderList);
Can you see what needs to be optimized in this pseudocode? Let's analyze it together. The code may be fine while data volumes are small, but once there are many alarms and work orders, either query may become slow, and the whole analysis task then hits a performance bottleneck. So how should we optimize? From both the business and the code we can see that fetching alarm information and fetching work order information are not coupled, yet in the code above they run sequentially. The optimization, then, is to run them in parallel.
The optimized pseudocode is as follows:
Executor executor = Executors.newFixedThreadPool(2);
executor.execute(() -> { alarmList = alarmService.getAlarm(); });
executor.execute(() -> { workOrderList = workOrderService.getWorkOrder(); });
List notTransferToWorkOrder = analysis(alarmList, workOrderList);
By using a thread pool we fetch the alarm information and the work order information concurrently, instead of fetching one and then the other, which is more efficient. But there is still a problem: the fetches run on pool threads, and the main thread has no way of knowing when they have finished, so it cannot tell when it is safe to start the analysis. This is where CountDownLatch comes in handy: it lets the main thread wait, and releases the follow-up logic once the condition is met. It is like a company outing where everyone agrees to assemble at the gate at 08:30; the bus does not leave until all participants have arrived.
The pseudocode using CountDownLatch is as follows:
Executor executor = Executors.newFixedThreadPool(2);
CountDownLatch latch = new CountDownLatch(2);
executor.execute(() -> { alarmList = alarmService.getAlarm(); latch.countDown(); });
executor.execute(() -> { workOrderList = workOrderService.getWorkOrder(); latch.countDown(); });
latch.await();
List notTransferToWorkOrder = analysis(alarmList, workOrderList);

The underlying implementation: initialization. Before using CountDownLatch we initialize it, and the initialization really does two things: it creates the AQS synchronization queue, and it sets the AQS state to count. state is the core variable of AQS (AQS is the foundation of the java.util.concurrent package; we will analyze it in the next article).
From the code we can see that initialization actually creates an instance of the inner class Sync, which extends AQS and overrides its acquire and release template methods; CountDownLatch then blocks and wakes threads by calling into AQS through this Sync object. The Sync inner class is shown below. Its tryAcquireShared method overrides an AQS template method and is used to acquire the shared lock. In CountDownLatch, whether the lock is acquired is decided by whether state is 0: if state is 0, the acquisition succeeds and the thread does not block; otherwise the acquisition fails and the thread blocks.
private static final class Sync extends AbstractQueuedSynchronizer {
    private static final long serialVersionUID = 4982264981922014374L;

    Sync(int count) {
        setState(count);
    }

    int getCount() {
        return getState();
    }

    // Attempt to acquire the shared lock (decided by state)
    protected int tryAcquireShared(int acquires) {
        return (getState() == 0) ? 1 : -1;
    }

    // Attempt to release the shared lock (decided by state)
    protected boolean tryReleaseShared(int releases) {
        // Decrement count; signal when transition to zero
        for (;;) {
            int c = getState();
            if (c == 0)
                return false;
            int nextc = c - 1;
            if (compareAndSetState(c, nextc))
                return nextc == 0;
        }
    }
}

Counter decrement. As described in the scenario above, each thread performs a countDown operation after completing its own business, indicating that it is done. countDown also checks whether the count has reached 0; if so, all waiting threads need to be woken up. As the following code shows, it simply calls the releaseShared method of the parent class AQS.
public void countDown() {
    sync.releaseShared(1);
}

The tryReleaseShared method attempts to release the lock; if this call decrements the count to 0, all waiting threads are released.
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) {
        doReleaseShared();
        return true;
    }
    return false;
}

The general execution logic of this code can be seen in the following figure.
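As an aside, the retry loop inside tryReleaseShared is a standard compare-and-set pattern. The same idea can be sketched outside AQS with a plain AtomicInteger; this is an illustration of the pattern, not JDK code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCountDown {
    private final AtomicInteger state;

    CasCountDown(int count) {
        state = new AtomicInteger(count);
    }

    // Mirrors the loop in Sync.tryReleaseShared: retry the CAS until the
    // decrement sticks; return true only on the transition to zero.
    boolean countDown() {
        for (;;) {
            int c = state.get();
            if (c == 0)
                return false;            // already at zero, nothing to release
            int next = c - 1;
            if (state.compareAndSet(c, next))
                return next == 0;        // true exactly once, when we hit 0
        }
    }

    int getCount() {
        return state.get();
    }
}
```

countDown() returns true for exactly one caller, the one that moves the counter to zero, and in AQS that is the caller that goes on to wake the waiting threads via doReleaseShared.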
Blocking the thread: await. The function of await is to block the current thread until the count drops to 0. It actually calls acquireSharedInterruptibly through the Sync instance, a method Sync inherits from its parent class AQS (the timed variant, await(long, TimeUnit), goes through tryAcquireSharedNanos instead).
public void await() throws InterruptedException {
    sync.acquireSharedInterruptibly(1);
}

acquireSharedInterruptibly is AQS's interrupt-responsive way of acquiring the shared lock. tryAcquireShared was described above: it attempts to acquire the shared lock, and if the attempt fails the thread is added to the AQS synchronization queue to wait, that is, it blocks.
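Because await() goes through acquireSharedInterruptibly, a thread blocked on the latch can be released by an interrupt as well as by countDown. A small sketch, with illustrative names and timings:

```java
import java.util.concurrent.CountDownLatch;

public class InterruptibleAwaitDemo {
    // Returns true if the waiting thread observed an InterruptedException.
    static boolean run() throws Exception {
        CountDownLatch latch = new CountDownLatch(1); // never counted down
        final boolean[] interrupted = {false};

        Thread waiter = new Thread(() -> {
            try {
                latch.await();           // blocks: count is still 1
            } catch (InterruptedException e) {
                interrupted[0] = true;   // released by interrupt, not countDown
            }
        });
        waiter.start();
        Thread.sleep(100);               // give the waiter time to block
        waiter.interrupt();
        waiter.join();
        return interrupted[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());       // prints true
    }
}
```

Even if the interrupt arrives before the waiter reaches await(), the Thread.interrupted() check at the top of acquireSharedInterruptibly makes the call throw immediately, so the outcome is the same.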
public final void acquireSharedInterruptibly(int arg)
        throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    if (tryAcquireShared(arg) < 0)
        doAcquireSharedInterruptibly(arg);
}

CyclicBarrier. Again, let's start from the literal meaning: Cyclic means recurring, and Barrier means a fence or obstacle, so the name literally reads "reusable fence". Same routine as before: before digging into CyclicBarrier, let's look at how the JDK describes it.
/**
 * A synchronization aid that allows a set of threads to all wait for
 * each other to reach a common barrier point. CyclicBarriers are
 * useful in programs involving a fixed sized party of threads that
 * must occasionally wait for each other. The barrier is called
 * cyclic because it can be re-used after the waiting threads
 * are released.
 *
 * <p>A {@code CyclicBarrier} supports an optional {@link Runnable} command
 * that is run once per barrier point, after the last thread in the party
 * arrives, but before any threads are released.
 * This barrier action is useful
 * for updating shared-state before any of the parties continue.
 */
From the JDK's description we can see that CyclicBarrier is also a thread synchronization coordinator: it coordinates the execution of a group of threads, and once the specified number of threads have reached the barrier, the barrier is released and the blocked threads resume. So it looks almost the same as CountDownLatch, but there is a difference: CyclicBarrier is reusable, while CountDownLatch is one-shot. Let's look at the core fields of CyclicBarrier.
// The lock guarding the barrier entry
private final ReentrantLock lock = new ReentrantLock();
// The condition that threads wait on until the barrier is tripped
private final Condition trip = lock.newCondition();
// The number of parties (threads) intercepted by the barrier
private final int parties;
// The command to run when the barrier is tripped
private final Runnable barrierCommand;
// The current generation of the barrier
private Generation generation = new Generation();
// The number of parties still waiting in the current generation
private int count;

The implementation of CyclicBarrier is similar in spirit to CountDownLatch but different in mechanism: CountDownLatch is built on AQS's shared mode, while CyclicBarrier is implemented with a ReentrantLock and a Condition.
CyclicBarrier maintains two variables internally, parties and count. parties is the number of threads that must arrive in each generation, while count is the internal counter. count equals parties at initialization, and each call to await decrements count by 1, similar to countDown above; when count reaches 0 the barrier trips and count is reset to parties for the next generation.
Take the business scenario above as an example. We solved the coordination between the alarm query and the work order query with CountDownLatch, but a new problem appears: alarm and work order data are generated continuously in real time, and a CountDownLatch can coordinate the threads only once. If every subsequent round of analysis must again wait for both queries to finish, CountDownLatch cannot help. In other words, when threads need to repeatedly wait for each other before each round of follow-up work, CyclicBarrier is the tool to use.
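CyclicBarrier's reusability can be demonstrated with a minimal sketch in which the same barrier trips twice, once per generation (the party size and the trip counting are illustrative, not from the original scenario):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierReuseDemo {
    // Each of 2 threads passes the same barrier twice; returns the trip count.
    static int run() throws InterruptedException {
        AtomicInteger trips = new AtomicInteger();
        // The barrier action runs once per trip, by the last arriving thread.
        CyclicBarrier barrier = new CyclicBarrier(2, trips::incrementAndGet);

        Runnable task = () -> {
            try {
                barrier.await();   // generation 1
                barrier.await();   // generation 2: same barrier, reused
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return trips.get();        // 2: the barrier tripped twice
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 2
    }
}
```

Doing the same with CountDownLatch would require constructing a fresh latch for every round, since a tripped latch cannot be reset.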
Initialization. CyclicBarrier has two constructors: one specifies both the number of threads to coordinate each round and a task to run when the barrier is tripped; the other only sets the number of threads, with no barrier task.
public CyclicBarrier(int parties, Runnable barrierAction) {
    if (parties <= 0) throw new IllegalArgumentException();
    this.parties = parties;
    this.count = parties;
    this.barrierCommand = barrierAction;
}

public CyclicBarrier(int parties) {
    this(parties, null);
}

The waiting itself happens in the private dowait method. Its waiting loop, abridged from the JDK source, looks like this:

for (;;) {
    try {
        if (!timed)
            trip.await();
        else if (nanos > 0L)
            nanos = trip.awaitNanos(nanos);
    } catch (InterruptedException ie) {
        if (g == generation && !g.broken) {
            breakBarrier();
            throw ie;
        } else {
            // We're about to finish waiting even if we had not
            // been interrupted, so this interrupt is deemed to
            // "belong" to subsequent execution.
            Thread.currentThread().interrupt();
        }
    }

    if (g.broken)
        throw new BrokenBarrierException();

    if (g != generation)
        return index;

    if (timed && nanos <= 0L) {
        breakBarrier();
        throw new TimeoutException();
    }
}

When the last thread arrives, count reaches 0, the barrier action (if any) runs, and a new generation begins: count is reset to parties and all waiting threads are signalled on the trip condition.
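The timed branch of the waiting loop is also what breaks the barrier on timeout: the timed-out thread breaks the barrier and throws TimeoutException, and any other thread parked on the same generation gets a BrokenBarrierException. A sketch with an illustrative party size and timeout:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BrokenBarrierDemo {
    // One party of 2 never shows up; the timed await breaks the barrier.
    static String run() throws InterruptedException {
        CyclicBarrier barrier = new CyclicBarrier(2);
        try {
            barrier.await(50, TimeUnit.MILLISECONDS); // partner never arrives
            return "tripped";
        } catch (TimeoutException e) {
            // The timeout both throws here and marks the barrier broken.
            return barrier.isBroken() ? "broken" : "timeout-only";
        } catch (BrokenBarrierException e) {
            return "broken-by-other";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints broken
    }
}
```

Once broken, the barrier stays unusable until reset() is called, which starts a fresh generation.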