2025-01-29 Update From: SLTechnology News&Howtos — Shulou (Shulou.com) 06/01 Report
In this article I'd like to share an example-driven analysis of Java's concurrent queues. Most people don't know them in much depth, so I'm sharing this for your reference; I hope you learn a lot from it. Let's get to know them!
Concurrent queue
Java concurrent queues can be divided into two categories according to how they are implemented:
Blocking queue
Non-blocking queue
If you have read the lock implementations earlier in this concurrency series, you already know the difference between the two:
The former is based on locks, while the latter is based on the non-blocking CAS algorithm.
The common queues are as follows (the original diagram here listed them; these are the ones this article covers):

Blocking: ArrayBlockingQueue, LinkedBlockingQueue, DelayQueue, SynchronousQueue, LinkedTransferQueue, LinkedBlockingDeque
Non-blocking: ConcurrentLinkedQueue

Overwhelmed for a moment? Tempted to leave at the sight of this inhuman list? Don't worry, dear reader — it will all become clear soon.
Right now, you may have a question:
Why are there so many queues?
Locks come in many variants to handle different situations, and queues naturally do too — isn't that a bit like the single responsibility principle? So we need to understand how these queues are designed, and where each one is used.
If you open the non-blocking and blocking queues above in an IDE and look at their implementations, you will find that blocking queues support two additional operations that non-blocking queues do not:
Blocking insertion: when the queue is full, the thread inserting an element blocks until the queue is no longer full
Blocking removal: when the queue is empty, the thread taking an element blocks until the queue becomes non-empty
The seemingly messy enqueue/dequeue methods can be summarized with a table (this is the table from the BlockingQueue Javadoc):

Operation | Throws exception | Returns special value | Blocks  | Times out
Insert    | add(e)           | offer(e)              | put(e)  | offer(e, time, unit)
Remove    | remove()         | poll()                | take()  | poll(time, unit)
Throws an exception
When the queue is full, inserting another element with add throws an IllegalStateException (easy to understand)
When the queue is empty, removing another element with remove throws a NoSuchElementException (also easy to understand)
Returns a special value
When inserting an element, offer returns whether the insertion succeeded: true on success, false if the queue is full
When removing an element, poll returns null if the queue is empty
Blocks forever
When the queue is full, if a producer thread puts an element, the queue blocks the producer thread until space becomes available or the thread exits in response to an interrupt
When the queue is empty, if a consumer thread takes an element, the queue blocks the consumer thread until the queue is non-empty
As for the blocking itself, we fully covered the wait/notify mechanism for concurrent programming earlier in this series; the principle here is exactly the same.
Times out
As with locks, wherever there is blocking, flexible use requires timeout support: if the blocking time reaches the timeout, the method simply returns.
As to why insertion and removal go by so many different names, I don't know. To make them easier to remember, just memorize the blocking pair:
put and take — one puts elements in, the other takes them out.
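To see the four styles side by side, here is a minimal sketch using a capacity-1 ArrayBlockingQueue (the class and variable names here are mine, purely for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueMethodStyles {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        // Special-value style: offer/poll report success instead of throwing.
        System.out.println(queue.offer("a"));  // true, queue now full
        System.out.println(queue.offer("b"));  // false, no room

        // Exception style: add throws when the queue is full.
        try {
            queue.add("b");
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e.getMessage());
        }

        // Timed style: wait up to 100 ms for room, then give up.
        System.out.println(queue.offer("b", 100, TimeUnit.MILLISECONDS)); // false

        // Blocking style: take blocks until an element is available
        // (one already is, so it returns immediately).
        System.out.println(queue.take());      // a
        System.out.println(queue.poll());      // null, queue empty
    }
}
```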
At this point you should have a preliminary feel for Java's concurrent queues: the seemingly messy methods do follow a pattern. Now it's time for a rapid tour of the queues themselves — with the knowledge of the previous chapters, we can understand each of them in minutes.
ArrayBlockingQueue
As I've said before, naming in the JDK is quite deliberate: from the name alone you can tell the underlying implementation is an array. ArrayBlockingQueue is a bounded queue — you must specify a capacity when constructing it.
Spoon-fed explanations are easily forgotten, so where can you see all of this for yourself? The first paragraph of every queue's Javadoc summarizes its main characteristics in a sentence. I strongly recommend glancing at the start of the docs before reading the source — you will already have more than half of it in mind.
When discussing the Java AQS queue synchronizer and the application of ReentrantLock, we introduced the concepts of fair and unfair locks. ArrayBlockingQueue has the same concept — look at its constructor, where a ReentrantLock assists the implementation:
```java
public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity];
    lock = new ReentrantLock(fair);
    notEmpty = lock.newCondition();
    notFull  = lock.newCondition();
}
```

DelayQueue

DelayQueue is an unbounded blocking queue from which an element can only be taken once its delay has expired. Its elements implement the Delayed interface, whose compareTo method decides the ordering. For example:

```java
public int compareTo(Delayed other) {
    if (other == this) // compare zero if same object
        return 0;
    if (other instanceof ScheduledFutureTask) {
        ScheduledFutureTask<?> x = (ScheduledFutureTask<?>) other;
        long diff = time - x.time;
        if (diff < 0)
            return -1;
        else if (diff > 0)
            return 1;
        else if (sequenceNumber < x.sequenceNumber)
            return -1;
        else
            return 1;
    }
    long diff = getDelay(NANOSECONDS) - other.getDelay(NANOSECONDS);
    return (diff < 0) ? -1 : (diff > 0) ? 1 : 0;
}
```

Where does the above code come from? Open ScheduledFutureTask inside ScheduledThreadPoolExecutor and you will see it (ScheduledThreadPoolExecutor internally uses a DelayQueue).
To sum up, the following two situations are well suited to DelayQueue:
Cache system design: use a DelayQueue to hold the expiry times of cache entries, with one thread polling the queue; once an element can be taken from the DelayQueue, that cache entry's validity period has expired
Scheduled task scheduling: use a DelayQueue to hold the tasks to be executed that day along with their start times; once an element can be taken from the DelayQueue, the task is ready to run — TimerQueue, for example, is implemented this way
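As a minimal sketch of the scheduled-task idea (the DelayedTask class and its delays are made up for illustration): elements come out of a DelayQueue in expiry order, not insertion order.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// A minimal Delayed element: becomes available once its deadline passes.
class DelayedTask implements Delayed {
    final String name;
    final long deadlineNanos;

    DelayedTask(String name, long delayMillis) {
        this.name = name;
        this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        long diff = getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS);
        return (diff < 0) ? -1 : (diff > 0) ? 1 : 0;
    }
}

public class DelayQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("later", 200));  // expires second
        queue.put(new DelayedTask("sooner", 50));  // expires first

        // take() blocks until the head element's delay has expired.
        System.out.println(queue.take().name); // sooner
        System.out.println(queue.take().name); // later
    }
}
```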
SynchronousQueue
A blocking queue that does not store elements? Can something that stores nothing still be called a queue?
Yes. SynchronousQueue literally translates as "synchronous queue" — if elements could linger in the queue for any length of time, it would have to count as "asynchronous".
So with SynchronousQueue, every put() must wait for a take(), and vice versa; otherwise no more elements can be added.
How is it used in practice? Suppose you need to hand a shared value from one thread to another. Without SynchronousQueue, you might reach for a CountDownLatch, like this:
```java
ExecutorService executor = Executors.newFixedThreadPool(2);
AtomicInteger sharedState = new AtomicInteger();
CountDownLatch countDownLatch = new CountDownLatch(1);

Runnable producer = () -> {
    Integer producedElement = ThreadLocalRandom.current().nextInt();
    sharedState.set(producedElement);
    countDownLatch.countDown();
};

Runnable consumer = () -> {
    try {
        countDownLatch.await();
        Integer consumedElement = sharedState.get();
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
};

executor.execute(producer);
executor.execute(consumer);
```
Implementing this small handoff with a counter is clearly clumsy. Rewrite it with SynchronousQueue and it instantly feels different:
```java
ExecutorService executor = Executors.newFixedThreadPool(2);
SynchronousQueue<Integer> queue = new SynchronousQueue<>();

Runnable producer = () -> {
    Integer producedElement = ThreadLocalRandom.current().nextInt();
    try {
        queue.put(producedElement);
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
};

Runnable consumer = () -> {
    try {
        Integer consumedElement = queue.take();
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
};

executor.execute(producer);
executor.execute(consumer);
```
In fact, SynchronousQueue is what the Executors.newCachedThreadPool() method uses:
```java
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
```
Why do newSingleThreadExecutor and newFixedThreadPool use LinkedBlockingQueue, while newCachedThreadPool uses SynchronousQueue?
Because the single-thread and fixed thread pools have a limited number of threads, submitted tasks must wait in the LinkedBlockingQueue until a thread becomes free.
In the cached thread pool, the number of threads is practically unlimited (capped at Integer.MAX_VALUE), so a submitted task only needs to be handed off synchronously through the SynchronousQueue to an idle thread — which is why SynchronousQueue's throughput is often said to be higher than that of LinkedBlockingQueue and ArrayBlockingQueue.
LinkedTransferQueue
Simply put, TransferQueue provides a meeting point: a producer thread calls the transfer method to hand in an object and blocks until a consumer thread takes it out.
Doesn't the SynchronousQueue we just covered look a lot like a TransferQueue with capacity 0?
LinkedTransferQueue, however, has three methods that other blocking queues lack:
transfer(E): if a consumer is waiting to take an element, transfer hands the producer's element to it immediately; if no consumer is waiting, transfer places the element at the tail node of the queue and blocks until a consumer consumes it
tryTransfer(E): as "try" suggests, an attempt — if no consumer is waiting, it returns false immediately and never blocks
tryTransfer(E, long timeout, TimeUnit unit): tries, within the timeout, to hand the producer's element to a consumer; if the timeout expires with no consumer taking it, returns false
You see, all the blocking methods follow the same routine:
Blocking mode
Non-blocking mode with try
Non-blocking mode with try and timeout
Seeing this, you may feel LinkedTransferQueue is nothing special, but it differs from other blocking queues in an important way:
A BlockingQueue blocks the producer only when the queue is full, whereas a TransferQueue's transfer method blocks whenever no consumer is ready to take the element.
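A small sketch of that difference (names are illustrative): with no consumer waiting, tryTransfer gives up immediately and leaves nothing in the queue, while transfer blocks until a consumer appears.

```java
import java.util.concurrent.LinkedTransferQueue;

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedTransferQueue<String> queue = new LinkedTransferQueue<>();

        // No consumer is waiting, so tryTransfer fails immediately
        // and the element is NOT left in the queue.
        boolean handedOff = queue.tryTransfer("hello");
        System.out.println(handedOff);        // false
        System.out.println(queue.isEmpty());  // true

        // Start a consumer, then transfer() blocks until it takes the element.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("consumed: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        queue.transfer("hello"); // returns once the consumer has taken it
        consumer.join();
    }
}
```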
This echoes what Doug Lea said:
LinkedTransferQueue is actually a superset of ConcurrentLinkedQueue, SynchronousQueue (in "fair" mode), and unbounded LinkedBlockingQueues. And it's made better by allowing you to mix and match those features as well as take advantage of higher-performance implementation techniques.
Simple translation:
LinkedTransferQueue is a superset of ConcurrentLinkedQueue, SynchronousQueue (in fair mode), and the unbounded LinkedBlockingQueue; it lets you mix and match their features while benefiting from higher-performance implementation techniques.
So, in the right scenario, prefer LinkedTransferQueue.
Everything above concerns unidirectional FIFO queues. Next, let's look at the bidirectional queue.
LinkedBlockingDeque
LinkedBlockingDeque is a bidirectional blocking queue built on a linked-list structure. Anything with the suffix Deque is a double-ended queue — pronounced /dek/; when I first encountered the word, I thought it sounded like the name of an ice cream.
A bidirectional queue, as the name deserves, can insert and remove elements at both ends. So:
Because there is one more entry point for queue operations, when multiple threads enqueue at the same time, contention is roughly halved.
And since the queue has both a head and a tail, it offers several special methods that other blocking queues do not:
addFirst
addLast
xxxFirst
xxxLast
......
From this point of view, the bidirectional blocking queue is indeed quite efficient.
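A quick illustration of the extra head/tail methods (a made-up example):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class DequeDemo {
    public static void main(String[] args) {
        LinkedBlockingDeque<Integer> deque = new LinkedBlockingDeque<>(4);

        deque.addLast(2);   // deque: [2]
        deque.addLast(3);   // deque: [2, 3]
        deque.addFirst(1);  // deque: [1, 2, 3]

        // Remove from either end.
        System.out.println(deque.pollFirst()); // 1
        System.out.println(deque.pollLast());  // 3
        System.out.println(deque.pollFirst()); // 2
    }
}
```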
So where is the bidirectional blocking queue applied?
You may have heard of the "work stealing" pattern. It sounds like an unkind trick, but it is actually a very effective way to keep threads busy.
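The idea can be sketched with two threads sharing one LinkedBlockingDeque — this is only an illustration of the principle, not how ForkJoinPool actually implements it: the owner works from the head of its own deque while an idle "thief" steals from the tail, so the two seldom contend on the same end.

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkStealingSketch {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingDeque<Runnable> workerDeque = new LinkedBlockingDeque<>();
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 4; i++) {
            workerDeque.addLast(completed::incrementAndGet);
        }

        // Owner drains tasks from the head of its own deque...
        Thread owner = new Thread(() -> {
            Runnable task;
            while ((task = workerDeque.pollFirst()) != null) {
                task.run();
            }
        });
        // ...while an idle thread steals from the tail.
        Thread thief = new Thread(() -> {
            Runnable task;
            while ((task = workerDeque.pollLast()) != null) {
                task.run();
            }
        });

        owner.start();
        thief.start();
        owner.join();
        thief.join();
        System.out.println("tasks completed: " + completed.get()); // 4
    }
}
```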
That is all of "Example Analysis of Java Concurrent Queues" — thank you for reading! I hope the content shared here helps you.