This article explains how to understand Java concurrent queues and containers. The approach is simple, fast, and practical, so feel free to follow along.
BlockingQueue
A blocking queue, found under the java.util.concurrent package, solves the problem of passing data between threads safely and efficiently. "Blocking" means that under certain conditions a thread is suspended and automatically woken once a condition is met, all controlled through the API. Common blocking queues fall into two broad types, FIFO (first in, first out) and LIFO (last in, first out), and many other variants can be built from different implementations.

Start with the core API of BlockingQueue: put and take are the blocking pair, while add/offer and poll are the non-blocking pair (a short producer/consumer sketch follows this list).

Inserting data:
- put(anObj): adds anObj to the BlockingQueue; if the queue has no space, the calling thread blocks until space becomes available, then the insert proceeds.
- add(anObj): adds anObj to the BlockingQueue; returns true if the queue can accommodate it, otherwise throws an exception.
- offer(anObj): adds anObj to the BlockingQueue if possible; returns true if the queue can accommodate it, otherwise returns false.

Reading data:
- take(): removes the first object in the BlockingQueue; if the queue is empty, the thread blocks until a new object is added.
- poll(time): removes the first object in the BlockingQueue; if nothing can be removed immediately, it waits up to the time specified by the time parameter and returns null if no element becomes available.
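As a minimal illustration of the blocking pair, the sketch below (class and variable names are our own, not from the article) runs one producer and one consumer over a small ArrayBlockingQueue: put() stalls once the queue holds two elements, and take() stalls while it is empty.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PutTakeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("task-" + i);   // blocks once 2 elements are queued
                    System.out.println("produced task-" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    String task = queue.take(); // blocks while the queue is empty
                    System.out.println("consumed " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```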
Core BlockingQueue implementations

ArrayBlockingQueue

A bounded blocking queue backed by an array. Because it is array-based, lookups are fast while insertion and removal are comparatively slow. Producers and consumers share the same lock, so they cannot execute fully in parallel. The underlying lock is a standard mutex, ReentrantLock, meaning read/read, read/write, and write/write are all mutually exclusive; the constructor lets you choose whether to use a fair lock, and the default is an unfair lock. The consumption order is FIFO.
When producing and consuming data, elements are inserted into or removed from the preallocated array directly, without allocating or destroying extra node objects. Application: because production and consumption share one lock and the fixed-length array never needs per-operation object creation and destruction, ArrayBlockingQueue suits scenarios where you want tasks executed in queue order and want to avoid frequent GC.

LinkedBlockingQueue

A blocking queue backed by a linked list, so it adds and removes quickly but locates elements slowly. Note that a LinkedBlockingQueue created with the default constructor has a capacity of Integer.MAX_VALUE; in that case, once producers outpace consumers, system memory can be exhausted before the queue ever fills up and blocks. Creating a LinkedBlockingQueue with an explicit capacity avoids this extreme case (see the sketch below). Although the implementation also uses ReentrantLock, take and put use separate locks (production and consumption do not contend on the same lock), so under high concurrency it is still more efficient than ArrayBlockingQueue. put blocks when the queue is full until an element is consumed, and take blocks when the queue is empty until an element is put in.
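Here is a hedged sketch of the constructors discussed above (names are illustrative): an ArrayBlockingQueue using the optional fairness flag, and a LinkedBlockingQueue given an explicit capacity so that a fast producer cannot exhaust memory.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueConstruction {
    public static void main(String[] args) {
        // ArrayBlockingQueue requires a capacity; the second argument
        // switches on the (slower) fair lock ordering.
        BlockingQueue<Runnable> fixed = new ArrayBlockingQueue<>(1024, true);

        // Always give LinkedBlockingQueue an explicit capacity: the
        // no-arg constructor defaults to Integer.MAX_VALUE, so a fast
        // producer can exhaust memory before the queue ever blocks.
        BlockingQueue<Runnable> bounded = new LinkedBlockingQueue<>(10_000);

        System.out.println(fixed.remainingCapacity());   // 1024
        System.out.println(bounded.remainingCapacity()); // 10000
    }
}
```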
DelayQueue

DelayQueue is an unbounded queue, so inserting data (the producer side) never blocks; only retrieving data (the consumer side) can block. An element can be taken from a DelayQueue only after its specified delay has expired.
Application scenarios (a DelayedTask sketch follows this list):
1. Clients that occupy a connection for too long: once the idle timeout is exceeded, the connection can be removed.
2. Evicting long-unused cache entries: if an object in the queue has not been used beyond its idle time, remove it from the cache.
3. Task timeout handling.
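For illustration, here is a minimal element type implementing the Delayed interface, which DelayQueue requires of its elements; DelayedTask and its fields are our own names, not from the article.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayedTask implements Delayed {
    private final String name;
    private final long expiresAtNanos;

    DelayedTask(String name, long delayMillis) {
        this.name = name;
        this.expiresAtNanos = System.nanoTime()
                + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        // Remaining delay; take() only returns elements whose delay <= 0.
        return unit.convert(expiresAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("timeout-check", 500)); // never blocks the producer
        DelayedTask task = queue.take(); // blocks ~500 ms until the delay expires
        System.out.println("expired: " + task.name);
    }
}
```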
PriorityBlockingQueue

PriorityBlockingQueue never blocks data producers; it blocks only consumers, and only when there is no data to consume. You therefore need to control how fast producers generate data relative to how fast consumers drain it; otherwise, over time, all available heap memory will eventually be exhausted. An element added to a PriorityBlockingQueue defines its priority by implementing the Comparable interface and overriding compareTo(). Internally, the queue synchronizes threads with a single ReentrantLock.
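As a sketch of that Comparable-based priority logic (the Job class and its fields are illustrative, not from the original):

```java
import java.util.concurrent.PriorityBlockingQueue;

public class Job implements Comparable<Job> {
    final String name;
    final int priority; // lower value = higher priority in this sketch

    Job(String name, int priority) {
        this.name = name;
        this.priority = priority;
    }

    @Override
    public int compareTo(Job other) {
        // compareTo() defines the dequeue order for PriorityBlockingQueue.
        return Integer.compare(this.priority, other.priority);
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Job> queue = new PriorityBlockingQueue<>();
        queue.put(new Job("low", 9));    // put() never blocks: the queue is unbounded
        queue.put(new Job("urgent", 1));
        System.out.println(queue.take().name); // "urgent" comes out first
    }
}
```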
SynchronousQueue

SynchronousQueue is an unbuffered hand-off queue: every insert must wait for a matching remove by another thread, so elements pass directly from producer to consumer without ever being stored. For small workloads, this direct hand-off can be more efficient than a buffered queue.

Declaration: there are two ways to declare a SynchronousQueue, fair mode and unfair mode (see the hand-off sketch after this section):
- Fair mode: SynchronousQueue uses a fair lock together with a FIFO queue to park surplus producers and consumers, giving an overall fairness policy.
- Unfair mode (the SynchronousQueue default): SynchronousQueue uses an unfair lock together with a LIFO stack to manage surplus producers and consumers. In this mode, if producers and consumers differ in processing speed, starvation is likely: some producers' or consumers' data may never get processed.

ConcurrentLinkedQueue

ConcurrentLinkedQueue is lock-free, and in high-concurrency scenarios it is far more efficient than locking containers such as ArrayBlockingQueue and LinkedBlockingQueue.

Synchronized containers

Category one: Vector, Stack, and Hashtable are synchronized, thread-safe classes, but problems such as ConcurrentModificationException can still occur in high-concurrency scenarios.
Category two: the static factory wrappers provided by Collections (Collections.synchronizedXxx), which are likewise inefficient.
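A minimal hand-off sketch, assuming we want the fair (FIFO) mode via the boolean constructor argument; class and variable names are illustrative:

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        // true = fair mode (FIFO waiters); the no-arg constructor is unfair (LIFO).
        SynchronousQueue<String> handoff = new SynchronousQueue<>(true);

        Thread consumer = new Thread(() -> {
            try {
                // take() rendezvouses with exactly one put(); nothing is buffered.
                System.out.println("received: " + handoff.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        handoff.put("message"); // blocks until the consumer takes it
        consumer.join();
    }
}
```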
Concurrent containers
CopyOnWrite containers

A CopyOnWrite container copies itself on write: when we add an element, we do not add it to the current container directly; instead we first copy the current container into a new one, add the element to the new container, and then point the original reference at the new container. This is well suited to read-heavy, write-light scenarios, but it comes with the following problems:
- Data consistency: a CopyOnWrite container is weakly consistent; it guarantees only eventual consistency of the data, not real-time consistency. If written data must be readable immediately, do not use a CopyOnWrite container.
- Memory footprint: because of the copy-on-write mechanism, during a write both the old object and the newly written object reside in memory. If these objects are large and writes are frequent, this can easily trigger frequent Young GC and even Full GC. To mitigate this, you can compress the elements in the container to reduce the footprint of large objects, or use another concurrent container, such as ConcurrentHashMap, instead of a CopyOnWrite container.
There are two common CopyOnWrite containers: CopyOnWriteArrayList and CopyOnWriteArraySet, where CopyOnWriteArrayList is a thread-safe variant of ArrayList.
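A short illustrative example of the snapshot-iteration behavior described above (class and variable names are our own):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        List<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("listener-A");
        listeners.add("listener-B");

        // Iteration runs over the array snapshot taken when the iterator
        // was created, so a concurrent add() never throws
        // ConcurrentModificationException -- but the added elements are
        // also not visible to this loop (weak consistency).
        for (String l : listeners) {
            listeners.add("listener-C"); // safe: copies the backing array
            System.out.println(l);       // prints only A and B
        }
        System.out.println(listeners.size()); // 4: both adds took effect
    }
}
```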
ConcurrentHashMap
This section covers ConcurrentHashMap in two parts: JDK 1.7 and JDK 1.8.

ConcurrentHashMap in JDK 1.7

JDK 1.7 uses "lock segmentation" to reduce lock granularity: the whole map is divided into a series of segments, where each segment is roughly equivalent to a Hashtable. The locked object thus shrinks from the entire map to a single segment. ConcurrentHashMap is thread safe yet performant because reads are concurrent and need no lock; locks are taken only for put and remove operations, and locking affects only the segment being operated on, without touching reads and writes on other segments. Different segments can therefore be used concurrently, which greatly improves performance. From the source code, lookup, insertion, and deletion all follow this flow (an illustrative lock-striping sketch follows this list):
1. Use the key's hash to determine the segment (on insertion, resize if the segment has reached its resize threshold).
2. Determine the index into the segment's HashEntry array (obtaining the list head for insert/delete).
3. Traverse the linked list:
   - Query: compare with equals(), find the node whose key matches, and read it.
   - Insert: if a node with the same key is found, update its value; otherwise insert a new node.
   - Delete: find the node to delete, start a new list from that node's successor, then copy and re-insert the nodes from the original head up to the node before the deleted one, and finally set the new head as the current array slot, replacing the old list.
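The following is only a conceptual sketch of the lock-striping idea, not the actual JDK 1.7 source: a map split into 16 independently locked segments, so writes to different segments never contend (the real implementation also reads mostly without locking).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of lock striping only -- not the real JDK 1.7 code.
public class StripedMap<K, V> {
    private static final int SEGMENTS = 16;
    private final Map<K, V>[] tables;
    private final ReentrantLock[] locks;

    @SuppressWarnings("unchecked")
    public StripedMap() {
        tables = new Map[SEGMENTS];
        locks = new ReentrantLock[SEGMENTS];
        for (int i = 0; i < SEGMENTS; i++) {
            tables[i] = new HashMap<>();
            locks[i] = new ReentrantLock();
        }
    }

    private int segmentFor(Object key) {
        // Spread the hash, then pick one of the 16 segments.
        return (key.hashCode() & 0x7fffffff) % SEGMENTS;
    }

    public V put(K key, V value) {
        int s = segmentFor(key);
        locks[s].lock(); // only this segment is locked; others stay usable
        try {
            return tables[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }

    public V get(K key) {
        int s = segmentFor(key);
        locks[s].lock(); // unlike the JDK, this sketch also locks reads
        try {
            return tables[s].get(key);
        } finally {
            locks[s].unlock();
        }
    }
}
```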
ConcurrentHashMap in JDK 1.8

JDK 1.8 makes many optimizations over JDK 1.7:
1. The segments field is gone; data is stored directly in a transient volatile Node<K,V>[] table, and the table's individual elements serve as locks. Locking each row of data reduces the lock granularity further and lowers the probability of concurrent conflicts.
2. The data structure changes from table array + linked list to table array + linked list + red-black tree. For a hash table, the core requirement is spreading key hashes evenly across the array; with a uniform hash, each bucket's list has length mostly 0 or 1. Reality is not always so ideal: although ConcurrentHashMap's default load factor is 0.75, buckets can still grow too long with large data volumes or bad luck. With a plain singly linked list, looking up a node costs O(n); so in JDK 1.8, lists longer than 8 (the default threshold) are converted to red-black trees, reducing query time complexity to O(log N) and improving performance.
3. A new field, transient volatile CounterCell[] counterCells, makes it easy to count all elements in the collection, performing far better than size() in JDK 1.7.

I believe you now have a deeper understanding of Java concurrent queues and containers. Try these out in practice, and keep learning.
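To close with a usage example (illustrative, not from the original article): ConcurrentHashMap's atomic per-key operations, plus the long-based mappingCount() that JDK 1.8 added alongside the counterCells mechanism.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();

        // merge() performs an atomic read-modify-write per key, so
        // concurrent counters need no external synchronization.
        hits.merge("/index", 1, Integer::sum);
        hits.merge("/index", 1, Integer::sum);

        System.out.println(hits.get("/index"));   // 2
        System.out.println(hits.mappingCount());  // long-based size: 1
    }
}
```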