2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
In this article, the editor shares an example-based analysis of blocking queues and thread pools in Java. Since most readers are not very familiar with these topics, this article is offered as a reference; I hope you learn a lot from it. Let's work through it together.
[1] Blocking queues
1. What is a blocking queue?
A blocking queue is a queue that supports two additional operations:
① Blocking insertion: when the queue is full, a thread inserting an element blocks until the queue is no longer full.
② Blocking removal: when the queue is empty, a thread taking an element waits until the queue becomes non-empty.
In concurrent programming, the producer-consumer model solves most concurrency problems. By balancing the throughput of the producing threads and the consuming threads, the model improves the overall data-processing speed of the program.
In the threading world, a producer is a thread that produces data and a consumer is a thread that consumes it. In multithreaded development, if producers are fast and consumers are slow, producers must wait for consumers to catch up before producing more data. Likewise, if consumers out-pace producers, consumers must wait for producers.
The producer-consumer model exists to smooth out this mismatch in capacity. It removes the tight coupling between producers and consumers by placing a container between them: the two sides never communicate directly, but only through a blocking queue. After producing data, a producer does not wait for a consumer to process it; it simply drops the data into the queue. A consumer does not ask the producer for data; it takes it straight from the queue. The blocking queue acts as a buffer that balances the processing capacity of the two sides.
Blocking queues are therefore common in producer-consumer scenarios: producers are the threads that add elements to the queue, consumers are the threads that fetch them, and the queue is the container producers store elements in and consumers take them from.
Blocking queues also appear in Android development: the MessageQueue in the Handler mechanism is a priority-ordered blocking queue.
2. What is the use of blocking queues?
Decoupling: they decouple producers from consumers.
Balancing: they absorb the performance difference between producers and consumers.
3. Common blocking queues and their basic use
① What are the common blocking queues?
The common blocking queues are as follows:
ArrayBlockingQueue: a bounded blocking queue backed by an array
LinkedBlockingQueue: a blocking queue backed by a linked list (capacity defaults to Integer.MAX_VALUE)
PriorityBlockingQueue: an unbounded blocking queue that orders elements by priority
DelayQueue: an unbounded blocking queue whose elements can only be taken after their delay expires
SynchronousQueue: a blocking queue with no internal capacity; every insert must wait for a corresponding take
LinkedTransferQueue: an unbounded blocking queue backed by a linked list, with an additional transfer operation
LinkedBlockingDeque: a double-ended blocking queue backed by a linked list
② Common ways blocking queues handle operations (not all of them block)
Method \ Handling   Throws exception   Returns special value   Always blocks   Timeout exit
Insert              add(e)             offer(e)                put(e)          offer(e, time, unit)
Remove              remove()           poll()                  take()          poll(time, unit)
Check               element()          peek()                  not available   not available
Only the put and take methods block indefinitely.
Throws an exception: when the queue is full, inserting another element throws IllegalStateException("Queue full"); taking an element from an empty queue throws NoSuchElementException.
Returns a special value: the insert method reports whether the element was added, returning true on success (offer returns false when the queue is full); the removal method (poll) takes an element from the queue and returns null if the queue is empty.
Always blocks: when the queue is full and a producer thread calls put, the queue blocks the producer until space becomes available or the thread exits in response to an interrupt; when the queue is empty and a consumer thread calls take, the queue blocks the consumer until the queue is non-empty.
Timeout exit: when the queue is full and a producer inserts an element, the queue blocks the producer only for a bounded period; once the specified time elapses, the call returns and the producer thread moves on.
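The four handling styles above can be observed directly with a small ArrayBlockingQueue. A minimal sketch (the capacity of 2 and the 100 ms timeout are arbitrary choices for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueMethodsDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(2);

        // Returns a special value: offer() reports success with a boolean.
        System.out.println(q.offer("a")); // true
        System.out.println(q.offer("b")); // true
        System.out.println(q.offer("c")); // false -- queue is full, no exception

        // Throws an exception: add() on a full queue throws IllegalStateException.
        try {
            q.add("c");
        } catch (IllegalStateException e) {
            System.out.println("add() threw: " + e.getMessage()); // "Queue full"
        }

        // Timeout exit: poll(time, unit) waits up to the timeout, then gives up.
        q.clear();
        String v = q.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(v); // null -- the queue stayed empty

        // Returns a special value: poll() on an empty queue returns null immediately.
        System.out.println(q.poll()); // null
    }
}
```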
③ Basic usage of a blocking queue
Three threads add data and three threads consume it:
import java.util.concurrent.ArrayBlockingQueue;

public class MyBlockingQueue {
    static ArrayBlockingQueue<String> abq = new ArrayBlockingQueue<>(3);

    public static void main(String[] args) {
        // Producer threads
        for (int i = 0; i < 3; i++) {
            new Thread(() -> producer(), "producerThread" + i).start();
        }
        // Consumer threads
        for (int i = 0; i < 3; i++) {
            new Thread(() -> consumer(), "consumerThread" + i).start();
        }
    }

    private static void consumer() {
        while (true) {
            try {
                String msg = abq.take(); // blocks while the queue is empty
                System.out.println(Thread.currentThread().getName() + " -> receive msg: " + msg);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private static void producer() {
        for (int i = 0; i < 100; i++) {
            try {
                abq.put("[" + i + "]"); // blocks while the queue is full
                System.out.println(Thread.currentThread().getName() + " -> send msg: " + i);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Execution result:
producerThread1 -> send msg: 0
producerThread2 -> send msg: 0
producerThread0 -> send msg: 0
consumerThread1 -> receive msg: [0]
producerThread1 -> send msg: 1
consumerThread2 -> receive msg: [0]
producerThread1 -> send msg: 2
producerThread2 -> send msg: 1
consumerThread1 -> receive msg: [0]
consumerThread0 -> receive msg: [1]
...
[2] Java thread pools
1. Why do we need Java thread pools? What are the benefits of using them?
① Reduced resource consumption.
Reusing already-created threads reduces the cost of thread creation and destruction.
② Improved response time.
Executing a task in a Java program normally involves the following steps:
1. Create a thread -> 2. Execute the task -> 3. Destroy the thread
With a pool, a task can be executed as soon as it arrives, without waiting for a thread to be created. Suppose the time a server spends completing one task breaks down as: T1 thread creation, T2 task execution, T3 thread destruction. If T1 + T3 is much greater than T2, a thread pool can be used to improve server performance. Thread-pool technology focuses on shortening or rescheduling T1 and T3: it moves them into the server program's startup and shutdown phases, or into idle periods, so that no T1 or T3 overhead is paid while the server is handling client requests.
③ Improved thread manageability.
Threads are a scarce resource. Creating them without limit not only consumes system resources but also reduces system stability. A thread pool allows threads to be uniformly allocated, tuned, and monitored.
2. Which thread pools does Java mainly provide?
Java (through the Executors factory class) mainly provides the following four thread pools:
1. newCachedThreadPool: creates a thread pool that can grow without bound. It suits lightly loaded scenarios running short-lived asynchronous tasks (tasks start quickly and finish quickly, so they do not cause excessive CPU context switching).
2. newFixedThreadPool: creates a fixed-size thread pool. Because it uses an unbounded blocking queue, the actual number of threads never changes. It suits heavily loaded scenarios where the thread count must be capped (guaranteeing the number of threads is controlled, so too many threads do not put serious load on the system).
3. newSingleThreadExecutor: creates a single-thread pool, suitable for tasks that must execute sequentially.
4. newScheduledThreadPool: suitable for delayed or periodic tasks.
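The four factory methods above can be created and exercised as follows. A minimal sketch (the pool sizes and the 100 ms delay are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorsDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService cached = Executors.newCachedThreadPool();     // grows on demand
        ExecutorService fixed  = Executors.newFixedThreadPool(4);     // exactly 4 threads
        ExecutorService single = Executors.newSingleThreadExecutor(); // tasks run in order

        single.execute(() -> System.out.println("first"));
        single.execute(() -> System.out.println("second")); // always printed after "first"

        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(1);
        scheduled.schedule(() -> System.out.println("delayed"), 100, TimeUnit.MILLISECONDS);

        // Always shut pools down, or their non-daemon threads keep the JVM alive.
        cached.shutdown();
        fixed.shutdown();
        single.shutdown();
        scheduled.shutdown(); // already-scheduled delayed tasks still run by default
        scheduled.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```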
3. The inheritance hierarchy of the thread pool classes
The class relationships around ThreadPoolExecutor:
Executor is an interface and the foundation of the Executor framework; it separates task submission from task execution.
The ExecutorService interface extends Executor, adding shutdown() and submit(); it can be considered the real thread pool interface.
The AbstractExecutorService abstract class implements most of the methods in the ExecutorService interface.
ThreadPoolExecutor is the core implementation class of the thread pool; it executes submitted tasks.
The ScheduledExecutorService interface extends ExecutorService and adds "periodic execution" to it.
ScheduledThreadPoolExecutor is an implementation class that can run commands after a given delay or execute them periodically; it is more flexible and powerful than Timer.
Executor -> ExecutorService -> AbstractExecutorService -> ThreadPoolExecutor
The common thread pools from section 2 are all built on ThreadPoolExecutor, so let's analyze that class.
4. The meaning of the ThreadPoolExecutor parameters
corePoolSize
The number of core threads in the pool. When a task is submitted, the pool creates a new thread to execute it until the current thread count equals corePoolSize; once it does, further submitted tasks are placed in the blocking queue to wait for execution.
If the pool's prestartAllCoreThreads() method is called, the pool creates and starts all core threads ahead of time.
maximumPoolSize
The maximum number of threads the pool allows. If the blocking queue is full and tasks keep being submitted, a new thread is created to execute the task, provided the current thread count is less than maximumPoolSize.
keepAliveTime
How long an idle thread stays alive, that is, how long a thread survives when it has no task to execute. By default this parameter only matters for threads beyond corePoolSize.
unit (TimeUnit)
The time unit of keepAliveTime.
workQueue
workQueue must be a BlockingQueue. When all corePoolSize threads in the pool are busy, newly submitted tasks enter this blocking queue and wait; the workQueue is what gives the pool its blocking behavior.
In general, prefer bounded queues, because using an unbounded queue as the work queue affects the pool as follows:
1) Once the thread count reaches corePoolSize, new tasks wait in the unbounded queue, so the thread count never exceeds corePoolSize.
2) Because of 1, maximumPoolSize becomes a meaningless parameter with an unbounded queue.
3) Because of 1 and 2, keepAliveTime becomes a meaningless parameter with an unbounded queue.
4) Most importantly, an unbounded queue may exhaust system resources; a bounded queue helps prevent resource exhaustion. Even with a bounded queue, keep the queue size within a reasonable range.
threadFactory
The factory used to create threads. A custom thread factory can give each new thread a recognizable name, and can configure threads more freely, for example marking all of them as daemon threads.
The default threadFactory in the Executors static factory names threads following the pattern "pool-N-thread-M".
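A custom ThreadFactory is a single method. This sketch names threads with a recognizable prefix and marks them as daemon threads (the class name and "worker" prefix are illustrative, not from the original article):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger count = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + count.getAndIncrement());
        t.setDaemon(true); // example of "more settings": make every pool thread a daemon
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2, new NamedThreadFactory("worker"));
        pool.execute(() -> System.out.println(Thread.currentThread().getName())); // worker-1
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```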
rejectedExecutionHandler
The pool's saturation policy: when the blocking queue is full and there are no idle worker threads, any further task submissions must be handled by some policy. The thread pool provides four:
(1) AbortPolicy: throw an exception directly (the default policy).
(2) CallerRunsPolicy: run the task on the caller's own thread.
(3) DiscardOldestPolicy: discard the oldest task in the blocking queue and execute the current task.
(4) DiscardPolicy: silently discard the task.
Of course, you can also implement the RejectedExecutionHandler interface yourself and customize the saturation policy for your scenario, for example logging rejected tasks or persisting tasks that cannot be handled.
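A custom saturation policy is one method. This sketch just prints the rejected task, standing in for the real logging or persistence the text mentions (the class name and pool sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingRejectedHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Real code might write to a proper logger or persist the task for retry.
        System.out.println("rejected task, pool size: " + executor.getPoolSize());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),   // tiny bounded queue to force saturation
                new LoggingRejectedHandler());

        CountDownLatch block = new CountDownLatch(1);
        pool.execute(() -> { try { block.await(); } catch (InterruptedException ignored) {} });
        pool.execute(() -> {}); // fills the queue
        pool.execute(() -> {}); // queue full, single thread busy -> handler is invoked
        block.countDown();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```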
5. Thread pool workflow (mechanism)
1. If fewer threads than corePoolSize are running, create a new thread to execute the task (note that this step requires acquiring a global lock).
2. If at least corePoolSize threads are running, add the task to the BlockingQueue.
3. If the task cannot be added to the BlockingQueue (the queue is full), create a new thread to process it.
4. If creating that thread would push the running thread count past maximumPoolSize, reject the task and call RejectedExecutionHandler.rejectedExecution().
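The four steps above can be triggered one by one with a hand-built ThreadPoolExecutor. A sketch with core 1, max 2, and queue capacity 1 (sizes chosen only to make each step fire):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolWorkflowDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1)); // default AbortPolicy

        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> { try { hold.await(); } catch (InterruptedException ignored) {} };

        pool.execute(blocker);                      // step 1: below corePoolSize, core thread created
        System.out.println(pool.getPoolSize());     // 1
        pool.execute(blocker);                      // step 2: core thread busy, task is queued
        System.out.println(pool.getQueue().size()); // 1
        pool.execute(blocker);                      // step 3: queue full, non-core thread created
        System.out.println(pool.getPoolSize());     // 2
        try {
            pool.execute(blocker);                  // step 4: at maximumPoolSize and queue full
        } catch (RejectedExecutionException e) {
            System.out.println("rejected");         // AbortPolicy throws
        }
        hold.countDown();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```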
6. Comparison of the two submission methods
The execute() method submits a task that needs no return value, so there is no way to find out whether the pool executed the task successfully.
The submit() method submits a task that needs a return value. The pool returns a Future object, which can be used to determine whether the task completed successfully, and the return value is obtained through the Future's get() method. get() blocks the current thread until the task finishes, while get(long timeout, TimeUnit unit) blocks for at most the given time and then returns, possibly before the task has finished.
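The difference shows up directly in code. A minimal sketch (the computed values and the 500 ms/50 ms timings are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SubmitVsExecuteDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // execute(): fire-and-forget, no handle to the result.
        pool.execute(() -> System.out.println("executed"));

        // submit(): returns a Future; get() blocks until the task completes.
        Future<Integer> future = pool.submit(() -> 21 * 2);
        System.out.println(future.get()); // 42

        // The timed get() gives up after the timeout instead of waiting forever.
        Future<Integer> slow = pool.submit(() -> { Thread.sleep(500); return 1; });
        try {
            slow.get(50, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("not done yet"); // task is still running
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
    }
}
```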
That is all the content of "Example Analysis of Blocking Queues and Thread Pools in Java". Thank you for reading! I hope what was shared here helps; if you want to learn more, you are welcome to follow the industry information channel.