The implementation principle of Java Thread Pool and its practice in Meituan Business


With the rapid development of the computer industry, Moore's Law is gradually breaking down and multi-core CPUs have become mainstream. Multi-threaded parallel computing has become a basic weapon for developers to improve server performance. The ThreadPoolExecutor class provided by J.U.C (java.util.concurrent) helps developers manage threads and execute parallel tasks easily. Understanding and using thread pools properly is a basic skill every developer must master.

This article starts with a brief description of the concept and use of thread pools, then walks through the thread pool source code to help readers understand its design, and finally returns to practice, describing problems encountered when using thread pools through real cases and presenting a dynamic thread pool solution.

1. Foreword

1.1 What is a thread pool

Thread pool (Thread Pool) is a tool for managing threads based on pooling, which often appears in multithreaded servers, such as MySQL.

Having too many threads brings extra overhead, including the cost of creating, destroying, and scheduling threads, and lowers the overall performance of the computer. A thread pool maintains a set of threads that wait to be assigned tasks that can be executed concurrently. This avoids the cost of creating and destroying a thread for each task, and it prevents the excessive scheduling overhead caused by an exploding thread count, ensuring full use of the CPU cores.

In this article, "thread pool" refers to the ThreadPoolExecutor class provided by the JDK.

Of course, using thread pools can bring a number of benefits:

Reduce resource consumption: reuse created threads through pooling technology to reduce the loss caused by thread creation and destruction.

Improve response time: when a task arrives, it can be executed without waiting for the thread to be created.

Improve the manageability of threads: threads are scarce resources. If they are created without restriction, they will not only consume system resources but also, through unreasonable distribution, cause resource-scheduling imbalance and reduce system stability. A thread pool allows unified allocation, tuning, and monitoring.

Provide more and more powerful features: thread pools are extensible, allowing developers to add more functionality to them. For example, the delayed timed thread pool ScheduledThreadPoolExecutor allows tasks to be deferred or executed on a regular basis.
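For concreteness, here is a minimal usage sketch of ThreadPoolExecutor; the pool sizes are illustrative, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Build a small pool and run one task through it, returning the result.
    static int compute() throws Exception {
        // 2 core threads, at most 4 threads, idle non-core threads die after 60s,
        // and a bounded queue holding up to 10 waiting tasks.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));
        Future<Integer> result = pool.submit(() -> 21 * 2);
        int value = result.get();   // blocks until the task finishes
        pool.shutdown();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // 42
    }
}
```

Note that the caller only hands over a task and gets a Future back; creating, scheduling, and recycling threads are all handled by the pool.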

1.2 What problems do thread pools solve

The core problem solved by thread pools is resource management. In a concurrent environment, the system cannot determine at any given moment how many tasks need to be executed or how many resources need to be invested. This uncertainty causes several problems:

Frequently applying for and destroying resources, and scheduling them, imposes additional overhead that can be very significant.

Without a means to restrain unlimited resource requests, the system risks exhausting its resources.

Without rational internal allocation of resources, the system's stability is reduced.

In order to solve the problem of resource allocation, the thread pool adopts the idea of "Pooling". Pooling, as its name implies, is an idea of unifying the management of resources in order to maximize benefits and minimize risks.

Pooling is the grouping together of resources (assets, equipment, personnel, effort, etc.) for the purposes of maximizing advantage or minimizing risk to the users. The term is used in finance, computing and equipment management. (Wikipedia)

The idea of "pooling" can be applied not only in the computer field, but also in the fields of finance, equipment, personnel management, work management and so on.

In computing, pooling manifests as the unified management of IT resources, including servers, storage, and network resources; by sharing resources, many users benefit from a low investment. Apart from thread pools, other typical pooling strategies include:

Memory pool (Memory Pooling): pre-allocate memory to speed up allocation and reduce memory fragmentation.

Connection pool (Connection Pooling): pre-establish database connections to speed up acquiring a connection and reduce system overhead.

Object pool (Object Pooling): recycle objects to reduce the expensive cost of initializing and releasing resources.

After understanding "what" and "why", let's delve into the internal implementation of the thread pool.

2. Core design and implementation of the thread pool

In the previous article, we learned that thread pool is a tool that helps us manage threads and obtain concurrency through the idea of "pooling", which is reflected in the ThreadPoolExecutor class in Java. So what is its detailed design and implementation? We will introduce it in detail in this chapter.

2.1 Overall design

The core implementation class of thread pool in Java is ThreadPoolExecutor. This chapter analyzes the core design and implementation of Java thread pool based on the source code of JDK 1.8. Let's first take a look at ThreadPoolExecutor's UML class diagram to understand the inheritance relationship of ThreadPoolExecutor.

Figure 1 ThreadPoolExecutor UML class diagram

The top-level interface implemented by ThreadPoolExecutor is Executor. Executor provides one idea: decoupling task submission from task execution. Users need not care how threads are created or scheduled to run tasks; they only provide a Runnable, submitting the task's running logic to the Executor, and the Executor framework takes care of deploying threads and executing tasks. The ExecutorService interface adds capabilities: (1) it expands task execution, adding methods that return a Future for one or a batch of asynchronous tasks; (2) it provides methods to control the thread pool, such as stopping it.

AbstractExecutorService is an abstract class in the upper layer, which concatenates the processes of executing tasks and ensures that the implementation of the lower layer only needs to pay attention to one method of executing the task. The lowest implementation class ThreadPoolExecutor implements the most complex running part, ThreadPoolExecutor will maintain its own life cycle on the one hand, and manage threads and tasks at the same time, so as to make a good combination of the two to execute parallel tasks.

How does ThreadPoolExecutor run, and how does it maintain threads and execute tasks at the same time? Its operating mechanism is shown in the following figure:

Figure 2 ThreadPoolExecutor running process

The thread pool actually constructs a producer-consumer model internally, which decouples the thread and the task and is not directly related to each other, so that the task is well buffered and the thread is reused. The operation of thread pool is mainly divided into two parts: task management and thread management. The task management part acts as a producer, and when the task is submitted, the thread pool will judge the subsequent flow of the task: (1) directly apply for the thread to execute the task; (2) buffer it in the queue to wait for the thread to execute; (3) reject the task. The thread management part is the consumer, which is maintained uniformly in the thread pool, allocates threads according to task requests, and when the thread finishes executing the task, it will continue to obtain new tasks to execute. Finally, when the thread cannot get the task, the thread will be reclaimed.

Next, we will explain the thread pool running mechanism in detail in the following three parts:

How the thread pool maintains its own state.

How the thread pool manages tasks.

How the thread pool manages threads.

2.2 Lifecycle Management

The running state of the thread pool is not set explicitly by the user; it is maintained internally as the pool runs. A single variable inside the thread pool maintains two values: the running state (runState) and the number of threads (workerCount). That is, the pool packs the maintenance of these two key parameters together, as shown in the following code:

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

ctl, an AtomicInteger, is the field that controls the running state of the thread pool and the number of valid threads in it. The high 3 bits hold runState and the low 29 bits hold workerCount, and the two do not interfere with each other. Using one variable to store both values avoids inconsistency when making related decisions, without having to take a lock just to keep the two consistent. Reading the thread pool source code shows that the running state and the thread count frequently need to be checked together. The thread pool also provides methods for users to obtain the current running state and thread count; these use bit operations, which are much faster than ordinary arithmetic.

The internally encapsulated calculations for obtaining the thread pool's lifecycle state and thread count are shown below:

private static int runStateOf(int c)     { return c & ~CAPACITY; } // current running state

private static int workerCountOf(int c)  { return c & CAPACITY; }  // current number of threads

private static int ctlOf(int rs, int wc) { return rs | wc; }       // combine state and count into ctl
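These bit operations can be reproduced in a standalone sketch; the constants mirror the JDK 1.8 layout and the method names follow the source:

```java
// Reproduces the JDK 1.8 ctl layout: high 3 bits = runState, low 29 bits = workerCount.
public class CtlDemo {
    static final int COUNT_BITS = Integer.SIZE - 3;        // 29
    static final int CAPACITY   = (1 << COUNT_BITS) - 1;   // max workerCount: 2^29 - 1
    // A few of the run states, as in ThreadPoolExecutor (RUNNING is negative).
    static final int RUNNING    = -1 << COUNT_BITS;
    static final int SHUTDOWN   =  0 << COUNT_BITS;
    static final int STOP       =  1 << COUNT_BITS;

    static int ctlOf(int rs, int wc) { return rs | wc; }       // pack state + count
    static int runStateOf(int c)     { return c & ~CAPACITY; } // unpack state
    static int workerCountOf(int c)  { return c & CAPACITY; }  // unpack count

    public static void main(String[] args) {
        int c = ctlOf(RUNNING, 5);
        System.out.println(runStateOf(c) == RUNNING); // true
        System.out.println(workerCountOf(c));         // 5
    }
}
```

Because the state lives in the high bits and the count in the low bits, one atomic read of ctl yields both values consistently.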

ThreadPoolExecutor has five running states: RUNNING (accepts new tasks and processes queued tasks), SHUTDOWN (accepts no new tasks but still processes queued tasks), STOP (accepts no new tasks, does not process queued tasks, and interrupts in-progress tasks), TIDYING (all tasks have terminated and workerCount is zero), and TERMINATED (entered after the terminated() hook completes).

Its lifecycle transformation is shown below:

Figure 3 Thread pool life cycle

2.3 Task execution mechanism

2.3.1 Task scheduling

Task scheduling is the main entry of the thread pool, when the user submits a task, how the next task will be executed is determined by this stage. Understanding this part is equivalent to understanding the core operating mechanism of the thread pool.

First of all, the scheduling of all tasks is done by the execute method. Its job is to check the current thread pool's running state, the number of running threads, and its policies, and then decide whether the task should be handed directly to a thread, buffered into the queue, or rejected. The process is as follows:

First, the running state of the thread pool is checked; if it is not RUNNING, the task is rejected directly. The thread pool ensures that tasks execute only in the RUNNING state.

If workerCount < corePoolSize, create and start a thread to execute the newly submitted task.

If workerCount >= corePoolSize and the blocking queue in the thread pool is not full, add the task to the blocking queue.

If workerCount >= corePoolSize, workerCount < maximumPoolSize, and the blocking queue in the thread pool is full, create and start a thread to execute the newly submitted task.

If workerCount >= maximumPoolSize and the blocking queue in the thread pool is full, handle the task according to the rejection policy; the default is to throw an exception directly.
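The four rules above can be modeled as a small, side-effect-free function. This is a simplified sketch of the decision logic, not the real execute() source, which also re-checks pool state after enqueueing:

```java
// A simplified model of ThreadPoolExecutor's dispatch rules.
public class DispatchModel {
    static String dispatch(boolean running, int workerCount,
                           int corePoolSize, int maximumPoolSize, boolean queueFull) {
        if (!running) return "reject";                          // not RUNNING: reject
        if (workerCount < corePoolSize) return "new-thread";    // below core: add a thread
        if (!queueFull) return "enqueue";                       // core full, queue has room
        if (workerCount < maximumPoolSize) return "new-thread"; // queue full, below max
        return "reject";                                        // saturated: rejection policy
    }

    public static void main(String[] args) {
        System.out.println(dispatch(true, 1, 2, 4, false)); // new-thread
        System.out.println(dispatch(true, 2, 2, 4, false)); // enqueue
        System.out.println(dispatch(true, 4, 2, 4, true));  // reject
    }
}
```

Walking a few (workerCount, queueFull) combinations through this function is a quick way to internalize why the queue fills up before the pool grows past corePoolSize.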

The execution process is shown in the following figure:

Figure 4 Task scheduling process

2.3.2 Task buffering

The task-buffering module is the core of the thread pool's ability to manage tasks. The essence of a thread pool is managing tasks and threads, and the key idea is to decouple the two so that they are not directly tied together, which makes the subsequent allocation work possible. The thread pool implements the producer-consumer pattern through a blocking queue: the queue caches tasks, and worker threads take tasks from it.

The blocking queue (BlockingQueue) is a queue that supports two additional operations: when the queue is empty, a thread taking an element waits until the queue becomes non-empty; when the queue is full, a thread storing an element waits until the queue has space. Blocking queues are often used in producer-consumer scenarios, where producers are the threads adding elements to the queue and consumers are the threads taking elements out; the queue is the container producers store elements in and consumers take them from.

The following figure shows thread 1 adding elements to the blocking queue, while thread 2 removes elements from the blocking queue:

Figure 5 blocking queue

Different task-access policies can be implemented by using different queues. The members of the JDK's blocking queue family include ArrayBlockingQueue, LinkedBlockingQueue, SynchronousQueue, PriorityBlockingQueue, DelayQueue, LinkedTransferQueue, and LinkedBlockingDeque.
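A minimal producer-consumer sketch using ArrayBlockingQueue, one of these implementations, illustrates the two blocking operations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    // One producer puts numbers; the calling thread consumes them.
    // take() blocks while the queue is empty; put() blocks while it is full.
    static int consumeSum() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity 2
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 3; i++) queue.put(i); // third put may block until a take
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 3; i++) sum += queue.take(); // blocks until elements arrive
        producer.join();
        return sum; // 1 + 2 + 3 = 6
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeSum()); // 6
    }
}
```

In the thread pool, execute() plays the producer role and the worker threads' getTask() plays the consumer role over exactly this kind of queue.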

2.3.3 Task request

As can be seen from the task-allocation section above, a task may be executed in two ways: either it is executed directly by a newly created thread, or a thread takes it from the task queue and executes it, with idle threads that finish a task applying to the queue for another. The first case occurs only when a thread is first created; the second is how threads obtain the vast majority of tasks.

Threads need to continuously fetch tasks from the task-caching module and execute them. This part helps threads obtain tasks from the blocking queue, realizing the communication between the thread-management module and the task-management module. The strategy is implemented by the getTask method, whose execution flow is shown below:

Figure 6 get the task flow chart

getTask makes several judgments to keep the number of threads consistent with the thread pool's state. If the pool should not hold as many threads as it currently does, getTask returns null. A worker thread (Worker) continuously receives new tasks to execute; when it can no longer receive a task, its recycling begins.

2.3.4 Task rejection

The task-rejection module is the thread pool's protective part. A thread pool has a maximum capacity: when the pool's task cache queue is full and the number of threads has reached maximumPoolSize, new tasks must be rejected, and a task-rejection policy is adopted to protect the pool.

The rejection policy is an interface, designed as follows:

public interface RejectedExecutionHandler {
    void rejectedExecution(Runnable r, ThreadPoolExecutor executor);
}

You can customize a rejection policy by implementing this interface, or choose one of the four policies provided by the JDK: AbortPolicy (the default, which throws RejectedExecutionException), CallerRunsPolicy (which runs the task in the submitting thread), DiscardPolicy (which silently drops the task), and DiscardOldestPolicy (which drops the oldest queued task and retries the submission).
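A sketch of a custom handler shows the interface in action. The counting handler here is purely illustrative; with one thread and a queue of capacity 1, the third submission has nowhere to go and is rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RejectDemo {
    // A custom handler that counts rejections instead of throwing
    // (the default AbortPolicy would throw RejectedExecutionException).
    static int rejectedCount() throws InterruptedException {
        AtomicInteger rejected = new AtomicInteger();
        RejectedExecutionHandler countingHandler =
                (r, executor) -> rejected.incrementAndGet();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1), countingHandler);
        CountDownLatch block = new CountDownLatch(1);
        pool.execute(() -> {                       // occupies the single thread
            try { block.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> {});                    // fills the queue
        pool.execute(() -> {});                    // no room left: rejected
        block.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected.get(); // 1
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(rejectedCount()); // 1
    }
}
```

The same lambda shape works for logging, metrics, or falling back to a secondary queue.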

2.4 Worker thread management

2.4.1 Worker thread

To master thread state and maintain thread life cycles, the thread pool designs the worker thread Worker. Here is part of its code:

private final class Worker extends AbstractQueuedSynchronizer implements Runnable {
    final Thread thread;   // the thread held by this Worker
    Runnable firstTask;    // the initial task to run; can be null
}

Worker implements the Runnable interface and holds a thread (thread) and an initial task (firstTask). thread is created through the ThreadFactory when the constructor is called and is used to execute tasks. firstTask saves the first task passed in and can be null: if it is non-null, the thread executes this task immediately at startup, corresponding to the creation of a core thread; if it is null, the thread fetches tasks from the task queue (workQueue), corresponding to the creation of a non-core thread.

The model of Worker performing tasks is shown in the following figure:

Figure 7 Worker execution task

The thread pool needs to manage the life cycle of the thread and needs to be recycled when the thread is not running for a long time. The thread pool uses a Hash table to hold the thread's reference, which controls the thread's life cycle through operations such as adding and removing references. What is important at this time is how to determine whether the thread is running or not.

Worker inherits AQS and uses it to implement an exclusive-lock function. It deliberately uses AQS rather than the reentrant ReentrantLock: the non-reentrant property is needed to reflect the thread's current execution state.

Once the lock method acquires the exclusive lock, the thread is currently executing a task.

A thread that is executing a task should not be interrupted.

If a thread is not holding the exclusive lock, it is idle and not processing a task, and it may be interrupted.

The thread pool calls the interruptIdleWorkers method to interrupt the idle thread when executing the shutdown method or the tryTerminate method, and the interruptIdleWorkers method uses the tryLock method to determine whether the thread in the thread pool is idle; if the thread is idle, it can be safely recycled.

This feature is used in the thread recycling process, which is shown in the following figure:

Figure 8 Thread pool recovery process

2.4.2 Worker thread addition

Threads are added through the addWorker method. The method's job is simply to add one thread; it does not consider at which stage of the pool's life the thread is added, since the thread-allocation strategy was decided in the previous step. addWorker only adds the thread, starts it, and finally returns whether it succeeded. The method takes two parameters: firstTask and core. firstTask specifies the first task the new thread executes and can be null. core set to true means the check before adding the thread is whether the number of currently active threads is less than corePoolSize; false means the check is against maximumPoolSize. The execution flow is as follows:

Figure 9 Application thread execution flowchart

2.4.3 Worker thread recycling

The destruction of threads in the thread pool relies on the JVM's automatic garbage collection; the pool's job is to maintain, according to its current state, a certain number of thread references so those threads are not reclaimed by the JVM. When the thread pool decides a thread should be recycled, it simply removes the reference. After a Worker is created, it polls continuously and fetches tasks to execute. Core threads can wait indefinitely for a task, while non-core threads wait only a limited time. When a Worker cannot get a task, that is, when the fetched task is null, the loop ends and the Worker actively removes its own reference from the pool:

try {
    while (task != null || (task = getTask()) != null) {
        // execute the task
    }
} finally {
    processWorkerExit(w, completedAbruptly); // actively remove this worker when no task can be obtained
}

Thread recycling is done in the processWorkerExit method.

Figure 10 Thread destruction process

In fact, once this method moves the thread's reference out of the thread pool, the thread's destruction is essentially done. However, because many situations can trigger a thread's destruction, the thread pool must determine what caused it, whether the pool's current state should change, and whether threads should be reallocated under the new state.

2.4.4 Worker thread executes a task

The run method in the Worker class calls the runWorker method to execute the task, and the runWorker method executes as follows:

In a while loop, tasks are fetched continuously through the getTask() method.

getTask() fetches tasks from the blocking queue.

If the thread pool is stopping, ensure that the current thread is in the interrupted state; otherwise, ensure that it is not interrupted.

Execute the task.

If the getTask result is null, exit the loop, execute the processWorkerExit() method, and destroy the thread.

The execution process is shown in the following figure:

Figure 11. Task execution process

3. Thread pool practice in business

3.1 Business background

In today's Internet industry, in order to maximize the multi-core performance of CPU, the ability of parallel computing is indispensable. Getting concurrency through thread pool management threads is a very basic operation, so let's look at two typical scenarios where thread pools are used to obtain concurrency.

Scenario 1: quick response to user requests

Description: a real-time request initiated by a user, and the service pursues response time. For example, if the user wants to view the information of a product, then we need to aggregate a series of information of the product dimension, such as the price, discount, inventory, pictures and so on, and show it to the user.

Analysis: from the user-experience perspective, the faster this result comes back the better; if a page takes too long to render, the user may give up viewing the product. But user-facing aggregation is usually complex, with cascading and multi-level calls, so business developers often choose the simple approach of packaging the calls into tasks and executing them in parallel with a thread pool, shortening the overall response time. Note the key consideration for the thread pool here: this scenario prizes maximum response speed to satisfy the user, so you should not set up a queue to buffer concurrent tasks; instead, raise corePoolSize and maxPoolSize to create as many threads as possible to execute the tasks quickly.

Figure 12 parallel execution of tasks to improve task response speed
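The response-priority configuration described in Scenario 1 can be sketched as follows; the sizes and the CallerRunsPolicy safety valve are illustrative assumptions, not recommendations:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// "Respond fast" sketch: no buffering (SynchronousQueue) and a generous maximum,
// so a submitted subtask either runs immediately or triggers the fallback policy.
public class FastResponsePool {
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                8, 64,                    // illustrative core/max sizes
                60L, TimeUnit.SECONDS,    // idle threads above core die after 60s
                new SynchronousQueue<>(), // hand-off queue: never buffers a task
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.getQueue().remainingCapacity()); // 0: no buffering
        pool.shutdown();
    }
}
```

SynchronousQueue has no capacity at all, which is exactly the "do not queue, execute now" behavior this scenario calls for.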

Scenario 2: fast processing of batch tasks

Description: a large number of offline computing tasks that need to be performed quickly. For example, to count a certain report, we need to calculate which goods in various stores in the country have certain attributes for the analysis of subsequent marketing strategies, then we need to query all the goods in all stores in the country and record the goods with certain attributes. Then quickly generate the report.

Analysis: this scenario requires executing a large number of tasks, and we want them finished as soon as possible, so a multithreaded parallel-computing strategy should also be used. However, unlike the response-speed-priority scenario, this work need not finish instantly; the focus is on how limited resources can process as many tasks as possible per unit time, that is, throughput priority. So queues should be set up to buffer concurrent tasks, and corePoolSize should be adjusted to an appropriate number of task-processing threads. Setting too many threads here can cause frequent thread context switching, slowing task processing and reducing throughput.

Figure 13 parallel execution of tasks to improve the execution speed of batch tasks
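The throughput-priority configuration in Scenario 2 can be sketched as follows; a bounded queue buffers the batch, and the sizes are illustrative assumptions, not tuned values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// "Throughput first" sketch: a moderate fixed thread count to limit context
// switching, plus a bounded queue that buffers the batch of tasks.
public class BatchPool {
    static long runBatch(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.SECONDS,       // fixed pool of 4 threads
                new ArrayBlockingQueue<>(1000));   // bounded buffer for the batch
        AtomicLong done = new AtomicLong();
        for (int i = 0; i < tasks; i++) {
            pool.execute(done::incrementAndGet);   // stand-in for a real batch task
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBatch(100)); // 100
    }
}
```

Declaring the queue capacity up front is what prevents the unlimited task accumulation described in the failure cases below.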

3.2 practical problems and schemes

The core problem with using thread pools is that their parameters are hard to configure well. On the one hand, the running mechanism of thread pools is not widely understood, so reasonable configuration relies heavily on a developer's personal experience and knowledge. On the other hand, thread pool execution is closely tied to task type, with large differences between IO-intensive and CPU-intensive tasks, which leaves the industry without mature heuristics for developers to consult.

There are many records within the company about the failures caused by unreasonable thread pool configuration. Here are some examples:

Case 1: in 2018, the XX page display API experienced large-scale call degradation.

Incident description: the XX page display API produced a large number of degraded calls, on the order of tens to hundreds.

Cause: the interface's internal logic used a thread pool for parallel computation. Because the incoming traffic was not estimated, the core and maximum thread counts were set too small, large numbers of RejectedExecutionException were thrown, and the interface's degradation condition was triggered, as shown below:

Figure 14 Core thread count set too small triggers RejectedExecutionException

Case 2: in 2018, an XX business service became unavailable, an S2-level failure.

Incident description: the service provided by the XX business executed too slowly, causing upstream services to time out as a whole and large numbers of downstream service calls to fail.

Cause: the service's request-handling logic used a thread pool for resource isolation. The queue was set too long and the maximum thread count therefore never took effect; as requests increased, large numbers of tasks piled up in the queue, task execution times grew, and calls to downstream services eventually failed in large numbers by timing out. The schematic is as follows:

Figure 15 An over-long queue and an under-sized corePoolSize cause slow task execution

Thread pools are used in the business, and improper use can lead to failures, so how can we make better use of thread pools? To solve this problem, let's extend it in the following directions:

1. Is it possible not to use thread pools?

Going back to the original question, the business uses thread pools to obtain concurrency. Is there any other alternative to obtaining concurrency? We tried to investigate some other options:

Taken together, these alternatives can improve parallel-task performance in some cases, but the key problem to solve here is how to obtain concurrency more easily and safely. Moreover, the Actor model sees little real-world use outside Scala, and the coroutine frameworks maintained for Java are not yet mature. None of the three is easy to adopt at this stage, nor do they solve the business's current problems.

2. Pursue more rational parameter settings?

Is there a formula that allows developers to easily figure out what parameters a thread pool in a scenario should be?

With this in mind, we have investigated some thread pool parameter configuration schemes in the industry:

After investigating the above industry schemes, we have not come up with a general thread pool calculation method. The execution of concurrent tasks is related to the task type. There is a great difference between IO-intensive and CPU-intensive tasks, but this proportion is difficult to be estimated reasonably, which makes it difficult to have a simple and effective general formula to help us calculate the results directly.

3. Make thread pool parameters dynamic?

Even after careful evaluation, we still cannot guarantee that suitable parameters will be calculated in one shot. Can we instead reduce the cost of modifying thread pool parameters, so that they can at least be adjusted quickly during a failure to shorten recovery time? With this in mind, can we migrate thread pool parameters from the code to a distributed configuration center, so that they can be configured dynamically and take effect immediately? The parameter-modification flows before and after dynamic parameters compare as follows:

Figure 16 comparison between old and new processes for dynamically modifying thread pool parameters

Based on the comparison of the above three directions, we can see that the dynamic direction of parameters is simple and effective.

3.3 dynamic thread pool

3.3.1 overall design

The core design of a dynamic thread pool includes the following three aspects:

Simplify thread pool configuration: ThreadPoolExecutor has 8 construction parameters, but the core ones are 3: corePoolSize, maximumPoolSize, and workQueue. These largely determine the thread pool's task-allocation and thread-allocation strategies. In practice there are two main scenarios for obtaining concurrency: (1) executing subtasks in parallel to improve response speed, where a synchronous queue should be used so tasks are never cached but executed immediately; (2) executing large numbers of tasks in parallel to improve throughput, where a bounded queue should buffer batches of tasks, with a declared capacity to prevent unlimited accumulation. So the thread pool only needs to expose these three key parameters and a choice of two queue types to satisfy the vast majority of business needs: Less is More.

Make parameters dynamically modifiable: to solve the problems of mismatched parameters and the high cost of changing them, the thread pool is wrapped, building on the Java thread pool's high extensibility, so it can listen for messages from outside and modify its configuration according to those messages. The configuration is placed on a platform, letting developers conveniently view and modify it.

Add thread pool monitoring: what cannot be observed cannot be improved. Monitoring the life cycle of tasks executed by the thread pool helps developers understand its status.

Fig. 17 overall design of dynamic thread pool

3.3.2 functional architecture

Dynamic thread pools provide the following functions:

Dynamic parameter adjustment: supports thread pool parameter dynamic adjustment and interface operation, including modifying thread pool core size, maximum core size, queue length, etc., and takes effect in time after parameter modification.

Task monitoring: support Transaction monitoring of application granularity, thread pool granularity and task granularity; you can see the task execution status of thread pool, maximum task execution time, aPCge task execution time, 95max 99 line, and so on.

Load alarm: when the thread pool queue task backlog reaches a certain value, the application developer will be notified through the elephant (Meituan's internal communication tool); when the thread pool load reaches a certain threshold, the application developer will be notified by the elephant.

Operational monitoring: creating / modifying and deleting thread pools will notify the developer of the application.

Operation log: lets you view the modification history of thread pool parameters: who changed them, when, and what the values were before the change.

Permission check: only the developers responsible for an application can modify its thread pool parameters.

Figure 18 dynamic thread pool functional architecture

Parameter dynamic

The JDK's native thread pool ThreadPoolExecutor provides the following public setter methods, as shown in the figure below:

Figure 19 JDK thread pool parameter setting interface

JDK allows users to dynamically change a thread pool's core policies through the ThreadPoolExecutor instance. Taking setCorePoolSize as an example: when this method is called at run time, the thread pool overwrites the original corePoolSize and applies different strategies depending on how the new value compares with the current state. If the new value is less than the current number of worker threads, there are surplus workers, so an interrupt request is sent to the currently idle workers to reclaim them, and the remaining surplus workers are reclaimed the next time they become idle. If the new value is greater than the original value and there are tasks waiting in the queue, the thread pool creates new worker threads to execute the queued tasks. The setCorePoolSize flow is as follows:

Figure 20 setCorePoolSize method execution flow

Internally, the thread pool handles these changes smoothly against its current state; the other setters work similarly and are omitted here for reasons of space. The key point is that, based on these public methods, we only need to keep a reference to the ThreadPoolExecutor instance and, when a change is needed, fetch the instance and modify its parameters. Based on this idea, we implemented dynamic thread pool parameters that can be configured and modified from the management platform. The effect is shown below:

Figure 21 dynamically modifies thread pool parameters
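The "keep the instance, modify on demand" idea can be sketched as a small registry. The registry, the pool names, and the `resize` method are all hypothetical illustrations of what a "save" on the management console would ultimately invoke; only the `setCorePoolSize`/`setMaximumPoolSize` calls are the real JDK API. Note the ordering: when growing, the maximum is raised first so that `corePoolSize <= maximumPoolSize` holds at every step (newer JDKs enforce this in the setters).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolRegistry {
    private static final Map<String, ThreadPoolExecutor> POOLS = new ConcurrentHashMap<>();

    static void register(String name, ThreadPoolExecutor pool) {
        POOLS.put(name, pool);
    }

    // What saving a config change on the management platform would invoke.
    static synchronized void resize(String name, int core, int max) {
        ThreadPoolExecutor pool = POOLS.get(name);
        if (pool == null) throw new IllegalArgumentException("unknown pool: " + name);
        if (max >= pool.getCorePoolSize()) {
            // Growing (or max still above current core): raise max first.
            pool.setMaximumPoolSize(max);
            pool.setCorePoolSize(core);
        } else {
            // Shrinking below the current core: lower core first.
            pool.setCorePoolSize(core);
            pool.setMaximumPoolSize(max);
        }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 8, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(200));
        register("message-send-pool", pool);     // hypothetical pool name
        resize("message-send-pool", 10, 20);     // takes effect immediately, no restart
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize()); // 10/20
        pool.shutdown();
    }
}
```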

On the management platform, users can find a specific thread pool by name and modify its parameters; changes take effect in real time after saving. Currently supported dynamic parameters include the core pool size, maximum pool size, and queue length. The interface also lets users configure whether to enable alarms, the queue-backlog alarm threshold, the activity alarm threshold, and so on. Monitoring and alarms are covered in the next section.

Thread pool monitoring

In addition to dynamic parameters, making better use of thread pools requires being aware of their health: What is the pool's current load? Are enough resources allocated? How are tasks executing? Are they long tasks or short tasks?

Based on these questions, the dynamic thread pool provides multi-dimensional monitoring and alarm capabilities, including thread pool activity, task execution Transactions (frequency and latency), Reject exceptions, and the thread pool's internal statistics. These not only help users analyze thread pool usage from multiple dimensions, but also notify them when problems arise, so that failures can be avoided or recovered from more quickly.

1. Load monitoring and alarm

The core question for thread pool load is whether enough resources have been allocated given the current parameters. We can look at this both before and during an overload. Beforehand, the thread pool defines an "activity" metric so that users become aware of load problems before a Reject exception occurs. Thread pool activity is calculated as: activity = activeCount / maximumPoolSize. The closer the number of active threads gets to maximumPoolSize, the higher the load.
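The activity formula maps directly onto two ThreadPoolExecutor getters. A minimal sketch follows; the 80% alarm threshold in the usage is an illustrative choice, not a value from the article.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolActivity {
    // activity = activeCount / maximumPoolSize
    static double activity(ThreadPoolExecutor pool) {
        return (double) pool.getActiveCount() / pool.getMaximumPoolSize();
    }

    // True when activity reaches the alarm threshold (e.g. 0.8 for 80%).
    static boolean overloaded(ThreadPoolExecutor pool, double threshold) {
        return activity(pool) >= threshold;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 10, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(100));
        // No tasks submitted yet, so no active threads:
        System.out.println(activity(pool));        // 0.0
        System.out.println(overloaded(pool, 0.8)); // false
        pool.shutdown();
    }
}
```

In a real deployment this check would run periodically (or on a metrics-collection tick) and push an alarm when the threshold is crossed.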

During execution, thread pool overload can be judged from two signals: a Reject exception has occurred, or there are tasks waiting in the queue (a custom threshold is supported). When either situation occurs, an alarm is triggered and pushed through Daxiang to the owners associated with the service.

Figure 22 Elephant alarm notice
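Both in-flight signals can be captured with nothing but the public API: rejections via a wrapping RejectedExecutionHandler, and queue backlog via `getQueue().size()`. The sketch below is illustrative; the `alert` hook stands in for pushing a Daxiang message, and the `DiscardPolicy` fallback is an arbitrary choice for the demo.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class OverloadWatch {
    static final AtomicLong rejections = new AtomicLong();

    // Signal 1: count every rejection before delegating to the real policy.
    static RejectedExecutionHandler countingHandler(RejectedExecutionHandler delegate) {
        return (task, pool) -> {
            rejections.incrementAndGet();
            alert("task rejected by pool");
            delegate.rejectedExecution(task, pool);
        };
    }

    // Signal 2: queue backlog against a configurable threshold.
    static void checkBacklog(ThreadPoolExecutor pool, int backlogThreshold) {
        if (pool.getQueue().size() >= backlogThreshold) {
            alert("queue backlog reached " + pool.getQueue().size());
        }
    }

    // Stand-in for pushing a message to the service owners.
    static void alert(String msg) { System.out.println("[ALERT] " + msg); }

    public static void main(String[] args) {
        // 1 thread + queue capacity 1, so 4 slow tasks force rejections.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                countingHandler(new ThreadPoolExecutor.DiscardPolicy()));
        for (int i = 0; i < 4; i++) pool.execute(() -> sleep(200));
        System.out.println(rejections.get() >= 1); // true
        checkBacklog(pool, 1);
        pool.shutdown();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```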

2. Task-level fine-grained monitoring

In traditional usage, the execution of tasks inside a thread pool is opaque to the user. For example, in one business scenario a developer uses a single thread pool for two kinds of tasks: sending messages and sending SMS. The user has no intuitive sense of the actual frequency and duration of these two task types; they may well be unsuitable to share one pool, yet without that visibility there is nothing to optimize against. The dynamic thread pool implements task-level instrumentation internally and allows different business tasks to be given business-meaningful names; Transaction management inside the pool is keyed by that name. With this feature, users can see task-level execution inside the thread pool, broken down by business. The task monitoring view is shown below:

Figure 23 Thread pool task execution monitoring
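One way to implement such task-level instrumentation is to wrap each Runnable with a named, timed decorator. This is a minimal sketch: the in-memory stat maps stand in for a real Transaction/metrics reporter, and the task names are hypothetical examples.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class NamedTask implements Runnable {
    // Per-name execution stats; a real system would report these as Transactions.
    static final Map<String, LongAdder> totalMillis = new ConcurrentHashMap<>();
    static final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    private final String name;       // business-meaningful task name
    private final Runnable delegate;

    NamedTask(String name, Runnable delegate) {
        this.name = name;
        this.delegate = delegate;
    }

    @Override public void run() {
        long start = System.nanoTime();
        try {
            delegate.run();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            totalMillis.computeIfAbsent(name, k -> new LongAdder()).add(elapsedMs);
            counts.computeIfAbsent(name, k -> new LongAdder()).increment();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Two business task types sharing one pool, now distinguishable:
        pool.submit(new NamedTask("send-message", () -> {}));
        pool.submit(new NamedTask("send-sms", () -> {}));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(counts.get("send-message").sum()); // 1
        System.out.println(counts.get("send-sms").sum());     // 1
    }
}
```

From these per-name counts and cumulative times, frequency and average latency per business task type fall out directly.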

3. Real-time view of run-time status

Using the several public getter methods provided by the JDK's native thread pool ThreadPoolExecutor, users can read the running state and parameters of the current thread pool, as shown in the figure below:

Figure 24 Real-time operation of thread pool

The dynamic thread pool wraps these interfaces into a real-time status view, from which users can see the pool's live state: how many worker threads there currently are, how many tasks have been executed, how many tasks are waiting in the queue, and so on. The effect is shown in the following figure:

Figure 25 Real-time operation of thread pool
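Such a status view needs nothing beyond the public getters; a sketch of the snapshot it might render (field layout is illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSnapshot {
    // Everything shown comes straight from ThreadPoolExecutor's public API.
    static String snapshot(ThreadPoolExecutor p) {
        return String.format(
            "core=%d max=%d poolSize=%d active=%d queued=%d completed=%d largest=%d",
            p.getCorePoolSize(), p.getMaximumPoolSize(), p.getPoolSize(),
            p.getActiveCount(), p.getQueue().size(),
            p.getCompletedTaskCount(), p.getLargestPoolSize());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 4, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(100));
        pool.submit(() -> {});
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(snapshot(pool));
    }
}
```

Note that `getActiveCount` and `getQueue().size()` are approximations taken while the pool runs, which is fine for monitoring but not for exact accounting.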

3.4 Summary of practice

Facing the practical problems we encountered with thread pools in our business, we first went back to the concurrency problem itself to consider whether something could replace the thread pool, and we also tried to make thread pool parameter settings provably reasonable. However, given the complexity and maintenance cost of landing the industry's schemes and the uncertainty of real runtime environments, both of those directions proved difficult.

In the end, we returned to the direction of dynamic thread pool parameters and arrived at a solution that solves our business problems. Although in essence we have not escaped using thread pools, we struck a good balance between cost and benefit. The cost is the modest effort of implementing dynamic configuration and monitoring; the benefit is that, without overturning how thread pools were originally used, the probability of failure is reduced in two ways: by lowering the cost of modifying thread pool parameters, and by multi-dimensional monitoring. We hope the dynamic thread pool approach presented in this article is helpful.


V. A brief introduction to the author

Zhiyuan joined Meituan-Dianping in 2018 and works as a backend development engineer in the in-store comprehensive R&D center.

Lu Chen joined Meituan-Dianping in 2015 and works as a backend technical expert in the in-store comprehensive R&D center.
