Illustration | you call this a thread pool?

2025-03-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

This article comes from the WeChat official account Low Concurrent Programming (ID: dipingfa); original title: "Illustration | You call this shit a thread pool?"; author: Flash.

Xiaoyu: Flash, I've been reading about thread pools lately and got confused by all seven or eight parameters. Can you walk me through them?

Flash: No problem, this is my specialty. Let's start with the simplest case: suppose you have a piece of code that you want to execute asynchronously. Would you write something like this?

new Thread(r).start();

Xiaoyu: Well, yes, that seems to be the simplest way to write it.

Flash: That certainly works. But you write it this way, Lao Wang writes it this way, Lao Zhang writes it this way, and soon the codebase is littered with ad-hoc thread creation. Could you write one unified utility class for everyone to call?

Xiaoyu: Yes, a unified utility class would be more elegant.

Flash: So if you were to design this utility class, how would you write it? I'll give you the interface first; you implement it.

public interface Executor {
    public void execute(Runnable r);
}

Xiaoyu: Emmm, I might first define a few member variables, such as the core thread count, the maximum thread count... you know, all those parameters.

Flash: Stop! Stop! Xiaoyu, you've been thoroughly poisoned by interview handbooks. Forget all of those concepts for now. If I ask you to write the simplest possible utility class, what is your first instinct?

First edition

Xiaoyu: I might do this.

// First edition: directly create a new thread to run the task
class FlashExecutor implements Executor {
    public void execute(Runnable r) {
        new Thread(r).start();
    }
}

Flash: Well, well, your idea is actually fine.

Xiaoyu: Ah, isn't this way too naive? I thought you'd scold me.

Flash: Doug Lea gives exactly this example in the JDK source comments; it is the most fundamental form. Now try to optimize it from this starting point.

Second edition

Xiaoyu: How can I optimize it? Haven't we already achieved asynchronous execution through a utility class?

Flash: Let me ask you this: if 10,000 callers submit tasks through this utility class, it will create 10,000 threads. That's clearly inappropriate. Can you control the number of threads?

Xiaoyu: That's not hard. I can throw the task r into a task queue, then start only one thread, call it the Worker thread, which continuously takes tasks from the queue and executes them. That way, no matter how many times callers call execute, there is only ever one Worker thread running.

Flash: Great, this design has three major implications:

1. Control the number of threads.

2. The queue not only acts as a buffer, but also decouples task submission from execution.

3. Most importantly, it eliminates the overhead of creating and destroying threads repeatedly each time.
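The second edition the two are describing can be sketched like this (a minimal sketch; the class name and implementation details are my own, not from the article):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;

// Second edition sketch: one long-lived Worker thread drains a shared task queue.
class SingleWorkerExecutor implements Executor {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    SingleWorkerExecutor() {
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    tasks.take().run(); // block until a task arrives, then run it
                } catch (InterruptedException e) {
                    return; // interrupted: shut the Worker down
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    @Override
    public void execute(Runnable r) {
        tasks.offer(r); // submission only enqueues; the Worker runs it later
    }
}
```

Note that execute returns immediately: the queue decouples submission from execution, and the single Worker is reused for every task.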

Xiaoyu: Wow, really? Such a small change carries so much meaning.

Flash: Of course. But isn't a single background Worker thread a bit too few? And what happens when the task queue is full?

Third edition

Xiaoyu: Oh, right, a single thread struggles in some scenarios, so should I increase the number of Worker threads?

Flash: Yes, the number of Worker threads should be increased, but the exact number should be decided by the caller. Let's call it the core thread count, corePoolSize.

Xiaoyu: Okay, then I'd design it this way.

1. When the thread pool is initialized, start corePoolSize Worker threads right away.

2. Each Worker loops forever, taking tasks from the queue and executing them.

3. The execute method still puts the task straight into the queue, but when the queue is full the task is simply discarded.
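Xiaoyu's three-point design might look like this (again a sketch under the same assumptions; the names are mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;

// Third edition sketch: corePoolSize Workers are started up front,
// and a task is silently discarded when the bounded queue is full.
class CoreWorkerExecutor implements Executor {
    private final BlockingQueue<Runnable> tasks;

    CoreWorkerExecutor(int corePoolSize, int queueCapacity) {
        tasks = new ArrayBlockingQueue<>(queueCapacity);
        for (int i = 0; i < corePoolSize; i++) {
            Thread worker = new Thread(() -> {
                while (true) {
                    try {
                        tasks.take().run(); // endless loop: take and execute
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    @Override
    public void execute(Runnable r) {
        tasks.offer(r); // returns false (task dropped) when the queue is full
    }
}
```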

Flash: Perfect. Here, have a Ferrero Rocher as a reward.

Xiaoyu: Haha, thanks, let me enjoy it for a moment.

Flash: OK, you eat while I talk. Now we have a thread pool that is at least not ugly, but there are still a few flaws. For example, at initialization we create a bunch of Worker threads that spin idly, which is wasteful if no tasks have been submitted yet.

Xiaoyu: Oh, so it is!

Flash: Also, when the queue is full you simply discard the new task. That's a bit rude. Can you let the caller decide how to handle it?

Xiaoyu: Oh, I didn't expect a gentle girl like me to write such crude code.

Flash: Well, finish your Ferrero first.

Fourth edition

Xiaoyu: I'm done eating, but now my brain is busy digesting. Can you analyze it for me?

Flash: OK, let's make the following improvements.

1. Create Workers on demand: instead of creating corePoolSize Worker threads when the pool is initialized, create them gradually as callers submit tasks; once the count reaches corePoolSize, stop creating threads and put tasks straight into the queue. We then need an attribute to record how many Worker threads have been created, called workCount.

2. Add a rejection policy: add a constructor parameter whose type is the interface RejectedExecutionHandler; the caller chooses the implementation class, so its rejectedExecution method runs whenever a task submission fails.

3. Add a thread factory: add a constructor parameter whose type is the interface ThreadFactory; when adding a Worker thread, instead of calling new Thread directly, call the newThread method of the ThreadFactory implementation passed in by the caller.

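A sketch of those three improvements (assumptions: the class name and the synchronized bookkeeping are mine, and a plain Consumer&lt;Runnable&gt; stands in for RejectedExecutionHandler, whose real rejectedExecution method also takes the pool itself as a parameter):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadFactory;
import java.util.function.Consumer;

// Fourth edition sketch: Workers are created lazily, up to corePoolSize,
// and the caller supplies both the ThreadFactory and the rejection callback.
class LazyExecutor implements Executor {
    private final int corePoolSize;
    private final BlockingQueue<Runnable> workQueue;
    private final ThreadFactory threadFactory;
    private final Consumer<Runnable> handler;
    private int workCount; // Workers created so far (guarded by synchronized)

    LazyExecutor(int corePoolSize, BlockingQueue<Runnable> workQueue,
                 ThreadFactory threadFactory, Consumer<Runnable> handler) {
        this.corePoolSize = corePoolSize;
        this.workQueue = workQueue;
        this.threadFactory = threadFactory;
        this.handler = handler;
    }

    @Override
    public synchronized void execute(Runnable r) {
        if (workCount < corePoolSize) {
            workCount++;
            addWorker(r);          // below the core size: start a new Worker
        } else if (!workQueue.offer(r)) {
            handler.accept(r);     // queue full: let the caller decide
        }
    }

    private void addWorker(Runnable first) {
        Thread worker = threadFactory.newThread(() -> {
            Runnable task = first; // run the submitted task first, then drain the queue
            while (task != null) {
                task.run();
                try {
                    task = workQueue.take();
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```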

Xiaoyu: Wow, you're amazing. This version must be perfect, right?

Flash: No, no, no, it's still far from perfect. Think about the next improvement yourself. I'll give you a hint:

elastic thinking

Fifth edition

Xiaoyu: Elastic thinking? Haha, Flash, your jargon is getting less and less human.

Flash: Cough, cough.

Xiaoyu: Oh, you must mean that my code lacks elasticity, right? But what does elasticity mean here?

Flash: Simply put, elasticity in this scenario means handling two cases well: when tasks are submitted very frequently, and when tasks are submitted very rarely. Does your current code have problems in either case?

Xiaoyu: Emmm, let me think. In my thread pool, when the volume of submitted tasks suddenly spikes and the Worker threads and the queue are all full, we can only apply the rejection policy, which effectively means discarding tasks.

Flash: Yes

Xiaoyu: That is indeed too harsh. But I think the caller could solve it by setting a large core thread count, corePoolSize.

Flash: They could, but in typical scenarios the QPS peak is very short, and setting a large core thread count just for that brief peak is a waste of resources.

Xiaoyu: Right, so what should I do? Too big doesn't work, and too small doesn't work either.

Flash: We can introduce a new attribute called maximumPoolSize. When the core threads and the queue are both full, new tasks can still be handled by creating additional Worker threads (call them non-core threads), until the Worker count reaches maximumPoolSize. This absorbs temporary peaks without forcing the user to set an oversized core thread count.

Xiaoyu: Oh, I sort of get it, but how does it work in practice?

Flash: Your imagination needs work, Xiaoyu. Watch the following walkthrough.

1. At the beginning, as in the previous version, while workCount < corePoolSize, execute the task by creating a new Worker.

2. When workCount >= corePoolSize, stop creating new threads and drop tasks directly into the queue.

3. However, when the queue is full and workCount < maximumPoolSize, the rejection policy is no longer applied immediately; instead, non-core threads are created until workCount = maximumPoolSize, and only then does the rejection policy kick in.

Xiaoyu: Oh, I see: corePoolSize covers the number of Worker threads needed in most cases, while maximumPoolSize temporarily expands the Worker count during peak periods.

Flash: Yes. With peak-period elasticity done, we naturally have to consider the trough period too. When no tasks are submitted for a long time, both core and non-core threads sit idle, wasting resources. We can give non-core threads a keepAliveTime timeout: if a non-core thread fails to get a task from the queue within that time, it stops waiting and is destroyed.
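Putting the fifth edition together (still a sketch: the class and helper names are mine, the locking is simplified compared to the real thing, and a Consumer&lt;Runnable&gt; again stands in for the rejection handler):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Fifth edition sketch: core Workers live forever,
// non-core Workers exit after keepAliveTime of idleness.
class ElasticExecutor implements Executor {
    private final int corePoolSize, maximumPoolSize;
    private final long keepAliveNanos;
    private final BlockingQueue<Runnable> workQueue;
    private final Consumer<Runnable> handler;
    private int workCount; // guarded by synchronized

    ElasticExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                    TimeUnit unit, BlockingQueue<Runnable> workQueue,
                    Consumer<Runnable> handler) {
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.keepAliveNanos = unit.toNanos(keepAliveTime);
        this.workQueue = workQueue;
        this.handler = handler;
    }

    @Override
    public synchronized void execute(Runnable r) {
        if (workCount < corePoolSize) {
            workCount++;
            addWorker(r, true);              // 1. below core size: start a core Worker
        } else if (workQueue.offer(r)) {
            // 2. core size reached: task parked in the queue
        } else if (workCount < maximumPoolSize) {
            workCount++;
            addWorker(r, false);             // 3. queue full: start a non-core Worker
        } else {
            handler.accept(r);               // 4. pool and queue both full: reject
        }
    }

    private void addWorker(Runnable first, boolean core) {
        Thread worker = new Thread(() -> {
            Runnable task = first;
            while (task != null) {
                task.run();
                try {
                    // Core Workers wait forever; non-core ones give up after keepAliveTime.
                    task = core ? workQueue.take()
                                : workQueue.poll(keepAliveNanos, TimeUnit.NANOSECONDS);
                } catch (InterruptedException e) {
                    task = null;
                }
            }
            synchronized (this) { workCount--; } // idle timeout: destroy this Worker
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```

A non-core Worker is nothing special structurally; it only differs in using a timed poll instead of a blocking take, which is also how the real ThreadPoolExecutor distinguishes the two.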

Xiaoyu: Great, now our thread pool can expand temporarily at QPS peaks, and when QPS is low it reclaims the non-core threads promptly so no resources are wasted. It really is elastic.

Flash: Yes, yes. Eh, wait, how did I end up explaining everything? Weren't you supposed to think this version through yourself?

Xiaoyu: I wanted to, but you never break your habit of monologuing once you start talking tech. What can I do?

Flash: Sorry, sorry. Then why don't you summarize our thread pool?

Summary

Xiaoyu: Well, first of all, its constructor looks like this:

public FlashExecutor(
        int corePoolSize,
        int maximumPoolSize,
        long keepAliveTime,
        TimeUnit unit,
        BlockingQueue<Runnable> workQueue,
        ThreadFactory threadFactory,
        RejectedExecutionHandler handler) {
    // ... some parameter checks omitted
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}

The parameters are:

int corePoolSize: core thread count

int maximumPoolSize: maximum thread count

long keepAliveTime: idle timeout for non-core threads

TimeUnit unit: unit of the idle timeout

BlockingQueue<Runnable> workQueue: task queue (a thread-safe blocking queue)

ThreadFactory threadFactory: thread factory used to create Worker threads

RejectedExecutionHandler handler: rejection policy

The whole task submission process is: while workCount < corePoolSize, create a new core Worker for the task; after that, put tasks into the queue; when the queue is full, create non-core Workers until workCount reaches maximumPoolSize; and only then apply the rejection policy.

Flash: Not bad, not bad. And that was your own summary; do you still need me to tell you what a thread pool is?

Xiaoyu: Oh my God, I just realized these are exactly the thread pool parameters and principles that confused me!

Flash: Yes. And the constructor in the final version of the code is exactly the longest constructor of Java's ThreadPoolExecutor; even the parameter names are unchanged.
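For comparison, here is the real thing in use (a usage sketch; the parameter values are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class RealPoolDemo {
    static int demo() throws Exception {
        // Same seven parameters, in the same order, as our FlashExecutor:
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),         // workQueue
                Executors.defaultThreadFactory(),      // threadFactory
                new ThreadPoolExecutor.AbortPolicy()); // handler (rejection policy)
        Future<Integer> f = pool.submit(() -> 21 * 2);
        int result = f.get(); // wait for the task's result
        pool.shutdown();
        return result;
    }
}
```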

Xiaoyu: Wow, that's awesome! I've even forgotten what I originally wanted to ask, hehe.

Flash: Haha, learning technology without noticing is cool, right? It's almost dinner time; want to go to the Shanxi noodle restaurant together?

Xiaoyu: Oh, I don't like the color of the table in that shop. Maybe next time.

Flash: Oh, okay.
