Nginx source code analysis: a detailed look at the thread pool
I. Preface
Nginx uses a multi-process model, and the master and worker processes communicate mainly through pipes. The advantage of multiple processes is that the processes do not affect one another. Still, people often ask why nginx does not use a multi-threaded model instead. In fact, the nginx core provides a thread_pool (thread pool) module for handling blocking tasks. Below I share my understanding of the thread_pool module (please point out any mistakes or omissions, thank you).
II. Introduction to the thread_pool module
The main functionality of nginx is built from modules, and thread_pool is no exception. The thread pool is mainly used for IO operations such as reading and sending files, so that slow IO does not block the normal operation of the worker. First, here is the official configuration reference.
Syntax:  thread_pool name threads=number [max_queue=number];
Default: thread_pool default threads=32 max_queue=65536;
Context: main
As the directive shows, each thread_pool has a name, and the thread count and queue size apply to each worker process, not to the total across all workers. All threads in a thread pool share one task queue, whose maximum length is the max_queue defined above. If the queue is full, adding a task to it fails with an error.
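For reference, a minimal configuration sketch of how such a pool is declared and used; the pool name and location are illustrative, not taken from the article, but thread_pool and aio threads= are documented nginx directives.

# illustrative nginx.conf sketch (pool name and location are made up)
thread_pool io_pool threads=16 max_queue=1024;    # per worker: 16 threads, queue of up to 1024 tasks

http {
    server {
        location /downloads/ {
            aio threads=io_pool;    # offload blocking file reads to the "io_pool" thread pool
        }
    }
}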
Following the module initialization sequence described earlier (which runs before the master starts the workers), create_conf -> command handler -> init_conf, let's look at how the thread_pool module is initialized.
/* nginx/src/core/ngx_thread_pool.c */

// create the basic configuration structure needed by the thread pool module
static void *
ngx_thread_pool_create_conf(ngx_cycle_t *cycle)
{
    ngx_thread_pool_conf_t  *tcf;

    // allocate memory from the memory pool pointed to by cycle->pool
    tcf = ngx_pcalloc(cycle->pool, sizeof(ngx_thread_pool_conf_t));
    if (tcf == NULL) {
        return NULL;
    }

    // initialize an array with room for four ngx_thread_pool_t pointers;
    // each ngx_thread_pool_t holds the information of one thread pool
    if (ngx_array_init(&tcf->pools, cycle->pool, 4,
                       sizeof(ngx_thread_pool_t *))
        != NGX_OK)
    {
        return NULL;
    }

    return tcf;
}

// parse a thread_pool line in the configuration file and save the
// information in an ngx_thread_pool_t
static char *
ngx_thread_pool(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_str_t          *value;
    ngx_uint_t          i;
    ngx_thread_pool_t  *tp;

    value = cf->args->elts;

    // the name in the thread_pool directive is the unique identity of the
    // pool (if two pools share a name, only the first one takes effect);
    // ngx_thread_pool_add() allocates the ngx_thread_pool_t that stores the
    // pool's information, so nginx supports several pools with different names
    tp = ngx_thread_pool_add(cf, &value[1]);
    ...

    // walk the remaining parameters of the thread_pool directive
    for (i = 2; i < cf->args->nelts; i++) {

        // parse the configured number of threads
        if (ngx_strncmp(value[i].data, "threads=", 8) == 0) {
            ...
        }

        // parse the configured maximum queue length
        if (ngx_strncmp(value[i].data, "max_queue=", 10) == 0) {
            ...
        }
    }
    ...
}

// check that the configuration of every thread pool in the array is valid
static char *
ngx_thread_pool_init_conf(ngx_cycle_t *cycle, void *conf)
{
    ...
    ngx_thread_pool_t  **tpp;

    tpp = tcf->pools.elts;

    // walk all thread pool configurations in the array and validate them
    for (i = 0; i < tcf->pools.nelts; i++) {
        ...
    }

    return NGX_CONF_OK;
}
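To make the later queue manipulation easier to follow, here is a paraphrase of the main structures from ngx_thread_pool.h/ngx_thread_pool.c. Field names are quoted from memory and may differ slightly between nginx versions; treat this as a reading aid, not the authoritative definition.

/* paraphrased from the nginx sources; check your version for exact definitions */

typedef struct {
    ngx_thread_task_t   *first;   // head of the singly linked task list
    ngx_thread_task_t  **last;    // address of the last task's "next" field,
                                  // so appending never walks the list
} ngx_thread_pool_queue_t;

struct ngx_thread_task_s {
    ngx_thread_task_t   *next;                                   // intrusive list link
    ngx_uint_t           id;                                     // sequential task id
    void                *ctx;                                    // argument for handler
    void               (*handler)(void *data, ngx_log_t *log);   // runs in a pool thread
    ngx_event_t          event;                                  // completion event, handled in the worker
};

struct ngx_thread_pool_s {
    ngx_thread_mutex_t        mtx;        // protects queue and waiting
    ngx_thread_pool_queue_t   queue;      // pending tasks shared by all threads of the pool
    ngx_int_t                 waiting;    // number of tasks waiting in the queue
    ngx_thread_cond_t         cond;       // signaled when a task is posted

    ngx_log_t                *log;
    ngx_str_t                 name;       // name from the thread_pool directive
    ngx_uint_t                threads;    // threads= value
    ngx_int_t                 max_queue;  // max_queue= value
};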
After the above steps, nginx's master keeps a copy of the configuration of all thread pools (tcf->pools), and the workers inherit it when they are forked. The init_process callback of each core module, if it has one, is then called in every worker.
/* nginx/src/core/ngx_thread_pool.c */

// create the thread pools that a worker needs
static ngx_int_t
ngx_thread_pool_init_worker(ngx_cycle_t *cycle)
{
    ngx_uint_t                i;
    ngx_thread_pool_t       **tpp;
    ngx_thread_pool_conf_t   *tcf;

    // only worker processes (or single-process mode) create thread pools
    if (ngx_process != NGX_PROCESS_WORKER
        && ngx_process != NGX_PROCESS_SINGLE)
    {
        return NGX_OK;
    }
    ...
    // initialize the queue of completed tasks
    ngx_thread_pool_queue_init(&ngx_thread_pool_done);

    tpp = tcf->pools.elts;

    // initialize every configured thread pool
    for (i = 0; i < tcf->pools.nelts; i++) {
        if (ngx_thread_pool_init(tpp[i], cycle->log, cycle->pool) != NGX_OK) {
            return NGX_ERROR;
        }
    }

    return NGX_OK;
}

// initialization of one thread pool
static ngx_int_t
ngx_thread_pool_init(ngx_thread_pool_t *tp, ngx_log_t *log, ngx_pool_t *pool)
{
    ...
    // initialize the task queue
    ngx_thread_pool_queue_init(&tp->queue);

    // create the pool mutex
    if (ngx_thread_mutex_create(&tp->mtx, log) != NGX_OK) {
        return NGX_ERROR;
    }

    // create the pool condition variable
    if (ngx_thread_cond_create(&tp->cond, log) != NGX_OK) {
        (void) ngx_thread_mutex_destroy(&tp->mtx, log);
        return NGX_ERROR;
    }
    ...
    // create every thread of the pool
    for (n = 0; n < tp->threads; n++) {
        err = pthread_create(&tid, &attr, ngx_thread_pool_cycle, tp);
        if (err) {
            ngx_log_error(NGX_LOG_ALERT, log, err,
                          "pthread_create() failed");
            return NGX_ERROR;
        }
    }
    ...
}

// main function of every pool thread
static void *
ngx_thread_pool_cycle(void *data)
{
    ...
    for ( ;; ) {
        // take the pool mutex
        if (ngx_thread_mutex_lock(&tp->mtx, tp->log) != NGX_OK) {
            return NULL;
        }

        /* the number may become negative */
        tp->waiting--;

        // if the task queue is empty, block in cond_wait until a
        // cond_signal/broadcast announces a new task
        while (tp->queue.first == NULL) {
            if (ngx_thread_cond_wait(&tp->cond, &tp->mtx, tp->log)
                != NGX_OK)
            {
                (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);
                return NULL;
            }
        }

        // take the first task and remove it from the queue
        task = tp->queue.first;
        tp->queue.first = task->next;

        if (tp->queue.first == NULL) {
            tp->queue.last = &tp->queue.first;
        }

        if (ngx_thread_mutex_unlock(&tp->mtx, tp->log) != NGX_OK) {
            return NULL;
        }
        ...
        // run the task's processing function
        task->handler(task->ctx, tp->log);
        ...
        ngx_spinlock(&ngx_thread_pool_done_lock, 1, 2048);

        // append the finished task to the done queue; the event callback
        // in the worker continues processing it
        *ngx_thread_pool_done.last = task;
        ngx_thread_pool_done.last = &task->next;

        // prevent compiler reordering so the unlock really happens after
        // the statements above
        ngx_memory_barrier();

        ngx_unlock(&ngx_thread_pool_done_lock);

        // wake the worker's event loop to run ngx_thread_pool_handler
        (void) ngx_notify(ngx_thread_pool_handler);
    }
}

// handle the completion events of finished tasks in the worker
static void
ngx_thread_pool_handler(ngx_event_t *ev)
{
    ...
    ngx_spinlock(&ngx_thread_pool_done_lock, 1, 2048);

    // detach the whole list of finished tasks
    task = ngx_thread_pool_done.first;
    ngx_thread_pool_done.first = NULL;
    ngx_thread_pool_done.last = &ngx_thread_pool_done.first;

    ngx_memory_barrier();

    ngx_unlock(&ngx_thread_pool_done_lock);

    // walk all finished tasks
    while (task) {
        ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0,
                       "run completion handler for task #%ui", task->id);

        event = &task->event;
        task = task->next;

        event->complete = 1;
        event->active = 0;

        // call the completion handler registered for this event
        event->handler(event);
    }
}
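The queue manipulation above relies on keeping last as a pointer to the final task's next field, so both the pop in ngx_thread_pool_cycle and the append in ngx_thread_task_post run in O(1) without walking the list. Below is a minimal standalone sketch of the same technique; it is plain C with invented names, not nginx code.

/* toy_queue.c - illustrates the "pointer to the last next-pointer" queue
 * used by ngx_thread_pool_queue_t; all names here are made up */
#include <stdio.h>
#include <stddef.h>

typedef struct task_s task_t;
struct task_s {
    task_t *next;
    int     id;
};

typedef struct {
    task_t  *first;   /* head of the list, NULL when empty              */
    task_t **last;    /* address of the pointer to patch on append:     */
                      /* &first when empty, &tail->next otherwise       */
} queue_t;

static void queue_init(queue_t *q) { q->first = NULL; q->last = &q->first; }

static void queue_push(queue_t *q, task_t *t)
{
    t->next = NULL;
    *q->last = t;          /* patch either q->first or the old tail's next */
    q->last = &t->next;    /* the new tail's next is the next patch point  */
}

static task_t *queue_pop(queue_t *q)
{
    task_t *t = q->first;
    if (t == NULL) {
        return NULL;
    }
    q->first = t->next;
    if (q->first == NULL) {        /* queue became empty: reset tail pointer */
        q->last = &q->first;
    }
    return t;
}

int main(void)
{
    queue_t q;
    task_t a = { NULL, 1 }, b = { NULL, 2 };

    queue_init(&q);
    queue_push(&q, &a);
    queue_push(&q, &b);

    for (task_t *t; (t = queue_pop(&q)) != NULL; ) {
        printf("task #%d\n", t->id);
    }
    return 0;
}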
III. Example of thread_pool usage
As mentioned earlier, the thread pool in nginx is mainly used for IO operations on files. So you can see thread pools in action in the ngx_http_file_cache.c module that ships with nginx.
/* nginx/src/http/ngx_http_file_cache.c */

// the read handler of the file cache module (thread pool branch)
static ssize_t
ngx_http_file_cache_aio_read(ngx_http_request_t *r, ngx_http_cache_t *c)
{
    ...
#if (NGX_THREADS)

    if (clcf->aio == NGX_HTTP_AIO_THREADS) {
        c->file.thread_task = c->thread_task;
        // the handler registered here is called from ngx_thread_read()
        // in the statement below
        c->file.thread_handler = ngx_http_cache_thread_handler;
        c->file.thread_ctx = r;

        // ngx_thread_read() selects the right thread pool for the task
        // and fills in the members of the task structure
        n = ngx_thread_read(&c->file, c->buf->pos, c->body_start, 0, r->pool);

        c->thread_task = c->file.thread_task;
        c->reading = (n == NGX_AGAIN);

        return n;
    }

#endif

    return ngx_read_file(&c->file, c->buf->pos, c->body_start, 0);
}

static ngx_int_t
ngx_http_cache_thread_handler(ngx_thread_task_t *task, ngx_file_t *file)
{
    ...
    tp = clcf->thread_pool;
    ...
    task->event.data = r;
    // register the completion handler; it runs when the event taken from
    // the done queue is processed by ngx_thread_pool_handler
    task->event.handler = ngx_http_cache_thread_event_handler;

    // put the task into the task queue of the thread pool
    if (ngx_thread_task_post(tp, task) != NGX_OK) {
        return NGX_ERROR;
    }
    ...
}

/* nginx/src/core/ngx_thread_pool.c */

// add a task to the queue of a thread pool
ngx_int_t
ngx_thread_task_post(ngx_thread_pool_t *tp, ngx_thread_task_t *task)
{
    // bail out if the task is already being processed
    if (task->event.active) {
        ngx_log_error(NGX_LOG_ALERT, tp->log, 0,
                      "task #%ui already active", task->id);
        return NGX_ERROR;
    }

    if (ngx_thread_mutex_lock(&tp->mtx, tp->log) != NGX_OK) {
        return NGX_ERROR;
    }

    // compare the number of waiting tasks with the maximum queue length
    if (tp->waiting >= tp->max_queue) {
        (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);

        ngx_log_error(NGX_LOG_ERR, tp->log, 0,
                      "thread pool \"%V\" queue overflow: %i tasks waiting",
                      &tp->name, tp->waiting);
        return NGX_ERROR;
    }

    // mark the task as active
    task->event.active = 1;

    task->id = ngx_thread_pool_task_id++;
    task->next = NULL;

    // notify a blocked pool thread that a new task has arrived
    if (ngx_thread_cond_signal(&tp->cond, tp->log) != NGX_OK) {
        (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);
        return NGX_ERROR;
    }

    // append the task to the queue
    *tp->queue.last = task;
    tp->queue.last = &task->next;

    tp->waiting++;

    (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);

    ngx_log_debug2(NGX_LOG_DEBUG_CORE, tp->log, 0,
                   "task #%ui added to thread pool \"%V\"",
                   task->id, &tp->name);

    return NGX_OK;
}
The example above shows how nginx itself currently uses the thread pool. Handing slow operations such as file IO to a thread pool keeps the worker's main thread responsive. When developing a module, you can follow the pattern of the file_cache module to offload work to the pool threads and improve the performance of your program. (Criticism and corrections are welcome.)
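As an illustration of that pattern, here is a minimal, hypothetical sketch of how a module might post its own task. The pool lookup and task allocation use ngx_thread_pool_get() and ngx_thread_task_alloc() from ngx_thread_pool.h; the my_* names, the context structure and the handlers are invented for this example and are not part of nginx.

/* illustrative sketch only; my_* identifiers are made up */
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <ngx_thread_pool.h>

typedef struct {
    ngx_http_request_t  *request;   /* request that triggered the work          */
    /* input and output of the blocking operation would also go here            */
} my_task_ctx_t;

/* runs inside a pool thread: do only the blocking work here */
static void
my_task_handler(void *data, ngx_log_t *log)
{
    my_task_ctx_t *ctx = data;
    /* e.g. read a file, compress a buffer, ... */
    (void) ctx;
}

/* runs back in the worker's event loop via the done queue */
static void
my_task_done(ngx_event_t *ev)
{
    my_task_ctx_t *ctx = ev->data;
    /* resume request processing with the results stored in ctx */
    (void) ctx;
}

static ngx_int_t
my_post_task(ngx_http_request_t *r)
{
    ngx_str_t           name = ngx_string("default");
    ngx_thread_pool_t  *tp;
    ngx_thread_task_t  *task;
    my_task_ctx_t      *ctx;

    /* look up a pool declared by a thread_pool directive */
    tp = ngx_thread_pool_get((ngx_cycle_t *) ngx_cycle, &name);
    if (tp == NULL) {
        return NGX_ERROR;
    }

    /* allocate a task with room for our context right behind it */
    task = ngx_thread_task_alloc(r->pool, sizeof(my_task_ctx_t));
    if (task == NULL) {
        return NGX_ERROR;
    }

    ctx = task->ctx;                     /* ctx points at the extra bytes      */
    ctx->request = r;

    task->handler = my_task_handler;     /* executed by a pool thread          */
    task->event.data = ctx;              /* passed to the completion callback  */
    task->event.handler = my_task_done;

    return ngx_thread_task_post(tp, task);   /* enqueue and signal a thread    */
}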