
What is the processing flow of the Nginx event-driven framework


This article explains the processing flow of the Nginx event-driven framework. Many readers have questions about how this flow works, so the editor has consulted a variety of materials and put together a simple, practical walkthrough. I hope it helps resolve your doubts about "what is the processing flow of the Nginx event-driven framework?" Please follow along with the editor to study it!

The ngx_event_process_init method of the ngx_event_core_module module performs some initialization for the event module. Among other things, it sets the handler of the read event that represents a new connection request to the ngx_event_accept function and adds this event to the epoll module, so that ngx_event_accept is called whenever a new connection event occurs. The overall flow is as follows:
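In essence (an abridged sketch of the relevant part of ngx_event_process_init; error handling and the non-epoll branches are simplified here), the setup amounts to:

    /* for every listening socket: hook up the accept handler ... */
    rev = c->read;
    rev->handler = ngx_event_accept;

    /* ... and, when the accept mutex is used, postpone registering the
       event with epoll until the worker has grabbed the mutex */
    if (ngx_use_accept_mutex) {
        continue;
    }

    if (ngx_add_event(rev, NGX_READ_EVENT, 0) == NGX_ERROR) {
        return NGX_ERROR;
    }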

In the ngx_worker_process_cycle method, the worker process repeatedly calls the ngx_process_events_and_timers function to handle events; this function is the overall entry point for event handling.
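A minimal sketch of that loop, with signal handling and exit logic stripped out:

    static void
    ngx_worker_process_cycle(ngx_cycle_t *cycle, void *data)
    {
        /* ... per-worker initialization ... */

        for ( ;; ) {
            /* ... handle exiting / terminating / reopening flags ... */

            /* the single entry point for all event and timer processing */
            ngx_process_events_and_timers(cycle);
        }
    }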

ngx_process_events_and_timers calls ngx_process_events, which is a macro that expands to ngx_event_actions.process_events. ngx_event_actions is a global structure that stores the 10 function interfaces of the event-driven module currently in use (here, the epoll module). So the call actually invokes the ngx_epoll_module_ctx.actions.process_events function, i.e. ngx_epoll_process_events, to handle the events.
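For reference, the indirection is declared roughly like this in src/event/ngx_event.h (paraphrased from the nginx source of that era; exact prototypes may differ slightly between versions):

    typedef struct {
        ngx_int_t  (*add)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
        ngx_int_t  (*del)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
        ngx_int_t  (*enable)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
        ngx_int_t  (*disable)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
        ngx_int_t  (*add_conn)(ngx_connection_t *c);
        ngx_int_t  (*del_conn)(ngx_connection_t *c, ngx_uint_t flags);
        ngx_int_t  (*process_changes)(ngx_cycle_t *cycle, ngx_uint_t nowait);
        ngx_int_t  (*process_events)(ngx_cycle_t *cycle, ngx_msec_t timer,
                                     ngx_uint_t flags);
        ngx_int_t  (*init)(ngx_cycle_t *cycle, ngx_msec_t timer);
        void       (*done)(ngx_cycle_t *cycle);
    } ngx_event_actions_t;

    extern ngx_event_actions_t   ngx_event_actions;

    /* ngx_process_events is not a function but a macro over the active module */
    #define ngx_process_events   ngx_event_actions.process_events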

ngx_epoll_process_events calls the Linux interface epoll_wait to retrieve the "new connection" events and then calls each event's handler to process it.

As mentioned above, the handler has already been set to ngx_event_accept, so ngx_event_accept is called to do the actual processing.
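A simplified sketch of that dispatch inside ngx_epoll_process_events (the stale-event instance check, error handling, the write-event side and the NGX_POST_EVENTS queueing discussed later are all omitted):

    /* wait for ready events on the epoll descriptor */
    events = epoll_wait(ep, event_list, (int) nevents, timer);

    for (i = 0; i < events; i++) {
        c = event_list[i].data.ptr;          /* the ngx_connection_t */

        if (event_list[i].events & (EPOLLIN|EPOLLERR|EPOLLHUP)) {
            rev = c->read;
            /* for a listening socket this handler is ngx_event_accept */
            rev->handler(rev);
        }
    }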

Let's analyze the ngx_event_accept method, and its flow chart is as follows:

The simplified code is as follows, and the sequence number in the comment corresponds to the sequence number in the figure above:

void
ngx_event_accept(ngx_event_t *ev)
{
    socklen_t          socklen;
    ngx_err_t          err;
    ngx_log_t         *log;
    ngx_uint_t         level;
    ngx_socket_t       s;
    ngx_event_t       *rev, *wev;
    ngx_listening_t   *ls;
    ngx_connection_t  *c, *lc;
    ngx_event_conf_t  *ecf;
    u_char             sa[NGX_SOCKADDRLEN];

    if (ev->timedout) {
        if (ngx_enable_accept_events((ngx_cycle_t *) ngx_cycle) != NGX_OK) {
            return;
        }

        ev->timedout = 0;
    }

    ecf = ngx_event_get_conf(ngx_cycle->conf_ctx, ngx_event_core_module);

    if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
        ev->available = 1;

    } else if (!(ngx_event_flags & NGX_USE_KQUEUE_EVENT)) {
        ev->available = ecf->multi_accept;
    }

    lc = ev->data;
    ls = lc->listening;
    ev->ready = 0;

    do {
        socklen = NGX_SOCKADDRLEN;

        /* 1. accept() tries to establish a connection; non-blocking call */
        s = accept(lc->fd, (struct sockaddr *) sa, &socklen);

        if (s == (ngx_socket_t) -1) {
            err = ngx_socket_errno;

            if (err == NGX_EAGAIN) {
                /* no pending connection: return directly */
                return;
            }

            level = NGX_LOG_ALERT;

            if (err == NGX_ECONNABORTED) {
                level = NGX_LOG_ERR;

            } else if (err == NGX_EMFILE || err == NGX_ENFILE) {
                level = NGX_LOG_CRIT;
            }

            if (err == NGX_ECONNABORTED) {
                if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
                    ev->available--;
                }

                if (ev->available) {
                    continue;
                }
            }

            if (err == NGX_EMFILE || err == NGX_ENFILE) {
                if (ngx_disable_accept_events((ngx_cycle_t *) ngx_cycle)
                    != NGX_OK)
                {
                    return;
                }

                if (ngx_use_accept_mutex) {
                    if (ngx_accept_mutex_held) {
                        ngx_shmtx_unlock(&ngx_accept_mutex);
                        ngx_accept_mutex_held = 0;
                    }

                    ngx_accept_disabled = 1;

                } else {
                    ngx_add_timer(ev, ecf->accept_mutex_delay);
                }
            }

            return;
        }

        /* 2. set the load-balancing threshold */
        ngx_accept_disabled = ngx_cycle->connection_n / 8
                              - ngx_cycle->free_connection_n;

        /* 3. get a connection object from the connection pool */
        c = ngx_get_connection(s, ev->log);

        /* 4. create a memory pool for the connection */
        c->pool = ngx_create_pool(ls->pool_size, ev->log);

        c->sockaddr = ngx_palloc(c->pool, socklen);
        ngx_memcpy(c->sockaddr, sa, socklen);

        log = ngx_palloc(c->pool, sizeof(ngx_log_t));

        /* set a blocking mode for aio and a non-blocking mode for others */
        /* 5. set the socket to blocking or non-blocking mode */
        if (ngx_inherited_nonblocking) {
            if (ngx_event_flags & NGX_USE_AIO_EVENT) {
                if (ngx_blocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_blocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }

        } else {
            if (!(ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT))) {
                if (ngx_nonblocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_nonblocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }
        }

        *log = ls->log;

        c->recv = ngx_recv;
        c->send = ngx_send;
        c->recv_chain = ngx_recv_chain;
        c->send_chain = ngx_send_chain;

        c->log = log;
        c->pool->log = log;

        c->socklen = socklen;
        c->listening = ls;
        c->local_sockaddr = ls->sockaddr;
        c->local_socklen = ls->socklen;

        c->unexpected_eof = 1;

        rev = c->read;
        wev = c->write;

        wev->ready = 1;

        if (ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)) {
            /* rtsig, aio, iocp */
            rev->ready = 1;
        }

        if (ev->deferred_accept) {
            rev->ready = 1;
        }

        rev->log = log;
        wev->log = log;

        /*
         * TODO: MT: - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         *
         * TODO: MP: - allocated in a shared memory
         *           - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         */

        c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);

        if (ls->addr_ntop) {
            c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len);
            if (c->addr_text.data == NULL) {
                ngx_close_accepted_connection(c);
                return;
            }

            c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen,
                                             c->addr_text.data,
                                             ls->addr_text_max_len, 0);
            if (c->addr_text.len == 0) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        /* 6. add the read and write events of the new connection
              to the epoll object */
        if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) {
            if (ngx_add_conn(c) == NGX_ERROR) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        log->data = NULL;
        log->handler = NULL;

        /* 7. once the TCP connection is established, call the handler
              stored in the ngx_listening_t structure */
        ls->handler(c);

    } while (ev->available);  /* "available" means: establish as many
                                 connections as possible in one pass, as
                                 controlled by the multi_accept directive */
}

The "thundering herd" problem in nginx

Nginx typically runs multiple worker processes that listen on the same port at the same time. When a new connection arrives, the kernel wakes up all of these processes, but only one of them can actually accept the connection, so the others are woken up for nothing and waste a lot of overhead. This is the "thundering herd" phenomenon. Nginx solves it by making the worker processes compete for a mutex, ngx_accept_mutex, so that only one process at a time enters the critical section. Inside the critical section, the process adds the read events of the listening sockets to the epoll module, so that only this worker reacts when a "new connection" event occurs. Locking and adding the events are both done in the function ngx_trylock_accept_mutex. When another process enters that function to add the read events, it finds that the mutex is already held, so it can only return; its listening events are not added to the epoll module, and it therefore cannot respond to "new connection" events. But this raises a question: when does the process holding the mutex release it? If it had to finish processing all of its events before releasing the lock, that would take a long time, during which the other worker processes could not establish new connections, which is clearly undesirable. Nginx's solution is that the process which acquired the mutex through ngx_trylock_accept_mutex, after returning from epoll_wait with the ready read/write events, classifies them into two queues:

New connection events are placed in the ngx_posted_accept_events queue

Existing connection events are placed in the ngx_posted_events queue

The code is as follows:

    if (flags & NGX_POST_EVENTS) {
        /* defer processing of this event */
        queue = (ngx_event_t **) (rev->accept ?
                    &ngx_posted_accept_events : &ngx_posted_events);

        /* add the event to the deferred-execution queue */
        ngx_locked_post_event(rev, queue);

    } else {
        /* handle the event immediately, without delay */
        rev->handler(rev);
    }

Write events are handled similarly. The process then handles the events in the ngx_posted_accept_events queue and releases the mutex immediately afterwards, minimizing the time it holds the lock; only then does it process the events in the ngx_posted_events queue.
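Putting the pieces together, the relevant part of ngx_process_events_and_timers looks roughly like this (abridged; timer calculation and the accept_mutex_delay handling are left out):

    if (ngx_use_accept_mutex) {
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            if (ngx_accept_mutex_held) {
                /* defer all events to the posted queues */
                flags |= NGX_POST_EVENTS;
            }
        }
    }

    /* expands to ngx_epoll_process_events() here */
    (void) ngx_process_events(cycle, timer, flags);

    /* accept new connections first ... */
    if (ngx_posted_accept_events) {
        ngx_event_process_posted(cycle, &ngx_posted_accept_events);
    }

    /* ... then release the lock as early as possible ... */
    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    /* ... and only then handle the ordinary read/write events */
    if (ngx_posted_events) {
        ngx_event_process_posted(cycle, &ngx_posted_events);
    }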

Load balancing in nginx

Each worker process in nginx uses a threshold, ngx_accept_disabled, for load balancing; it is initialized in step 2 of the figure above:

    ngx_accept_disabled = ngx_cycle->connection_n / 8 - ngx_cycle->free_connection_n;

Its initial value is negative, and its absolute value is equal to 7/8 of the total number of connections. While the threshold is less than 0, the process responds to new connection events normally; once it is greater than 0, the process no longer responds to new connection events and instead decrements ngx_accept_disabled by 1. The code is as follows:

    if (ngx_accept_disabled > 0) {
        ngx_accept_disabled--;

    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }
        ...
    }

This shows that when a process's current number of connections reaches 7/8 of the total number of connections it can handle, the load-balancing mechanism kicks in and the process stops responding to new connections.
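As a concrete illustration (assuming worker_connections is set to 1024, so connection_n is 1024): the initial value is 1024 / 8 - 1024 = -896, and the threshold only becomes positive once free_connection_n drops below 128, i.e. once more than 7/8 of the connections are in use. From that point on, the worker skips the competition for ngx_accept_mutex once per iteration of the event loop, decrementing the counter each time, until it falls back to 0 and the worker starts accepting new connections again.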

At this point, the study of "what is the processing flow of the Nginx event-driven framework" is complete. I hope it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it out! If you want to keep learning more related knowledge, please continue to follow the site, where the editor will keep working to bring you more practical articles!
