2025-01-21 Update From: SLTechnology News&Howtos
Preface
Nginx uses a multi-process model, and when a connection arrives, a lock is used to ensure that only one process accepts it.
This article is based on the Nginx 0.8.55 source code and analyzes the epoll mechanism.
1. Implementation of the accept lock
1.1 What is an accept lock?
Any discussion of accept locks has to start with the thundering herd problem.
The so-called thundering herd problem refers to the following: in a multi-process server like Nginx, the child processes forked from the master all listen on the same port. When a connection arrives, all of the sleeping child processes are woken up, but in the end only one of them can successfully handle the accept event; the others go back to sleep. This causes a great deal of unnecessary scheduling and context switching.
In newer versions of the Linux kernel, the thundering herd caused by the accept call itself has been solved. In Nginx, however, accept is handled through the epoll mechanism, and the herd triggered through epoll has not been solved by the kernel (epoll_wait itself does not distinguish whether a readiness event comes from a listen socket, so every process waiting on that event is woken up). Nginx therefore has to deal with the accept thundering herd itself.
The accept lock is Nginx's solution. It is essentially a cross-process mutex that guarantees only one process at a time is able to listen for accept events.
In terms of implementation, the accept lock is a cross-process lock. It is a global variable in Nginx, declared as follows:
ngx_shmtx_t ngx_accept_mutex;
This lock is allocated when the event module is initialized and is placed in shared memory between the processes, so that all processes access the same instance. Locking and unlocking are done with a CAS (compare-and-swap) on an atomic variable; if locking fails, the call returns immediately, making it a non-blocking lock. The locking and unlocking code is as follows:
static ngx_inline ngx_uint_t
ngx_shmtx_trylock(ngx_shmtx_t *mtx)
{
    return (*mtx->lock == 0 && ngx_atomic_cmp_set(mtx->lock, 0, ngx_pid));
}

#define ngx_shmtx_lock(mtx)    ngx_spinlock((mtx)->lock, ngx_pid, 1024)

#define ngx_shmtx_unlock(mtx)  (void) ngx_atomic_cmp_set((mtx)->lock, ngx_pid, 0)
As you can see, when a call to ngx_shmtx_trylock fails, it returns immediately instead of blocking.
1.2 How the accept lock ensures that only one process handles new connections
To solve the epoll-induced thundering herd, it is enough to ensure that only one process at a time has the accept event registered in epoll.
The approach Nginx takes is nothing special; it is roughly the following logic:
try to acquire the accept lock
if acquired successfully:
register the accept event in epoll
else:
deregister the accept event in epoll
handle all events
release the accept lock
Of course, the handling of deferred events is glossed over here; we will come back to that part later.
Acquiring the accept lock and registering or deregistering the accept event in epoll are both done in ngx_trylock_accept_mutex. This whole sequence takes place in void ngx_process_events_and_timers(ngx_cycle_t *cycle), which is called repeatedly in the main loop of the nginx worker.
In other words, each round of event processing first competes for the accept lock. The process that wins registers the accept event in epoll, the ones that lose deregister it, and the lock is released after the events have been processed. As a result, only one process at a time listens on a listen socket, which avoids the thundering herd.
1.3 What the event handling mechanism does to keep the accept lock held only briefly
The accept lock may look like an elegant solution to the thundering herd, but applying the logic above verbatim creates a problem: if the server is very busy and there are many events to handle, the "handle all events" step can take a very long time. One process then holds the accept lock for a long time without getting around to accepting new connections, while the other processes, not holding the lock, cannot accept new connections either. New connections are left unhandled, which is fatal for the responsiveness of the service.
To solve this problem, Nginx defers event processing. That is, in ngx_process_events, events are only put into two queues:
ngx_thread_volatile ngx_event_t  *ngx_posted_accept_events;
ngx_thread_volatile ngx_event_t  *ngx_posted_events;
After ngx_process_events returns, ngx_posted_accept_events is processed and the accept lock is released immediately; only then are the remaining events handled at leisure.
That is, ngx_process_events itself only performs the epoll_wait; consumption of the events is moved to after the release of the accept lock, minimizing the time the lock is held so that other processes have enough opportunity to handle accept events.
So how exactly is this implemented? A NGX_POST_EVENTS flag bit is passed in the flags parameter of static ngx_int_t ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags), and this flag is checked when handling each event.
This only prevents the consumption of events from holding the accept lock for a long time. Can epoll_wait itself take a long time? That can certainly happen, and the handling is equally simple: epoll_wait takes a timeout, so its value just needs to be capped. The cap is stored in the global variable ngx_accept_mutex_delay.
The implementation of ngx_process_events_and_timers is shown below to illustrate the related processing:
void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    /* ... code handling timer events omitted ... */

    /* here the load-balancing token and the accept lock are handled */
    if (ngx_use_accept_mutex) {

        /* if the load-balancing token is greater than 0, this worker is
         * overloaded: skip accept this round and decrement the token */
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            /* try to acquire the accept lock */
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            /* after getting the lock, add the post flag so that the
             * handling of all events is deferred and the accept lock
             * is not held for too long */
            if (ngx_accept_mutex_held) {
                flags |= NGX_POST_EVENTS;

            } else {
                if (timer == NGX_TIMER_INFINITE
                    || timer > ngx_accept_mutex_delay)
                {
                    /* wait at most ngx_accept_mutex_delay milliseconds,
                     * to keep the accept lock from being blocked too long */
                    timer = ngx_accept_mutex_delay;
                }
            }
        }
    }

    delta = ngx_current_msec;

    /* call the event module's process_events: one epoll_wait */
    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;   /* time spent processing events */

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "timer delta: %M", delta);

    /* if there are posted accept events, process them now */
    if (ngx_posted_accept_events) {
        ngx_event_process_posted(cycle, &ngx_posted_accept_events);
    }

    /* release the accept lock */
    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    /* handle all expired timers */
    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "posted events %p", ngx_posted_events);

    if (ngx_posted_events) {
        if (ngx_threaded) {
            ngx_wakeup_worker_thread(cycle);

        } else {
            /* handle all deferred events */
            ngx_event_process_posted(cycle, &ngx_posted_events);
        }
    }
}
Let's take a look at the relevant processing of ngx_epoll_process_events:
    /* read event */
    if ((revents & EPOLLIN) && rev->active) {

        if ((flags & NGX_POST_THREAD_EVENTS) && !rev->accept) {
            rev->posted_ready = 1;

        } else {
            rev->ready = 1;
        }

        if (flags & NGX_POST_EVENTS) {
            queue = (ngx_event_t **) (rev->accept ?
                        &ngx_posted_accept_events : &ngx_posted_events);

            ngx_locked_post_event(rev, queue);

        } else {
            rev->handler(rev);
        }
    }

    wev = c->write;

    /* write event */
    if ((revents & EPOLLOUT) && wev->active) {

        if (flags & NGX_POST_THREAD_EVENTS) {
            wev->posted_ready = 1;

        } else {
            wev->ready = 1;
        }

        if (flags & NGX_POST_EVENTS) {
            ngx_locked_post_event(wev, &ngx_posted_events);

        } else {
            wev->handler(wev);
        }
    }
The processing is fairly simple: if the accept lock was acquired, the NGX_POST_EVENTS flag is set and the event is placed into the appropriate queue; otherwise, the event is handled directly.
Summary
That is the whole content of this article. I hope it is of some reference and learning value for your study or work. If you have any questions, feel free to leave a comment; thank you for your support.