2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
This article explains how Nginx schedules its worker processes. The content is concise and easy to understand, and I hope you will get something out of it.
Nginx adopts a multi-process model with a fixed number of processes: one main process (Master Process) and, by default, one worker process per host CPU core to handle the various events.
The main (master) process is responsible for loading configuration and for starting and stopping the worker processes; the worker processes handle the actual requests. Resources between processes are independent: each worker process handles many connections, and each connection is handled by exactly one worker process, so no process switching is needed and none of the resource consumption that switching would cause is incurred. By default, the number of worker processes equals the number of host CPU cores, and process affinity can be used to bind each worker process to a CPU core, maximizing the processing power of a multi-core CPU.
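The worker count and CPU binding described above map to two directives in nginx.conf. A minimal sketch (the explicit number 4 is illustrative; `auto` is usually what you want):

```nginx
# Main context of nginx.conf.
# "auto" starts one worker per CPU core; an explicit number (e.g. 4) also works.
worker_processes  auto;

# Bind each worker process to its own CPU core (process affinity).
worker_cpu_affinity  auto;
```

With both set to `auto`, Nginx sizes the worker pool to the detected cores and pins each worker, which is the behavior the paragraph above describes.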
The Nginx main process listens for external control signals and passes the corresponding operations to the worker processes through a channel mechanism; data and state shared among the worker processes are kept in shared memory.
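The external control signals mentioned above are the standard ones documented for the master process. For reference (these commands assume a running Nginx; the PID file path `/run/nginx.pid` varies by installation):

```shell
# Common ways to signal the Nginx master process.
nginx -s reload                      # re-read configuration, start new workers, retire old ones
nginx -s quit                        # graceful shutdown (workers finish current requests)
nginx -s stop                        # fast shutdown

kill -HUP  "$(cat /run/nginx.pid)"   # equivalent to reload
kill -USR1 "$(cat /run/nginx.pid)"   # reopen log files (log rotation)
```

In each case the master process receives the signal and relays the operation to the workers over the channel mechanism described above.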
Tips: process affinity binds a process or thread to run on a specified CPU (core).
Nginx worker processes can be scheduled in the following ways:
No scheduling mode: when a connection event is triggered, all worker processes race to establish the connection with the client, and whichever succeeds handles the client's request. In this mode every process competes for the connection but only one can win it, so the system momentarily burns a lot of resources. This is the so-called thundering herd problem.
Mutex mode: worker processes periodically compete for an accept mutex. The worker that grabs the mutex gains the right to accept new HTTP connections: it registers its listening sockets with the event engine (such as epoll) and receives external connection events. The other workers can only continue processing read/write events on already-established connections, periodically polling the state of the mutex; only after the mutex is released can a worker seize it and regain the right to accept connections. When a worker's free connections (free_connection) drop too low relative to its maximum connections (in the Nginx source, when fewer than one-eighth of its connections remain free), it skips the current round of competition for the mutex, accepts no new connections, and only handles read/write events on established connections. Mutex mode effectively avoids the thundering herd. For large numbers of short-lived HTTP connections, it avoids the resource cost of workers fighting over the right to handle events; for large numbers of long-lived HTTP connections, however, it tends to concentrate load on a few workers, and the resulting imbalance lowers QPS.
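The mutex behavior above is controlled in the events block of nginx.conf. A minimal sketch (the connection limit and delay values are illustrative; 500ms is the documented default delay):

```nginx
events {
    worker_connections  1024;   # per-worker connection limit (free_connection counts against this)
    accept_mutex        on;     # serialize accept among workers (off by default since 1.11.3)
    accept_mutex_delay  500ms;  # how long a worker waits before trying the mutex again
}
```

Turning `accept_mutex` on enables the round-based competition described above; `accept_mutex_delay` sets the polling interval for workers that failed to grab the lock.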
Socket sharding: socket sharding is an allocation mechanism provided by the kernel (SO_REUSEPORT) that lets each worker process hold its own copy of the same listening socket. When an external connection request arrives, the kernel decides which worker process's listening socket receives the connection. This effectively avoids the thundering herd and, compared with the mutex mechanism, improves performance on multi-core systems. This feature requires the reuseport parameter to be enabled on the listen directive.
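Enabling socket sharding is a one-word change to the listen directive. A minimal sketch (the port and server_name are illustrative):

```nginx
http {
    server {
        # reuseport creates a separate listening socket per worker (SO_REUSEPORT);
        # the kernel then distributes incoming connections among them.
        listen       80 reuseport;
        server_name  example.com;
    }
}
```

With `reuseport` set, the kernel does the connection distribution, so no accept mutex is needed for this listener.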
Tips: mutex mode is turned off by default since Nginx 1.11.3. Socket sharding mode has the best performance because the scheduling is handled by the Linux kernel itself.
The above is how to understand Nginx process scheduling. I hope you have picked up some knowledge or skills from it.
© 2024 shulou.com SLNews company. All rights reserved.