2025-01-17 Update From: SLTechnology News&Howtos
This article explains why Nginx can support high concurrency. The explanation is simple, clear, and easy to follow.
Nginx is a free, open-source, high-performance HTTP server and reverse proxy, known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. It is a web server and can also be used as a load balancer and HTTP cache.
Many high-profile websites use Nginx, such as Netflix, GitHub, SoundCloud, MaxCDN and so on.
1. The overall structure of Nginx
1.1. Main process
When Nginx starts, it creates two types of processes: one master process and one or more worker processes (the Windows version currently runs only one worker).
The master process does not handle network requests itself; it is mainly responsible for managing the worker processes, that is, the three tasks shown in the diagram:
Loading the configuration
Starting worker processes
Non-stop (hot) upgrades
Therefore, after Nginx starts, the operating system's process list shows at least two Nginx processes.
1.2. Worker processes
The processes that actually handle network requests and responses are the worker processes. On Unix-like systems, Nginx can be configured with multiple workers, and each worker process can handle thousands of network requests at the same time.
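As a sketch, the relevant nginx.conf directives look like this (the values are illustrative, not recommendations):

```nginx
# Number of worker processes; often set to the number of CPU cores.
worker_processes  4;

events {
    # Maximum number of simultaneous connections each worker may handle.
    worker_connections  10240;
}
```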
1.3. Modular design
An Nginx worker process consists of the core and the functional modules. The core module is responsible for maintaining a run-loop and invoking module functionality at the different stages of network request processing.
For example: network reading and writing, storage I/O, content transfer, output filtering, and forwarding requests to upstream servers.
This modular design also lets us select and modify functional modules as needed and compile them into a server with specific features.
1.4. Event-driven model
Nginx's high concurrency and high performance stem from its asynchronous, non-blocking, event-driven model. It also benefits from the event-notification and I/O performance features of operating system kernels such as Linux, Solaris, and the BSD family, for example epoll, kqueue, and event ports.
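The effect of non-blocking I/O can be sketched in a few lines of Python (a toy illustration, not Nginx's actual C implementation): a non-blocking read on a socket with no pending data returns control immediately instead of stalling the worker.

```python
import socket

# A connected pair of sockets stands in for a client connection.
server_side, client_side = socket.socketpair()

# Put the server-side socket into non-blocking mode, as an
# event-driven worker would.
server_side.setblocking(False)

# No data has arrived yet: a blocking recv() would stall the whole
# worker, but a non-blocking recv() returns control immediately.
try:
    server_side.recv(1024)
    got_data = True
except BlockingIOError:
    got_data = False   # nothing to read yet -- go handle other requests

print(got_data)  # False: the worker is free to serve other connections

# Once the client sends data, the same call succeeds without blocking.
client_side.sendall(b"GET / HTTP/1.1\r\n")
data = server_side.recv(1024)
print(data[:5])  # b'GET /'

server_side.close()
client_side.close()
```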
1.5. Agent (proxy) design
Proxying is at the very core of Nginx's design. Whether for HTTP or for network requests and responses to FastCGI, Memcache, Redis, and so on, Nginx essentially uses a proxy mechanism. Nginx is therefore a high-performance proxy server by nature.
2. Modular design of Nginx
Highly modular design is the architectural foundation of Nginx. The Nginx server is divided into several modules, each a functional module responsible only for its own function, and the modules strictly follow the principle of "high cohesion, low coupling".
As shown in the following figure:
2.1. Core module
The core module is indispensable for the normal operation of the Nginx server; it provides core functions such as error logging, configuration file parsing, the event-driven mechanism, and process management.
2.2. Standard HTTP module
The standard HTTP module provides functions related to HTTP protocol handling, such as port configuration, character-set settings for web pages, HTTP response header settings, and so on.
2.3. Optional HTTP module
The optional HTTP modules mainly extend the standard HTTP functions so that Nginx can handle special services, such as Flash multimedia streaming, GeoIP request resolution, network transfer compression, and SSL/TLS support.
2.4. Mail service module
The mail service module provides Nginx's mail proxy features, including support for the POP3, IMAP, and SMTP protocols.
2.5. Third-party module
Third-party modules extend the Nginx server with developer-defined functionality, such as JSON support and Lua support.
3. Nginx request processing
Nginx is a high-performance web server that can handle a large number of concurrent requests at the same time. It combines a multi-process mechanism with an asynchronous mechanism, where the asynchronous mechanism uses an asynchronous non-blocking mode. Next, we introduce Nginx's multi-process mechanism and its asynchronous non-blocking mechanism.
3.1. Multi-process mechanism
Whenever the server receives a client connection, the master process generates a child process (worker process) to establish the connection and interact with that client; when the connection is closed, the child process ends.
The advantage of using processes is that they are independent of one another and need no locking, which avoids the performance cost of locks, simplifies programming, and lowers development cost.
Furthermore, because the processes do not affect each other, if one process exits abnormally the others keep working normally, and the master process quickly starts a new worker process so the service is not interrupted, minimizing the risk.
The disadvantage is that spawning a child process requires operations such as copying memory, which incurs some overhead in resources and time; under a large number of requests this can degrade system performance.
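The per-connection process model described above can be sketched in Python on a Unix-like system (a toy illustration using os.fork, not Nginx's actual C implementation): the master forks one short-lived child per "request", and each child works in complete isolation.

```python
import os

# A pipe lets the children report back to the master.
read_fd, write_fd = os.pipe()

# The "master" forks one child per incoming request; each child handles
# its request in complete isolation and then exits (Unix-like OS only).
child_pids = []
for request_id in range(4):
    pid = os.fork()
    if pid == 0:
        # Child (worker): handle the request, report back, and exit.
        os.write(write_fd, bytes([request_id]))
        os._exit(0)
    child_pids.append(pid)

# Master: reap each child once its "connection" is done.
for pid in child_pids:
    os.waitpid(pid, 0)

handled = sorted(os.read(read_fd, 4))
print(handled)  # [0, 1, 2, 3]
```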
3.2. Asynchronous non-blocking mechanism
Each worker process uses an asynchronous non-blocking mode and can handle multiple client requests.
When a worker process receives a client's request, it issues an I/O call to handle it. If the result is not immediately available, the worker moves on to process other requests (that is, non-blocking); meanwhile the client does not have to wait for the response and can do other things (that is, asynchronous).
When the I/O call returns, the worker process is notified; it then temporarily suspends the transaction it is currently handling and responds to that client's request.
4. Nginx event-driven model
In Nginx's asynchronous non-blocking mechanism, the worker process goes on to process other requests after issuing an I/O call, and is notified when the I/O call returns.
Such system calls are mainly implemented using the event-driven model of the Nginx server, as shown in the following figure:
As shown in the figure above, the event-driven model of Nginx consists of three basic units: event collector, event sender and event processor.
Event collector: responsible for collecting the various I/O requests from the worker processes
Event sender: responsible for sending the I/O events to the event handler
Event handler: responsible for responding to the various events.
The event sender puts each request into a list of pending events and invokes the event handler in a non-blocking I/O manner to process the request.
This approach is called "I/O multiplexing", and its three common implementations are the select, poll, and epoll models.
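Python's selectors module wraps exactly these mechanisms (DefaultSelector picks epoll on Linux and kqueue on BSD/macOS, falling back to poll or select), so one iteration of the collector/sender/handler loop can be sketched like this:

```python
import selectors
import socket

# Event collector/dispatcher: DefaultSelector uses the best mechanism
# available on this platform (epoll, kqueue, poll, or select).
sel = selectors.DefaultSelector()

server_side, client_side = socket.socketpair()
server_side.setblocking(False)

def handle_request(sock):
    # Event handler: invoked only when the socket is actually readable.
    return sock.recv(1024)

# Register interest in read events for this connection.
sel.register(server_side, selectors.EVENT_READ, handle_request)

client_side.sendall(b"hello")

responses = []
# One iteration of the worker's run-loop: wait for ready events,
# then dispatch each one to its handler.
for key, _mask in sel.select(timeout=1):
    handler = key.data
    responses.append(handler(key.fileobj))

print(responses)  # [b'hello']

sel.unregister(server_side)
server_side.close()
client_side.close()
```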
5. Nginx process processing model
The Nginx server uses the master/worker multi-process model; the multi-process startup and execution flow is as follows:
After the master process starts, it receives and processes external signals in a for loop.
The master process creates worker child processes via the fork() function, and each child process runs its own for loop to receive and handle events on the Nginx server.
It is generally recommended to set the number of worker processes equal to the number of CPU cores, so that the system does not have to create and manage a large number of child processes, avoiding the overhead of processes competing for CPU resources and of process switching.
Furthermore, to make better use of multi-core hardware, Nginx provides a CPU-affinity binding option: we can bind each worker process to a particular core, so that process switching does not cause cache misses.
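In nginx.conf this corresponds to the worker_processes and worker_cpu_affinity directives (the values below are illustrative, and the auto variants require reasonably recent Nginx versions):

```nginx
# Four workers, each pinned to its own core via a CPU bitmask.
worker_processes     4;
worker_cpu_affinity  0001 0010 0100 1000;

# Recent Nginx versions can pick both automatically:
# worker_processes    auto;
# worker_cpu_affinity auto;
```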
Each request is processed by one and only one worker process. Every worker process is forked from the master process: the master first sets up the socket that needs to listen (listenfd) and then forks the worker processes.
When a new connection arrives, listenfd becomes readable in all worker processes. To ensure that only one process handles the connection, the workers compete for the accept_mutex before registering the listenfd read event.
The process that acquires the mutex registers the read event and calls accept() in that event to accept the connection.
After a worker process accepts a connection, it reads the request, parses it, processes it, generates the response, returns it to the client, and finally closes the connection. That is one complete request.
As we can see, a request is handled entirely by a worker process, and only within a single worker process.
As shown in the following figure:
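The accept_mutex idea, a shared listening socket inherited from the master with a lock ensuring that only one worker accepts each connection, can be sketched in Python on a Unix-like system. This is a toy model: Nginx's real accept_mutex is a shared-memory lock in C; here a one-token pipe plays the role of the mutex.

```python
import os
import socket

# Master creates the listening socket (listenfd) BEFORE forking, so
# every worker inherits the same file descriptor.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # ephemeral port for the sketch
listener.listen(8)
port = listener.getsockname()[1]

# A pipe holding a single token stands in for accept_mutex: only the
# worker currently holding the token may call accept().
mutex_r, mutex_w = os.pipe()
os.write(mutex_w, b"T")

workers = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        os.read(mutex_r, 1)               # grab the "mutex"
        conn, _ = listener.accept()       # only the holder accepts
        conn.sendall(b"handled by pid %d" % os.getpid())
        conn.close()
        os.write(mutex_w, b"T")           # release the "mutex"
        os._exit(0)
    workers.append(pid)

# Act as a client: each connection is served by exactly one worker.
replies = []
for _ in range(2):
    c = socket.create_connection(("127.0.0.1", port))
    replies.append(c.recv(1024))
    c.close()

for pid in workers:
    os.waitpid(pid, 0)

print(len(replies))  # 2
```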
While the Nginx server is running, the master process and the worker processes need to interact. This interaction relies on channels implemented with sockets.
5.1. The main process interacts with the worker process
This channel is different from an ordinary pipe: it is a one-way channel from the master process to a worker process, carrying the instructions the master issues to the worker, the worker process IDs, and so on. At the same time, the master process communicates with the outside world through signals, and each child process can receive signals and handle the corresponding events.
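A minimal Python sketch of such a one-way master-to-worker channel, built on socketpair() as Nginx's channels are (the QUIT command here is illustrative, not Nginx's actual wire format):

```python
import os
import socket

# The master->worker channel is a socketpair created before forking.
master_end, worker_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Worker: wait for a command from the master on its channel end.
    master_end.close()
    cmd = worker_end.recv(64)
    # A real worker would dispatch on the command (reload, quit, ...).
    worker_end.close()
    os._exit(0 if cmd == b"QUIT" else 1)

# Master: send a command down the channel, then reap the worker.
worker_end.close()
master_end.sendall(b"QUIT")
_, status = os.waitpid(pid, 0)
master_end.close()

exited_cleanly = os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
print(exited_cleanly)  # True: the worker received and obeyed the command
```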
5.2. Worker processes interact with each other
This interaction is basically the same as the master-worker interaction, but it happens indirectly through the master process, since the worker processes are isolated from each other.
So when worker process W1 needs to send an instruction to worker process W2, it first finds W2's process ID, then writes the instruction into the channel pointing to W2, and W2 takes the corresponding action when it receives it.
Thank you for reading. That concludes "Why Nginx can support high concurrency"; hopefully this article has given you a deeper understanding of the topic, and the specifics are best verified in practice.