
What is the web request handling mechanism of Nginx?


This article explains Nginx's web request processing mechanism. The content is straightforward and easy to follow; read on to study it step by step.

I. Modularization of Nginx

The idea of a modular structure is not new, but it is the maturity of this idea in Nginx that gives the server much of its strength.

Nginx is assembled from many modules. By convention they are divided into five categories: the core module, standard HTTP modules, optional HTTP modules, the mail service module, and third-party modules.

The importance of these five categories decreases in the order listed.

(1) Core module

The core module is indispensable for the normal operation of the Nginx server, much like the kernel of an operating system. It provides Nginx's most basic services, such as process management, permission control, and error logging.

(2) Standard HTTP module

The standard HTTP modules provide the functionality needed to serve standard HTTP requests.

(3) Optional HTTP module

The optional HTTP modules mainly extend the standard HTTP functionality so that Nginx can handle some specialized services.

(4) Mail service module

The mail service module is mainly used to support Nginx's mail proxy services.

(5) Third-party module

Third-party modules extend the Nginx server with whatever additional functionality developers need.

Module naming in Nginx follows a convention of its own.

Module names are generally prefixed with ngx_ and suffixed with _module, with one or more English words in between describing what the module does. For example, ngx_core_module indicates the module that provides Nginx's core functionality.

Which specific modules fall into each category can be looked up in the source code; we skip the full list here.

II. Web request processing mechanism of Nginx

Architecturally, the Nginx server stands out in two respects: its modular design and, more importantly, its mechanism for handling client requests.

There is a one-to-many relationship between a web server and its clients, so the web server must be able to serve multiple clients at the same time. In general, there are three ways to handle requests in parallel:

1. Multi-process mode

2. Multi-threaded mode

3. Asynchronous mode

Here is a brief explanation of these three ways:

(1) Multi-process mode

In the multi-process approach, each time the server receives a client connection, the main server process forks a child process to handle the interaction with that client; when the connection is closed, the child process exits.

The advantages of the multi-process approach are a simple design and child processes that are relatively independent, so client requests do not interfere with one another. The disadvantage is that forking a child process requires operations such as memory copying, which costs resources and time; with a large number of requests, system performance degrades.
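
To make the fork-per-connection idea concrete, here is a minimal sketch in C (illustrative only, not how Nginx is implemented; the port number and buffer size are arbitrary choices):

```c
/* Minimal fork-per-connection echo server: the parent accepts connections
 * and forks a child for each one; the child serves the client and exits. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    signal(SIGCHLD, SIG_IGN);                    /* let the kernel reap children */

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                 /* arbitrary example port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) continue;

        if (fork() == 0) {                       /* child: serve this client */
            close(lfd);
            char buf[4096];
            ssize_t n;
            while ((n = read(cfd, buf, sizeof(buf))) > 0)
                write(cfd, buf, n);              /* echo back */
            close(cfd);
            _exit(0);                            /* child ends with the connection */
        }
        close(cfd);                              /* parent goes back to accepting */
    }
}
```

Every connection pays the cost of a fork(), which is exactly the overhead mentioned above.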

(2) Multithreading mode

Multithreading means that each time the server receives a request, the main server process spawns a thread to interact with the client. Because creating a thread costs the operating system much less than creating a process, the multithreaded mode greatly reduces the Web server's demand on system resources. At the same time, because the threads live in the same process and can access the same memory space, developers must manage that shared memory themselves, which is harder.
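
A comparable sketch of the thread-per-connection idea (again purely illustrative; all threads share one address space, which is exactly where the memory-management burden mentioned above comes from):

```c
/* Minimal thread-per-connection echo server: the main thread accepts,
 * and each connection is handled by a detached worker thread. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void *handle_client(void *arg) {
    int cfd = (int)(intptr_t)arg;                /* connection descriptor */
    char buf[4096];
    ssize_t n;
    while ((n = read(cfd, buf, sizeof(buf))) > 0)
        write(cfd, buf, n);                      /* echo back */
    close(cfd);
    return NULL;
}

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                 /* arbitrary example port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) continue;
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)cfd);
        pthread_detach(tid);                     /* no join: threads clean up after themselves */
    }
}
```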

(3) Asynchronous mode

The asynchronous approach handles client requests in a completely different way from the multi-process and multi-threaded modes. Before going further, we need to be familiar with a few concepts: synchronous, asynchronous, blocking, and non-blocking.

Synchronous and asynchronous describe communication patterns in network communication.

Synchronous: after sending a request, the sender must wait for the receiver's response before sending the next request; requests are processed one at a time, and the sender and receiver proceed in lockstep.

Asynchronous: the opposite of the synchronous mechanism. After the sender issues a request, it continues to send the next request without waiting for the receiver's response; the sender's requests form a queue, and the receiver notifies the sender when processing is complete.

Blocking and non-blocking describe how a process is scheduled while it waits. In network communication they mainly refer to blocking and non-blocking sockets, and a socket is in essence an IO operation.

Blocking: until the result of the call is returned, the current thread is suspended; it moves back to the ready state only when the result arrives, and continues executing once it is given the CPU again.

Non-blocking: the opposite of blocking. If the result of the call is not immediately available, the current thread is not suspended; the call returns immediately and the thread goes on to execute the next operation.

Therefore, four modes are derived: synchronous blocking, synchronous non-blocking, asynchronous blocking, asynchronous non-blocking.

Here is a brief explanation of asynchronous non-blocking: after sending a request to the receiver, the sender can continue with other work without waiting for a response. If the IO operation the receiver performs while handling the request cannot produce a result immediately, the receiver does not wait either; it returns at once to do something else. When the IO operation completes, the receiver is notified of its status and result and then responds to the sender.
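
At the socket level, the non-blocking half of this behaviour looks roughly like the following sketch (assumptions: fd is an already-connected socket, and the caller has its own way of coming back to try again later):

```c
/* Sketch: put a socket into non-blocking mode and read from it without
 * ever suspending the caller; EAGAIN/EWOULDBLOCK means "not ready yet". */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* fd is assumed to be an already-connected socket descriptor. */
void set_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Returns bytes read, 0 on EOF, or -1 if no data is ready right now. */
ssize_t try_read(int fd, char *buf, size_t len) {
    ssize_t n = read(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -1;                               /* not ready: do other work, try again later */
    return n;
}
```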

So how does the Nginx server itself handle requests?

A significant advantage of the Nginx server is that it can handle a large number of concurrent requests. It does so by combining a multi-process mechanism (Master-Worker) with an asynchronous mechanism, where the asynchronous mechanism uses the asynchronous non-blocking mode.

Each worker process uses the asynchronous non-blocking mode and can handle multiple client requests. When a worker process receives a request from a client, it issues an IO call to process it; if the result is not immediately available, it moves on to other requests. The client does not have to wait for the response and can do other things in the meantime. When the IO call returns, it notifies the worker process; the worker then temporarily sets aside its current work and responds to that client's request.

That is:

Nginx processes requests in an asynchronous, non-blocking way at the system level, that is, at the level of read and write events. In the so-called blocking mode, if the request event is not ready, the thread can only wait until it becomes ready. In non-blocking mode, if the event is not ready, the call immediately returns EAGAIN to tell you it is not ready yet; in the meantime you can do something else and check back later to see whether the event has become ready.

Asynchrony can be understood as processing the events that are ready in a loop, which avoids unnecessary waste of resources; as the number of concurrent connections grows, it only consumes more memory.

III. The event-driven model of the Nginx server

We saw above that after a worker process issues an IO call, it moves on to other work; when the IO call completes, the worker is notified. But how exactly is the worker process told the status of the IO call?

Generally, there are two ways to solve this problem:

(1) While doing other work, the worker process checks the status of the IO at intervals; if it has finished, the worker responds to the client, otherwise it keeps working.

(2) The IO call actively notifies the worker process when it completes.

The second approach is of course the better one, and system calls such as select/poll/epoll exist to support it. These system calls are also commonly referred to as event-driven models. They provide a mechanism that lets a process handle multiple concurrent requests at once without worrying about the specific state of each IO call; the IO calls are managed entirely by the event-driven model.

Event-driven Model in Nginx

Nginx uses an event-driven processing library (IO multiplexing); the most commonly used models are select, poll, and epoll.

A detailed explanation of the three models can be found here: https://segmentfault.com/a/1190000003063859
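
As a concrete taste of the event-driven pattern, here is a minimal epoll-based loop (a generic sketch, not Nginx's actual event module; listen_fd is assumed to be a non-blocking listening socket prepared as in the earlier examples):

```c
/* Minimal epoll event loop: one process waits on many descriptors and only
 * touches the ones the kernel reports as ready. */
#include <errno.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void event_loop(int listen_fd) {                 /* listen_fd: non-blocking listener */
    int epfd = epoll_create1(0);

    struct epoll_event ev = {0}, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);   /* wait for ready events */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {               /* new connection: start watching it too */
                int cfd = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = {0};
                cev.events = EPOLLIN;
                cev.data.fd = cfd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &cev);
            } else {                             /* ready client: read without blocking */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r > 0) {
                    write(fd, buf, r);           /* echo back */
                } else if (r == 0 || errno != EAGAIN) {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);                   /* peer closed or real error */
                }
            }
        }
    }
}
```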

IV. Introduction to the architecture

With the brief explanation above, plus a general understanding of server architecture, we can form a basic picture of Nginx. I hope it will be helpful for later source code analysis.

Roughly speaking, the architecture of Nginx is like this:

1. After Nginx starts, a master process is created; after performing a series of initialization steps, it spawns one or more worker processes (see the sketch after this list).

2. When a client requests a dynamic site, the Nginx server also communicates with back-end servers: it forwards the received web request to a back-end server through its proxy, and the back-end server processes the request and produces the data.

3. To respond to requests more efficiently and reduce network pressure, Nginx uses a caching mechanism that stores historical response data locally and ensures fast access to the cached files.
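
A highly simplified sketch of step 1, a master pre-forking a fixed set of long-lived workers (it omits signals, channels, and everything else a real master does; WORKER_COUNT is an arbitrary example value):

```c
/* Sketch of a pre-forking master: start N long-lived workers up front,
 * then stay resident to supervise them. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKER_COUNT 4                           /* arbitrary example value */

static void worker_loop(void) {
    for (;;) pause();                            /* a real worker would run an event loop here */
}

int main(void) {
    for (int i = 0; i < WORKER_COUNT; i++) {
        pid_t pid = fork();
        if (pid == 0) {                          /* worker process */
            worker_loop();
            _exit(0);
        }
        printf("master %d started worker %d\n", (int)getpid(), (int)pid);
    }
    for (;;) {                                   /* master stays around and reaps workers */
        pid_t dead = wait(NULL);
        if (dead > 0) printf("worker %d exited\n", (int)dead);
    }
}
```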

## Worker process ##

The main tasks of a worker process are as follows:

Receive client requests

Pass the request to each functional module in turn for filtering

Issue IO calls to obtain response data

Communicate with the back-end server and receive its processing results

Cache data

Respond to client requests

## Process interaction ##

When the Master-Worker model is used, the Nginx server involves interaction between the master process and the worker processes, and interaction among the worker processes themselves. Both kinds of interaction rely on the pipe (channel) mechanism.

1. Master-Worker interaction

This pipe differs from an ordinary pipe: it is a one-way channel from the master process to a worker process, carrying the instructions issued by the master, the worker process ID, and so on. Meanwhile, the master process communicates with the outside world through signals.
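
The idea of a one-way channel carrying small command records can be sketched with an ordinary pipe (a conceptual illustration only; the message layout below is hypothetical and is not Nginx's actual channel code):

```c
/* Conceptual sketch of a one-way master-to-worker channel built on a pipe:
 * the master writes small command records, the worker reads and acts on them. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

struct channel_msg {                             /* hypothetical message layout */
    int   command;                               /* e.g. 1 = reopen logs, 2 = quit */
    pid_t target_pid;                            /* worker the command concerns */
};

int main(void) {
    int chan[2];
    pipe(chan);                                  /* chan[0]: read end, chan[1]: write end */

    pid_t pid = fork();
    if (pid == 0) {                              /* worker: only reads */
        close(chan[1]);
        struct channel_msg msg;
        while (read(chan[0], &msg, sizeof(msg)) == sizeof(msg))
            printf("worker %d got command %d about pid %d\n",
                   (int)getpid(), msg.command, (int)msg.target_pid);
        return 0;
    }

    close(chan[0]);                              /* master: only writes */
    struct channel_msg msg = { 2, pid };         /* tell the worker to quit */
    write(chan[1], &msg, sizeof(msg));
    close(chan[1]);                              /* EOF ends the worker's read loop */
    wait(NULL);
    return 0;
}
```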

2. Worker-Worker interaction

This interaction is basically the same as Master-Worker interaction, but it goes through the master process. Worker processes are isolated from one another, so when worker W1 needs to send an instruction to worker W2, it first finds W2's process ID and then writes the instruction into the channel that points to W2. W2 receives the message and takes the corresponding action.

Thank you for reading. That is the content of "What is the web request handling mechanism of Nginx?". After studying this article you should have a deeper understanding of how Nginx handles web requests; the specifics still need to be verified in practice.
