
What are the advantages of Nginx's process model


This article focuses on the advantages of Nginx's process model; interested readers may wish to take a look. The material introduced here is simple, fast, and practical, so let's walk through what those advantages are.

Nginx is famous for its high performance, stability, rich feature set, simple configuration, and low resource consumption. This article analyzes, from the underlying principles, why Nginx is so fast.

During normal operation, an Nginx server runs as:

Multiple processes: one Master process, multiple Worker processes.

Master process: manages the Worker processes. External interface: receives external operations (signals). Internal forwarding: forwards external operations to the Workers via signals. Monitoring: watches the running state of the Worker processes and automatically restarts a Worker that terminates abnormally.

Worker processes: all Worker processes are equal. Actual processing: network requests are handled by the Worker processes. Number of Worker processes: configured in nginx.conf, generally set to the number of CPU cores to make full use of the CPU while avoiding excess processes that compete for CPU and add context-switching overhead.

Think about:

When a client connects to Nginx, is the request received and forwarded by the Master process?

Which Worker process is selected to handle the request? Does the result of processing the request still pass through the Master process?

When Nginx starts, the Master process loads the configuration file.

The Master process initializes the listening Socket.

The Master process forks multiple Worker processes.

The Worker processes compete for new connections; the winning Worker completes the three-way handshake, establishes the Socket connection, and processes the request.
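As a rough illustration of this startup sequence, here is a minimal sketch in C (not Nginx's actual source code; the port, worker count, and canned response are made up for the example). The key point is that the listening Socket is created before the fork, so every Worker inherits it and can compete to accept connections:

/* Minimal master/worker sketch (illustrative only, not Nginx source). */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKERS 4                 /* stands in for worker_processes */

int main(void) {
    /* Master: initialize the listening Socket before forking. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    /* Master: fork the Workers; each inherits lfd. */
    for (int i = 0; i < WORKERS; i++) {
        if (fork() == 0) {                           /* Worker process */
            char buf[4096];
            for (;;) {
                int cfd = accept(lfd, NULL, NULL);   /* Workers compete here */
                if (cfd < 0) continue;
                read(cfd, buf, sizeof(buf));         /* read the request */
                const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
                write(cfd, resp, strlen(resp));      /* send the response */
                close(cfd);
            }
        }
    }

    /* Master: a real master would monitor and restart Workers; here it just waits. */
    while (wait(NULL) > 0) {}
    return 0;
}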

Why does Nginx have high performance and support high concurrency?

Nginx adopts a multi-process + asynchronous non-blocking model (I/O multiplexing with epoll).

The complete process of a request: establish the connection → read the request → parse the request → process the request → send the response.

At the underlying layer, this whole process corresponds to read and write events on Sockets.
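As a minimal sketch of how those Socket read/write events can be handled in a single thread (assuming Linux epoll; worker_loop is an illustrative name, not an Nginx function), one Worker can multiplex the listening socket and all connection sockets in one loop:

/* Sketch of a Worker's event loop (illustrative only, not Nginx source). */
#include <fcntl.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void worker_loop(int lfd) {                    /* lfd: inherited listening socket */
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);    /* watch for new connections */

    struct epoll_event ready[64];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1); /* block until some Socket is ready */
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == lfd) {
                /* "establish a connection": accept and register the new Socket */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd < 0) continue;
                fcntl(cfd, F_SETFL, O_NONBLOCK);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = cfd };
                epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
            } else {
                /* "read/parse/process/respond": a read event followed by a write */
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) { close(fd); continue; }
                const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
                write(fd, resp, strlen(resp));
                close(fd);
            }
        }
    }
}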

Request: an HTTP request inside Nginx.

The basic workflow of an HTTP Web Server:

Receive the request: read the request line and request headers line by line, then read the request body if the headers indicate one is present.

Process the request.

Return the response: based on the processing result, generate the corresponding HTTP response (response line, response headers, response body).

Nginx follows the same routine; the overall flow is the same:

The modules of Nginx can basically be divided into the following types according to their functions:

① event module: builds an event-handling framework that is independent of the operating system and provides handling for specific events. It includes ngx_events_module, ngx_event_core_module, ngx_epoll_module, and so on.

Which event handling module Nginx uses depends on the specific operating system and compilation options.

② phase handler: modules of this type are also simply called handler modules. They are mainly responsible for processing the client request and generating the content of the response. For example, ngx_http_static_module handles the client's static page requests and prepares the corresponding disk file as the response content.

③ output filter: also known as a filter module, is mainly responsible for processing the output, and it can modify that output.

For example, it can append a predefined footer to every output HTML page or rewrite the URLs of output images.

④ upstream: the upstream module implements the reverse-proxy function. It forwards the real request to a back-end server, reads the response from that server, and sends it back to the client.

The upstream module is a special kind of handler; it does not generate the response content itself but reads it from the back-end server.

⑤ load-balancer: a load-balancing module that implements a specific algorithm to pick one of many back-end servers as the target for a given request.

Nginx vs Apache

Nginx:

I/O multiplexing: epoll (kqueue on FreeBSD)

High performance

High concurrency

Takes up fewer system resources

Apache:

Blocking + multiprocess / multithreading

More stable, fewer bugs

Richer modules

Reference article:

http://www.oschina.net/transl...
https://www.zhihu.com/question/19571087

Maximum number of Nginx connections

Basic background:

Nginx is a multi-process model, and the Worker process is used to process requests.

The number of connections (file descriptors, fd) a single process can hold has an upper limit (nofile), viewable with ulimit -n.

The maximum number of connections for a single Worker process is configured in Nginx with worker_connections; its upper limit is nofile.

Number of Worker processes configured on Nginx: worker_processes.

Therefore, the maximum number of connections for Nginx:

Maximum number of connections for Nginx: number of Worker processes x maximum number of connections for a single Worker process.

Above is the maximum number of connections when Nginx is used as a general-purpose server.

The maximum number of connections that can be served when Nginx is used as a reverse proxy server: (number of Worker processes x maximum number of connections for a single Worker process) / 2.

When Nginx acts as a reverse proxy, each request holds one connection to the Client and one to the back-end Web Server, occupying 2 connections.
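A quick worked example with hypothetical values, say worker_processes 4 and worker_connections 1024:

As a general-purpose server: 4 x 1024 = 4096 connections at most.
As a reverse proxy: 4 x 1024 / 2 = 2048 client connections at most, since each proxied request holds one connection to the client and one to the back-end server.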

Think about:

Does each open Socket take up one fd?

Why is there a limit to the number of fd that a process can open?

HTTP request and response

HTTP request:

Request line: method, URI, HTTP version

Request header

Request body

HTTP response:

Response line: HTTP version, status code

Response headers

Response body
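For illustration, a minimal request/response pair showing those parts (example.com and the body are placeholders):

GET /index.html HTTP/1.1      <- request line: method, URI, HTTP version
Host: example.com             <- request headers
                              <- blank line, then the (empty) request body

HTTP/1.1 200 OK               <- response line: HTTP version, status code
Content-Type: text/html       <- response headers
Content-Length: 5

hello                         <- response body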

I/O model

When processing multiple requests, you can use either I/O multiplexing or blocking I/O + multithreading:

I/O multiplexing: a single thread tracks the state of multiple Sockets and reads or writes whichever one is ready.

Blocking I/O + multithreading: create a new service thread for each request.

Where do I/O multiplexing and multithreading each fit?

I/O multiplexing: offers no speed advantage for processing a single connection.

High concurrency: a single thread handles a large number of concurrent requests, reducing context-switching overhead and avoiding concurrency (locking) problems, so it can serve more requests.

Consumes fewer system resources (no thread-scheduling overhead).

Suits long connections (in multithreaded mode, long connections easily lead to too many threads and frequent scheduling).

Blocking I/O + multithreading: easy to implement and does not rely on special system calls.

Each thread costs both time and memory.

As the number of threads grows, thread-scheduling overhead grows rapidly.
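For contrast, a minimal sketch of the blocking I/O + thread-per-connection model described above (illustrative only; serve and accept_loop are made-up names): every accepted Socket gets its own service thread, so memory and scheduling cost grow with the number of connections.

/* Blocking I/O + one service thread per connection (illustrative sketch). */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *serve(void *arg) {
    int cfd = *(int *)arg;
    free(arg);
    char buf[4096];
    if (read(cfd, buf, sizeof(buf)) > 0) {           /* blocks until data arrives */
        const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
        write(cfd, resp, strlen(resp));
    }
    close(cfd);
    return NULL;
}

void accept_loop(int lfd) {                          /* lfd: a listening socket */
    for (;;) {
        int *cfd = malloc(sizeof(int));
        *cfd = accept(lfd, NULL, NULL);              /* blocks until a client connects */
        if (*cfd < 0) { free(cfd); continue; }
        pthread_t t;
        pthread_create(&t, NULL, serve, cfd);        /* one new thread per request */
        pthread_detach(t);
    }
}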

The comparison between select/poll and epoll is as follows:

For details, please refer to:

https://www.cnblogs.com/wiess...

The select/poll system calls:

// select system call
int select(int maxfdp, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout);

// poll system call
int poll(struct pollfd fds[], nfds_t nfds, int timeout);

Select:

Queries whether any fd (file descriptor) in the fd_set is ready; a timeout can be set, and the call returns when some fd becomes ready or the timeout expires.

fd_set is a set of bits whose size is a constant fixed when the kernel is compiled, 1024 by default.

Features: limited number of connections, because fd_set can represent only a small number of fds; linear scan, because determining which fds are ready requires traversing the whole fd_set; data copying, because connection readiness information is copied between user space and kernel space.
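A minimal use of select() (an illustrative sketch; wait_and_scan is a made-up helper) showing the three features above: the fd_set is bounded by FD_SETSIZE (1024 by default), it is copied into the kernel on every call, and the caller must scan all descriptors to find the ready ones.

/* select(): bounded fd_set, copied into the kernel each call, linear scan afterwards. */
#include <sys/select.h>
#include <unistd.h>

int wait_and_scan(int fds[], int nfds) {         /* fds: sockets to watch */
    fd_set readfds;
    FD_ZERO(&readfds);
    int maxfd = -1;
    for (int i = 0; i < nfds; i++) {             /* every fd must be < FD_SETSIZE (1024) */
        FD_SET(fds[i], &readfds);
        if (fds[i] > maxfd) maxfd = fds[i];
    }

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };   /* optional timeout */
    int ready = select(maxfd + 1, &readfds, NULL, NULL, &tv);
    if (ready <= 0) return ready;                /* 0 = timeout, -1 = error */

    for (int i = 0; i < nfds; i++) {             /* linear scan: which fds are ready? */
        if (FD_ISSET(fds[i], &readfds)) {
            char buf[4096];
            read(fds[i], buf, sizeof(buf));
        }
    }
    return ready;
}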

Poll:

Solves the connection-count limit: poll replaces select's fd_set with a pollfd array, removing the restriction on how few fds can be watched.

Data copying remains: readiness information is still copied between user space and kernel space.

Epoll, event-driven:

Event mechanism: avoids linear scanning by registering a listening event for each fd; when an fd becomes ready, it is added to a ready list.

Number of fds: no fixed limit in epoll itself (only the OS-level limit on how many fds a single process may open).
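The epoll counterpart (an illustrative sketch; epoll_setup and wait_ready are made-up helpers): each fd is registered once with epoll_ctl(), and epoll_wait() returns only the descriptors that are actually ready, so there is no per-call copy of the whole set and no linear scan.

/* epoll: register interest once, get back only the ready descriptors. */
#include <sys/epoll.h>

int epoll_setup(int fds[], int nfds) {           /* fds: sockets to watch */
    int ep = epoll_create1(0);
    for (int i = 0; i < nfds; i++) {             /* register each fd once */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
        epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
    }
    return ep;
}

int wait_ready(int ep) {
    struct epoll_event ready[64];
    int n = epoll_wait(ep, ready, 64, -1);       /* only ready fds come back */
    for (int i = 0; i < n; i++) {
        int fd = ready[i].data.fd;               /* handle this fd directly; no scan of all fds */
        (void)fd;                                /* a real server would read/write here */
    }
    return n;
}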

Select, poll, epoll:

All three are mechanisms for I/O multiplexing.

I/O multiplexing is a mechanism for monitoring multiple file descriptors: once a descriptor becomes ready (usually read-ready or write-ready), the program is notified so it can perform the corresponding read or write.

However, select, poll, and epoll are all essentially synchronous I/O: the user process performs the read/write itself (the copy from kernel space to user space) and is blocked while doing so. With asynchronous I/O, the user process does not do the read/write; the kernel is responsible for copying the data from kernel space to user space.

At this point, I believe you have a deeper understanding of the advantages of Nginx's process model; you might as well try it out in practice. For more related content, follow us and keep learning!
