2025-01-28 Update. From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/02 report
This article introduces how the Nginx core architecture supports high concurrency. Many people run into questions about this in practice, so let's work through the design together. I hope you read carefully and learn something useful!
Preface
Nginx is a well-known, widely used high-performance server. Its performance stems from an excellent architectural design built on five main points: modular design, an event-driven architecture, multi-phase asynchronous processing of requests, a management (master) process with multiple worker processes, and memory pool design. Each is explained in turn below.
Modular design
A highly modular design is the architectural foundation of Nginx: apart from a small amount of core code, everything in Nginx is a module.
All modules are organized hierarchically and by category. Nginx officially defines five types of module: the core module, the configuration module, the event module, HTTP modules, and mail modules. Their relationship is illustrated in the figure below:
Of these five types, the configuration module and core module are closely tied to the Nginx framework itself. The event module is the foundation for the HTTP and mail modules. HTTP modules and mail modules have a similar "status" in that both focus on the application level.
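As a rough illustration (this is a sketch, not Nginx's actual C data structures), the taxonomy above can be modeled as a registry that tags each module with one of the five official types; the module names used here are real Nginx modules, but the registry itself is hypothetical:

```python
# Hypothetical sketch of the five-type module taxonomy described above.
MODULE_TYPES = ("core", "conf", "event", "http", "mail")

registry = {}

def register_module(name, mtype):
    """Record a module under one of the five official Nginx module types."""
    if mtype not in MODULE_TYPES:
        raise ValueError(f"unknown module type: {mtype}")
    registry[name] = mtype

register_module("ngx_core_module", "core")
register_module("ngx_events_module", "event")
register_module("ngx_http_core_module", "http")

# List all modules at the HTTP (application) layer.
http_modules = [name for name, mtype in registry.items() if mtype == "http"]
print(http_modules)  # ['ngx_http_core_module']
```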
Event-driven architecture
In an event-driven architecture, simply put, event sources generate events, an event collector gathers and distributes them, and event handlers process them (each handler must first register with the collector the events it wants to handle).
In an Nginx server, events are generally generated by the network card and the disk. Nginx's event module is responsible for collecting and distributing events, and any module can be an event consumer: a module registers the event types it is interested in with the event module, and when such an event occurs, the event module dispatches it to that module for processing.
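The register/collect/dispatch cycle can be sketched in a few lines using Python's `selectors` module, which wraps the same kernel mechanisms (epoll, kqueue) that Nginx's event module uses. This is a minimal illustration of the pattern, not Nginx code:

```python
# Sketch of the event-driven cycle: a consumer registers interest in an
# event type, the collector gathers ready events, and the dispatcher
# invokes the registered consumer for each one.
import selectors
import socket

sel = selectors.DefaultSelector()   # the "event collector" (epoll/kqueue)

def handle_readable(conn):
    # "Event consumer": invoked by the dispatcher when data arrives.
    return conn.recv(1024)

# Simulate a network event source with a connected socket pair.
a, b = socket.socketpair()

# A module registers the event type it cares about (readability of b).
sel.register(b, selectors.EVENT_READ, handle_readable)

a.sendall(b"hello")                 # the event source produces an event
events = sel.select(timeout=1)      # the collector gathers ready events
for key, _mask in events:           # the dispatcher invokes each consumer
    result = key.data(key.fileobj)
    print(result)                   # b'hello'

sel.unregister(b)
a.close()
b.close()
```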
In traditional web servers (such as Apache), "event-driven" is often limited to the establishment and closing of TCP connections. Once a connection is established, none of the operations before it closes is event-driven; processing degenerates into a batch mode in which each operation runs in sequence. As a result, every request holds system resources from the moment its connection is established until the connection is closed. This pattern of requests occupying server resources while waiting to be processed wastes a significant amount of server capacity.
As shown in the figure below, a traditional web server treats a process or thread as the event consumer. Once a process starts handling an event generated by a request, that process is occupied by the request until processing ends. A typical example is Apache's synchronous, blocking multi-process mode.
A simple model of how traditional web servers handle events (rectangles represent processes):
Nginx's event-driven architecture processes requests differently from traditional web servers. It does not use processes or threads as event consumers; an event consumer can only be a module, and only the event collector and distributor are entitled to occupy process resources. When the distributor dispatches an event, it invokes the consuming module using the process resources it currently holds. The figure below shows five different events: after the collector gathers them in sequence, it uses the current process to distribute them, invoking the corresponding event consumers in order.
A simple model of Nginx event processing:
As the figure shows, each event consumer is invoked by the event distributor process only briefly while handling a request event. This design improves network performance and perceived request latency: the events generated by each user request are responded to promptly, and the server's overall throughput rises because events are handled in time. It also imposes a requirement: no event consumer may block, or it would occupy the distributor process too long and delay responses to other events. Nginx is non-blocking because its modules satisfy this requirement.
Multi-phase asynchronous processing of requests
Multi-phase asynchronous processing of requests goes hand in hand with the event-driven architecture: it can only be implemented on top of one. The idea is to divide the processing of a request into multiple phases according to how events are triggered, so that each phase can be triggered by the event collector and distributor.
When processing an HTTP request for a static file, the phases and the event that triggers each phase are as follows:
In this example, the request is divided into roughly seven phases, and the phases can recur: a single static-resource download may be broken into hundreds or thousands of the phases listed above, due to factors such as a large amount of requested data or an unstable network.
Asynchronous processing and multiple phases are complementary: only when a request is divided into phases is asynchronous processing possible. When an event is dispatched to an event consumer, the consumer handles just that one phase of the request. When can the next phase run? Only when the kernel gives notice: when the next event occurs, an event dispatcher such as epoll is notified and invokes the event consumer again.
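The phase-by-phase structure can be sketched as a tiny state machine. The phase names below are illustrative (a simplification of the seven phases mentioned above, not Nginx's actual phase list), and the loop stands in for the kernel notifications that drive a real dispatcher:

```python
# Sketch of multi-phase request processing: each phase does one small
# non-blocking step, then control returns to the dispatcher, which invokes
# the next phase when the next kernel event (e.g. from epoll) arrives.
PHASES = []

def phase(fn):
    """Register a function as one phase of request processing."""
    PHASES.append(fn)
    return fn

@phase
def accept_connection(ctx):
    ctx["log"].append("accept")

@phase
def read_request(ctx):
    ctx["log"].append("read")

@phase
def open_file(ctx):
    ctx["log"].append("open")

@phase
def send_response(ctx):
    ctx["log"].append("send")

def dispatcher(ctx):
    # In Nginx each step waits for an epoll notification; here we simply
    # walk the phases in order to show that no phase blocks on another.
    for fn in PHASES:
        fn(ctx)

ctx = {"log": []}
dispatcher(ctx)
print(ctx["log"])  # ['accept', 'read', 'open', 'send']
```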
Management process and multi-worker process design
When Nginx starts, it runs one master process and multiple worker processes. The master process manages the workers: it receives signals from the outside, sends signals to each worker, monitors the workers' running state, and starts worker processes. The worker processes handle request events from clients. The workers are peers: they compete equally for client requests and are independent of one another, and a given request is handled by exactly one worker. The number of workers is configurable and is generally set to match the machine's CPU core count, for reasons tied to the event-processing model. Nginx's process model can be represented by the figure below:
Viewing the Nginx processes on a server (for example with `ps -ef | grep nginx`) shows one master process and several worker processes:
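The master/worker layout can be sketched with `os.fork` (an assumption: a Unix-like system, since `fork` is POSIX-only). This is a toy illustration of the process model, not Nginx code; a real master would also respawn any worker that exits abnormally:

```python
# Sketch of the master/worker model: the master forks one worker per CPU
# core and then supervises them.
import os

NUM_WORKERS = os.cpu_count() or 1   # Nginx convention: workers == CPU cores
children = []
for _ in range(NUM_WORKERS):
    pid = os.fork()
    if pid == 0:
        # Worker process: a real Nginx worker runs its event loop here,
        # competing with its peers to accept client connections.
        os._exit(0)
    children.append(pid)

# Master process: wait for the workers; real Nginx also monitors them and
# starts a replacement when one dies.
statuses = [os.waitpid(pid, 0)[1] for pid in children]
print(f"master {os.getpid()} supervised {len(children)} workers")
```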
This design offers the following advantages:
Take advantage of concurrent processing capabilities of multicore systems
Modern operating systems support multicore CPU architectures, allowing different processes to run on different cores. Because all Nginx worker processes are completely equal, they can spread across the cores, which improves network performance and reduces request latency.
Load balancing
Load balancing is achieved through inter-process communication among the worker processes: when a request arrives, it is more likely to be taken by a lightly loaded worker. This, too, improves network performance to some extent and reduces request latency.
The management process monitors the workers' status and manages their behavior
The management process consumes few system resources; it only starts, stops, monitors, and otherwise controls the worker processes. First, this improves the reliability of the system: when a worker fails, the management process can start a new worker, avoiding performance degradation. Second, the management process supports upgrading the program and modifying configuration while the Nginx service is running, which makes dynamic scalability and dynamic customization easy to achieve.
Memory pool design
To avoid memory fragmentation, reduce the number of memory requests made to the operating system, and lower the development complexity of each module, Nginx uses a simple memory pool. Its main job is to merge many small allocations into a single request to the system, which greatly reduces CPU consumption and memory fragmentation.
Accordingly, each request usually has its own simple, independent memory pool (for example, a pool allocated per TCP connection). When the request ends, the entire pool is destroyed, returning all of its memory to the system at once. This greatly simplifies module development, because a module never needs to worry about freeing what it allocated, and the reduced number of allocations lowers request latency. At the same time, by reducing memory fragmentation, it raises effective memory utilization and the number of concurrent connections the system can handle, thus improving network performance.
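The destroy-everything-at-once behavior can be illustrated with a toy per-request pool. This is only an analogy in Python (which manages memory itself), not Nginx's `ngx_pool_t` implementation, and the class and buffer sizes are invented for illustration:

```python
# Toy per-request memory pool: every allocation made while serving a
# request is recorded in the pool, and destroying the pool at the end of
# the request releases everything in one operation, so individual modules
# never free memory themselves.
class RequestPool:
    def __init__(self):
        self._allocations = []

    def alloc(self, size):
        buf = bytearray(size)          # one allocation charged to this request
        self._allocations.append(buf)
        return buf

    def destroy(self):
        # Release every buffer at once, mirroring how Nginx returns a
        # request's whole pool when the request (or TCP connection) ends.
        count = len(self._allocations)
        self._allocations.clear()
        return count

pool = RequestPool()
pool.alloc(256)    # e.g. a header buffer
pool.alloc(4096)   # e.g. a body buffer
freed = pool.destroy()
print(freed)  # 2
```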
"Nginx core architecture is how to support high concurrency" content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!