2025-03-29 Update, from SLTechnology News&Howtos
This article introduces Nginx and walks through how to use it to achieve load balancing.
Preface
Recently, while deploying a project, I needed load balancing. Interestingly, almost everything an online search turns up is a configuration file similar to the following:
upstream localhost {
    server 127.0.0.1:8080 weight=1;
    server 127.0.0.1:9090 weight=1;
}

server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://localhost;
        index index.html index.htm index.jsp;
    }
}
So I decided to look into the inner workings of Nginx. This post mainly covers how Nginx implements reverse proxying and how to use Nginx's load-balancing parameters.
I. Forward proxy and reverse proxy
A forward proxy acts on behalf of the client, and the client is aware of it. For example, to reach the external network you might use VPN software, in which you choose which server to connect through.
A reverse proxy acts on behalf of the server and is invisible to the user. When a client sends a request to a port on the server, Nginx listens on that port and forwards the request to one of several backend servers. Take the configuration file above: when you enter http://localhost:80/ in the address bar (port 80 is the default if none is given; it is written explicitly here for clarity), Nginx receives the request on port 80 and looks up the matching location block. As the configuration shows, the request is then forwarded to a different port. All of this happens on the server side and is invisible to the user.
The reverse proxy tool we most often use on the server side is Nginx.
II. Basic internal structure of Nginx
After startup, Nginx runs in the background as a daemon, with one master process and multiple worker processes.
Master process: manages the worker processes. It receives signals from the outside world, forwards signals to the worker processes, and monitors their running state; when a worker process exits abnormally, the master automatically starts a new one.
Worker processes: handle the actual network events. The worker processes are peers: they compete equally for client requests and are independent of one another. A request is handled entirely within one worker process; a worker cannot process another worker's requests. The number of worker processes is configurable; it is usually set to the number of CPU cores on the machine, or simply to worker_processes auto.
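As a sketch, the process-related settings just described might appear in nginx.conf like this (the worker_connections value is an arbitrary assumption):

```nginx
# Run one worker per CPU core; Nginx detects the core count itself.
worker_processes auto;

events {
    # Maximum number of simultaneous connections each worker may hold.
    worker_connections 1024;
}
```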
So the basic architecture of Nginx is as follows:
When we run ./nginx -s reload, nginx is restarted; ./nginx -s stop halts it. How is this done? Executing the command starts a new nginx process, which parses the reload argument and understands that our intent is to make nginx reload its configuration file, so it sends a signal to the master process. On receiving the signal, the master reloads the configuration file, starts new worker processes, and signals all the old worker processes that they can retire with honor. The new workers begin accepting new requests, while the old workers, once signaled by the master, stop accepting new requests and exit after all requests still in flight in the process have been completed. This is why restarting Nginx with the above command does not interrupt service.
III. How Nginx handles client requests
First, a note on the architecture diagram above: each worker process is forked from the master process. The master first creates the socket to be listened on (listenfd, the file descriptor of the listening socket, used to accept client connections) and then forks the worker processes. When a new connection arrives, the listenfd of every worker process becomes readable. To ensure that only one process handles the connection, all workers contend for a mutex, accept_mutex, before registering a read event on listenfd; the process that wins the mutex registers the listenfd read event and calls accept in that event to accept the connection.
In Nginx, worker processes are equal, and each has the same opportunity to handle a request. When Nginx listens on port 80 and a client connection arrives, any of the processes may end up handling it, which is why every worker races to register the listenfd read event. After accepting the connection, the winning worker reads the request, parses it, processes it, generates the response data, returns it to the client, and finally closes the connection; that is one complete request. Note again that a request is handled entirely by, and only within, a single worker process.
The following two flow charts can help us understand
IV. How Nginx handles events and achieves high concurrency
Internally, Nginx handles requests in an asynchronous, non-blocking way, which allows it to process thousands of requests concurrently.
Asynchronous, non-blocking: when a network request arrives, we do not wait on it before performing subsequent operations. The request itself is asynchronous: the caller can continue with other work before the result is available. Non-blocking means that even when the current process/thread has not yet received the result of a call, its subsequent operations are not held up. Note that asynchronous and non-blocking describe different things.
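In the configuration, the event model is selected in the events block. A minimal sketch (epoll is Linux-specific, and Nginx normally picks the best available method by itself, so the use directive is optional; the connection count is an assumption):

```nginx
events {
    # Use the epoll event-notification mechanism (Linux); normally
    # Nginx selects the most efficient method for the platform itself.
    use epoll;
    worker_connections 10240;
}
```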
V. Nginx load-balancing algorithms and parameters
Round robin (default): requests are distributed to the backend servers in turn. This suits setups where the backend machines have similar performance. If a server goes down, it can be removed from the server list automatically.
weight: requests are distributed to servers according to their weights, which can be understood as proportional distribution: higher-performance servers receive more requests, lower-performance ones fewer.
ip_hash: the request is sent to a backend server chosen by the hash of the requester's IP, guaranteeing that requests from the same IP are always forwarded to the same server, which solves session-persistence problems.
upstream localhost {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}
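For comparison, a weighted setup is just a matter of tagging each server; a sketch, where the ports and weights are assumptions:

```nginx
upstream localhost {
    # Roughly two of every three requests go to the first server.
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:9090 weight=1;
}
```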
These are the three most basic algorithms; load balancing can also be tuned by setting additional per-server parameters.
upstream localhost {
    ip_hash;
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070;
}
The most common per-server parameters are: down (the server is marked as offline and does not participate in load balancing), weight (the larger the value, the greater the share of requests), max_fails (the number of allowed failed attempts, default 1), fail_timeout (the pause time after max_fails failures), and backup (the server only receives requests when the non-backup servers are unavailable).
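A sketch putting several of these parameters together (the addresses and values are illustrative assumptions):

```nginx
upstream localhost {
    # Allow 3 failures within 30s before taking the server out of rotation.
    server 127.0.0.1:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9090 weight=1;
    # Only used when the servers above are unavailable.
    server 127.0.0.1:7070 backup;
}
```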