Several load balancing algorithms and configurations of Nginx

2025-04-02 Update | SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 report

This article introduces the load balancing algorithms supported by Nginx and how to configure them, with simple, practical examples you can apply directly.

Nginx load balancing (working at layer 7, the application layer) is implemented mainly through the upstream module. By default, Nginx can check the health of backend servers, though this is limited to connection-level (port) detection. When the number of backend servers is small, its load balancing capability is excellent.

Several load balancing algorithms of Nginx:

1. Round robin (default): each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed from rotation, so user access is unaffected.

2. weight: specifies the round-robin weight. The higher the weight, the higher the probability that a server is chosen; this is mainly used when backend servers have uneven performance.

3. ip_hash: each request is assigned according to a hash of the client IP, so that each visitor consistently reaches the same backend server. This effectively solves the session-sharing problem of dynamic web applications.

4. fair (third party): a smarter load balancing algorithm that balances according to page size and load time; that is, it assigns requests based on each backend server's response time, preferring servers with shorter response times. To use this algorithm, you need the third-party upstream_fair module for Nginx.

5. url_hash (third party): requests are assigned according to a hash of the requested URL, so that each URL is directed to the same backend server. This further improves the hit rate of backend cache servers. To use this algorithm, you need Nginx's hash package.
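As a sketch, the built-in algorithms above can be configured in an upstream block like this (the pool names and server addresses are hypothetical examples, not from the configuration discussed later):

```nginx
# Round robin (default) with optional weights:
# the weight=5 server receives about 5x the requests of the weight=1 server.
upstream backend_weighted {
    server 192.168.0.10:80 weight=5;
    server 192.168.0.11:80 weight=1;
}

# ip_hash: pin each client IP to one backend server (session stickiness).
upstream backend_sticky {
    ip_hash;
    server 192.168.0.10:80;
    server 192.168.0.11:80;
}
```

A server block would then reference the chosen pool with `proxy_pass http://backend_weighted;` (or `backend_sticky`).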

In the upstream module, the server directive specifies the IP address and port of each backend server, and can also set the state of each server in load balancer scheduling. The common states are as follows:

1. down: the server temporarily does not participate in load balancing.

2. backup: a reserved backup machine. Requests are sent to it only when all other non-backup machines are faulty or busy, so it has the lowest access pressure.

3. max_fails: the number of failed requests allowed; defaults to 1. Used together with fail_timeout.

4. fail_timeout: the time to suspend the server after max_fails failures; defaults to 10s. (If connections to a server fail max_fails times, Nginx considers the server down, and during the following fail_timeout period it no longer sends requests to that server.)
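The four states above can appear together in one pool; a minimal sketch (addresses are hypothetical):

```nginx
upstream backend_pool {
    # Marked unavailable for 30s after 3 consecutive failed attempts.
    server 192.168.0.10:80 max_fails=3 fail_timeout=30s;

    # Temporarily removed from rotation, e.g. during maintenance.
    server 192.168.0.11:80 down;

    # Receives traffic only when all non-backup servers are unavailable.
    server 192.168.0.12:80 backup;
}
```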

The following is an example load balancing configuration. Only the http block is shown here; other settings are omitted:

http {
    upstream whsirserver {
        server 192.168.0.120:80 weight=5 max_fails=3 fail_timeout=20s;
        server 192.168.0.121:80 weight=1 max_fails=3 fail_timeout=20s;
        server 192.168.0.122:80 weight=3 max_fails=3 fail_timeout=20s;
        server 192.168.0.123:80 weight=4 max_fails=3 fail_timeout=20s;
    }

    server {
        listen      80;
        server_name blog.whsir.com;
        index       index.html index.htm;
        root        /data/www;

        location / {
            proxy_pass http://whsirserver;
            proxy_next_upstream http_500 http_502 error timeout invalid_header;
        }
    }
}

At the start of the configuration, the upstream directive names the load balancer pool whsirserver. This name can be chosen freely and is referenced later by proxy_pass.

The proxy_next_upstream directive defines the failover policy: when a backend node returns a 500 or 502 error, times out, fails to connect, or returns an invalid header, the request is automatically forwarded to another server in the upstream pool, achieving failover.
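To keep failover from retrying indefinitely, later Nginx versions (1.7.5+) also provide directives to cap the retries; a sketch, assuming that version or newer:

```nginx
location / {
    proxy_pass http://whsirserver;
    proxy_next_upstream error timeout http_500 http_502;
    proxy_next_upstream_tries   3;    # give up after trying 3 servers
    proxy_next_upstream_timeout 10s;  # or after 10s of total retry time
}
```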

This concludes the overview of Nginx load balancing algorithms and their configuration. Combining theory with practice is the best way to learn, so try these configurations yourself.
