This article shares the Nginx load balancing algorithms and how to configure them, and I hope it is of some help in practical use. Load balancing is a broad topic with plenty of theory available online; what follows is drawn from accumulated hands-on experience.
1. Nginx load balancing algorithms
1. Polling (default)
Each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed from the rotation, so user access is not affected.
2. Weight (polling weight)
The higher the weight value, the higher the probability that the server is chosen. This is mainly used when the backend servers have uneven performance, or to assign different weights in a master/slave setup so that host resources are used effectively.
3. Ip_hash (source address hashing)
The idea of source address hashing is to take the client's IP address, compute a hash of it, and use that value modulo the size of the server list; the result is the index of the server that will handle the request. With this method, as long as the list of backend servers stays unchanged, clients with the same IP address are always mapped to the same backend server.
4. Fair
A more intelligent load balancing algorithm than weight or ip_hash. The fair algorithm balances the load based on page size and load time; in other words, it distributes requests according to the backend servers' response times and prefers servers that respond faster. Nginx does not support fair out of the box; to use this scheduling algorithm you must install the third-party upstream_fair module.
5. Url_hash
Requests are distributed according to the hash of the requested URL, so that each URL is always directed to the same backend server, which further improves the hit rate of backend cache servers. Nginx does not support url_hash out of the box; to use this scheduling algorithm you must install the third-party hash module for Nginx.
The configuration of each algorithm is shown below.
1. Polling (default)
Each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed from the rotation.
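As a minimal sketch (the addresses below are placeholders, not from the original), a round-robin upstream needs nothing more than the list of servers:
upstream bakend {
    # no balancing directive, so requests are polled across the servers in turn
    server 192.168.0.14:80;
    server 192.168.0.15:80;
}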
2. Weight
Specify the polling probability. The weight is proportional to the access ratio, which is used in the case of uneven performance of the backend server.
For example:
upstream bakend {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
3. Ip_hash
Each request is assigned according to the hash of the client IP, so that each visitor consistently reaches the same backend server, which can solve session persistence problems.
For example:
upstream bakend {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
4. Fair (third party)
Requests are allocated according to the response time of the back-end server, and priority is given to those with short response time.
upstream backend {
    server server1;
    server server2;
    fair;
}
5. Url_hash (third party)
Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same backend server; this is most effective when the backend servers run a cache.
Example: add a hash statement to the upstream block. When hash is used, other parameters such as weight cannot be written on the server lines. hash_method specifies the hash algorithm to use.
upstream backend {
    server squid1:3128;   # 10.0.0.10:7777
    server squid2:3128;   # 10.0.0.11:8888
    hash $request_uri;
    hash_method crc32;
}
2. Nginx load balancing scheduling status
In the Nginx upstream module, you can set the status of each backend server in load balancing scheduling. Commonly used states are:
1. down: the server temporarily does not participate in load balancing.
2. backup: a reserved backup machine. It receives requests only when all other non-backup machines are down or busy, so it carries the lowest load.
3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the period during which failures are counted, and also how long the server is suspended after max_fails failures. max_fails and fail_timeout are used together.
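As a minimal sketch of how these states are attached to individual server lines (the addresses and values below are placeholders, not from the original):
upstream bakend {
    server 10.0.0.11:8080 weight=2 max_fails=2 fail_timeout=30s;  # tolerated failures and suspension window
    server 10.0.0.12:8080 down;    # temporarily excluded from load balancing
    server 10.0.0.13:8080 backup;  # used only when the other servers are down or busy
}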
If Nginx could only proxy a single server, it would not be as popular as it is today. Nginx can be configured to proxy multiple servers and keep the system available when one of them goes down. The configuration steps are as follows:
1. Under the http block, add an upstream block.
upstream linuxidc {
    server 10.0.6.108:7080;
    server 10.0.0.85:8980;
}
2. In the location block under the server block, set proxy_pass to http:// plus the upstream name, that is, http://linuxidc.
location / {
    root html;
    index index.html index.htm;
    proxy_pass http://linuxidc;
}
3. With this, basic load balancing is in place. By default upstream distributes the load by polling: each request is assigned to a different backend server in turn, in chronological order, and if a backend server goes down it is automatically removed. This method is simple and cheap, but its drawbacks are low reliability and uneven load distribution. It is suitable for image server clusters and purely static page server clusters.
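Putting the two steps together, a minimal sketch of the relevant part of nginx.conf might look like the following (the listen port is an assumption; the upstream name and addresses are taken from the steps above):
http {
    upstream linuxidc {
        server 10.0.6.108:7080;
        server 10.0.0.85:8980;
    }

    server {
        listen 80;                       # assumed listening port
        location / {
            root html;
            index index.html index.htm;
            proxy_pass http://linuxidc;  # forward requests to the upstream group
        }
    }
}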
In addition, upstream has other allocation strategies, which are as follows:
Weight
Specifies the polling probability. The weight is proportional to the access ratio and is used when the backend servers have uneven performance. In the example below, 10.0.0.88 receives twice as many requests as 10.0.0.77.
upstream linuxidc {
    server 10.0.0.77 weight=5;
    server 10.0.0.88 weight=10;
}
Ip_hash (by client IP)
Each request is assigned according to the hash of the client IP, so that each visitor consistently reaches the same backend server, which can solve session persistence problems.
upstream favresin {
    ip_hash;
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}
Fair (third party)
Requests are allocated according to the response time of the back-end server, and priority is given to those with short response time. Similar to the weight allocation policy.
upstream favresin {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    fair;
}
Url_hash (third party)
Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same backend server; this is most effective when the backend servers run a cache.
Note: add a hash statement to the upstream block. When hash is used, other parameters such as weight cannot be written on the server lines. hash_method specifies the hash algorithm to use.
upstream resinserver {
    server 10.0.0.10:7777;
    server 10.0.0.11:8888;
    hash $request_uri;
    hash_method crc32;
}
Upstream can also set status values for each device, which have the following meanings:
down: the marked server temporarily does not participate in the load.
weight: defaults to 1. The larger the weight, the larger the share of the load.
max_fails: the number of requests allowed to fail, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is suspended after max_fails failures.
backup: requests go to backup machines only when all other non-backup machines are down or busy, so these machines carry the least load.
upstream bakend {   # defines the IP and state of each load balancing device
    ip_hash;
    server 10.0.0.11:8080 weight=2;
    server 10.0.0.11:6060;
    server 10.0.0.11:7070 backup;
}
That covers the Nginx load balancing algorithms and how to configure them; hopefully it is useful in practice.