I. Load balancing of Nginx
In a server cluster, Nginx plays the role of a proxy server (that is, a reverse proxy): to avoid putting excessive pressure on any single server, it forwards user requests to different back-end servers. For more information, please see my other blog post.
II. Nginx load balancing strategy
Load balancing is used to select one server from the list of back-end servers defined in the upstream block to handle each user request. A most basic upstream block looks like the following; each server directive inside the block defines one back-end server:
# dynamic server group
upstream dynamic_zuoyu {
    server localhost:8080;  # tomcat 7.0
    server localhost:8081;  # tomcat 8.0
    server localhost:8082;  # tomcat 8.5
    server localhost:8083;  # tomcat 9.0
}
After the upstream block is configured, specify which requests should be reverse-proxied to that server list:
# other pages are reverse-proxied to the tomcat containers
location ~ .*$ {
    index index.jsp index.html;
    proxy_pass http://dynamic_zuoyu;
}
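To see how these two pieces fit together, here is a minimal sketch of the relevant part of nginx.conf; the listen port and server_name are assumptions for illustration, not taken from the original setup:

# minimal sketch: upstream group plus the virtual server that uses it
http {
    upstream dynamic_zuoyu {
        server localhost:8080;  # tomcat 7.0
        server localhost:8081;  # tomcat 8.0
        server localhost:8082;  # tomcat 8.5
        server localhost:8083;  # tomcat 9.0
    }

    server {
        listen 80;               # assumed listen port
        server_name localhost;   # assumed server name

        location ~ .*$ {
            index index.jsp index.html;
            proxy_pass http://dynamic_zuoyu;
        }
    }
}

After reloading the configuration (for example with nginx -s reload), repeated requests to the proxy should be spread across the four Tomcat instances in turn.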
This is the most basic load balancing example, but it is not enough to meet real-world needs. Currently, the upstream module of the Nginx server supports six allocation methods:
Load balancing strategies:

polling — the default mode
weight — weighted mode
ip_hash — allocation by client IP
least_conn — least-connections mode
fair (third party) — allocation by response time
url_hash (third party) — allocation by URL
Here, only the load balancing strategies that come with Nginx are described in detail; the third-party ones are only briefly introduced.
1. Polling
Polling is the most basic configuration method; the example above uses it. It is the default load balancing policy of the upstream module: each request is assigned to a different back-end server one by one, in chronological order.
The parameters are as follows:
fail_timeout — used together with max_fails.
max_fails — sets the maximum number of failures allowed within the time window set by fail_timeout. If every request to the server fails within that window, the server is considered down.
fail_time — the length of time for which the server is considered down; the default is 10 seconds.
backup — marks the server as a backup server. Requests are sent to it only when the primary servers are down.
down — marks the server as permanently down.
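As a quick illustration, here is a sketch of how these parameters appear on the server directives; the hosts reuse the placeholders from the examples above, and the specific values are only assumptions:

# illustrative sketch of the server-directive parameters
upstream dynamic_zuoyu {
    server localhost:8080 max_fails=3 fail_timeout=30s;  # considered down for 30s after 3 failures
    server localhost:8081;                               # defaults: max_fails=1, fail_timeout=10s
    server localhost:8082 backup;                        # only used when the primary servers are down
    server localhost:8083 down;                          # permanently marked as unavailable
}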
Note:
In polling, if a server goes down, it is automatically removed from the rotation. The default configuration is the polling policy. This strategy is suitable when the servers have comparable configurations and the services are stateless, short, and fast.
2. weight
Weighted mode specifies the polling probability on top of the polling policy. An example follows:
# dynamic server group
upstream dynamic_zuoyu {
    server localhost:8080 weight=2;  # tomcat 7.0
    server localhost:8081;  # tomcat 8.0
    server localhost:8082 backup;  # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;  # tomcat 9.0
}
In this example, the weight parameter specifies the polling probability; the default value of weight is 1, and the weight is proportional to the share of requests a server receives. Here, tomcat 7.0 is twice as likely to be accessed as the other servers.
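As a quick sanity check of the ratio (assuming the backup server does not take part in normal rotation): the remaining servers have weights 2, 1, and 1, so tomcat 7.0 should receive roughly 2 / (2 + 1 + 1) = 50% of the requests, and each of the other two about 25%.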
Note:
The higher the weight, the more requests the server handles. This parameter can be used in combination with least_conn and ip_hash. This strategy is suitable when the hardware configurations of the servers differ greatly.
3. ip_hash
Specifies that the load balancer allocates requests based on the client IP, which ensures that requests from the same client are always sent to the same server, keeping the session on one machine. In this way, each visitor consistently reaches the same back-end server, which solves the problem of sessions not being shared across servers.
# dynamic server group
upstream dynamic_zuoyu {
    ip_hash;  # ensures that each visitor always reaches the same back-end server
    server localhost:8080 weight=2;  # tomcat 7.0
    server localhost:8081;  # tomcat 8.0
    server localhost:8082;  # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;  # tomcat 9.0
}
Note:
Before nginx version 1.3.1, the weight parameter could not be used together with ip_hash. ip_hash cannot be combined with backup. This strategy is suitable for stateful services, such as those that rely on sessions. When a server needs to be removed, it must be manually marked as down.
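To illustrate the last point, a minimal sketch that takes tomcat 9.0 out of rotation by marking it down rather than deleting its line; the hosts are the same placeholders used above:

# removing a server from an ip_hash group by marking it down
upstream dynamic_zuoyu {
    ip_hash;
    server localhost:8080 weight=2;  # tomcat 7.0
    server localhost:8081;           # tomcat 8.0
    server localhost:8082;           # tomcat 8.5
    server localhost:8083 down;      # tomcat 9.0, manually marked down
}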
4. least_conn
Forwards the request to the back-end server with the fewest active connections. The polling algorithm spreads requests evenly across the back ends so that their load is roughly the same; however, some requests take a long time, which leaves the back end handling them under a higher load. In this case, least_conn can achieve a better load balancing effect.
# dynamic server group
upstream dynamic_zuoyu {
    least_conn;  # forward the request to the back-end server with the fewest connections
    server localhost:8080 weight=2;  # tomcat 7.0
    server localhost:8081;  # tomcat 8.0
    server localhost:8082 backup;  # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;  # tomcat 9.0
}
Note:
This load balancing strategy is suitable when requests have widely varying processing times, which would otherwise overload some servers.
5. Third-party strategy
The implementation of the third-party load balancing strategy requires the installation of third-party plug-ins.
① fair
Requests are allocated according to the response time of each server; servers with shorter response times are given priority.
# dynamic server group
upstream dynamic_zuoyu {
    server localhost:8080;  # tomcat 7.0
    server localhost:8081;  # tomcat 8.0
    server localhost:8082;  # tomcat 8.5
    server localhost:8083;  # tomcat 9.0
    fair;  # give priority to servers with short response times
}
② url_hash
Requests are allocated according to the hash of the accessed URL, so that each URL is directed to the same back-end server; this should be used together with caching. Without it, multiple requests for the same resource may reach different servers, causing unnecessary repeated downloads, a low cache hit rate, and wasted time. With url_hash, requests for the same URL (that is, the same resource) reach the same server, so once the resource has been cached, subsequent requests can be served from the cache.
# dynamic server group
upstream dynamic_zuoyu {
    hash $request_uri;  # direct each url to the same back-end server
    server localhost:8080;  # tomcat 7.0
    server localhost:8081;  # tomcat 8.0
    server localhost:8082;  # tomcat 8.5
    server localhost:8083;  # tomcat 9.0
}
III. Summary
The above covers the configuration of all six load balancing strategies; apart from polling and weighted polling, each is implemented by Nginx with a different algorithm. In practice they should be chosen according to the scenario, and in most cases several strategies are combined to meet actual needs. I hope this is helpful to your study.