2025-01-18 Update From: SLTechnology News&Howtos
This article explains how to configure load balancing in nginx. It walks through each of the available balancing strategies and then covers the upstream server parameters that control their behavior.
Round robin (the default)

By default, nginx distributes requests evenly across all the servers in the cluster.
upstream test {
    server 127.0.0.1:7001;       # equivalent to server 127.0.0.1:7001 weight=1;
    server 150.109.118.85:7001;  # equivalent to server 150.109.118.85:7001 weight=1;
}
server {
    listen 8081;
    server_name localhost;
    location / {
        proxy_pass http://test/;
    }
}
upstream: defines a server cluster. proxy_pass: forwards matched requests to the configured target; for load balancing, the http:// must be followed by the cluster name defined in the upstream block.
Note: the server addresses in an upstream block may contain only a domain name or IP plus an optional port, with no scheme or path; otherwise nginx fails to start with the error nginx: [emerg] invalid host in upstream.
Weighted (weight)

upstream test {
    server 127.0.0.1:7001 weight=2;
    server 150.109.118.85:7001 weight=1;
}
Out of every three requests, two are forwarded to 127.0.0.1:7001 and one to 150.109.118.85:7001 (nginx's smooth weighted round robin interleaves them rather than sending the two in a row).
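The 2:1 distribution above comes from nginx's smooth weighted round-robin algorithm. Here is a minimal Python sketch of that algorithm (the server names are the ones from the example; the function itself is an illustration, not nginx's actual C code):

```python
def smooth_wrr(servers, n):
    """Pick n servers using nginx-style smooth weighted round-robin.

    servers: dict mapping server name -> configured weight.
    Each round, every server's current weight grows by its configured
    weight; the highest current weight wins and is then reduced by the
    total weight, which spreads the picks out smoothly.
    """
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        for name, weight in servers.items():
            current[name] += weight
        winner = max(current, key=current.get)
        current[winner] -= total
        picks.append(winner)
    return picks

# weight=2 vs weight=1: two of every three picks go to the first server,
# and the picks alternate rather than arriving back to back
print(smooth_wrr({"127.0.0.1:7001": 2, "150.109.118.85:7001": 1}, 6))
```

Running it shows the characteristic interleaved pattern rather than two identical picks in a row.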
Least connections (least_conn)
File location: src/http/modules/ngx_http_upstream_least_conn_module.c
nginx forwards each request to the server with the smallest active_connections/weight ratio.
upstream test {
    least_conn;
    server 127.0.0.1:7001 weight=1;
    server 150.109.118.85:7001 weight=1;
}
Ip_hash
File location: src/http/modules/ngx_http_upstream_ip_hash_module.c
nginx computes a hash from the client's IP address; if that hash already maps to a server in the load balancer's table, the request is forwarded straight to that server.
upstream test {
    ip_hash;
    server 127.0.0.1:7001;
    server 150.109.118.85:7001;
}
With the ip_hash strategy, as long as the client's IP address does not change, nginx keeps sending its requests to the same backend server.
Application scenario: chunked file upload. A large file is usually split into fragments that are uploaded separately. With any of the strategies above, fragments of the same file may land on different servers, so the final merge fails. With ip_hash, nginx computes the hash of the client's IP for every fragment and forwards them all to the same server, so the upload can be reassembled.
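The idea behind ip_hash can be sketched in a few lines of Python. Like nginx, the sketch hashes only the first three octets of an IPv4 address, so clients in the same /24 land on the same server; note that nginx uses its own hash function internally, and md5 here is just for illustration:

```python
import hashlib

def ip_hash_pick(client_ip, servers):
    """Pick a server from the client IP, in the spirit of nginx ip_hash.

    Only the first three octets of an IPv4 address are hashed, so
    10.0.0.5 and 10.0.0.99 map to the same server. (nginx's real hash
    function differs; md5 is used here only to keep the sketch short.)
    """
    prefix = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(prefix.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["127.0.0.1:7001", "150.109.118.85:7001"]
# the same client always gets the same backend, so every chunk of an
# upload arrives at one server; even clients in the same /24 agree
assert ip_hash_pick("203.0.113.7", servers) == ip_hash_pick("203.0.113.7", servers)
assert ip_hash_pick("203.0.113.7", servers) == ip_hash_pick("203.0.113.200", servers)
```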
Hash
File location: src/http/modules/ngx_http_upstream_hash_module.c
The hash can be computed from $remote_addr (the client IP; judging by test results this can directly replace ip_hash), $request_uri (the request URI), or $args (the request parameters). The examples below use $request_uri; the other two work the same way.
nginx computes a hash from the request URI and forwards the request to a server; later requests that produce the same hash are forwarded to the same server.
What happens when a server in the cluster goes down: suppose request R1 hits server a and R2 hits server b. When server a goes down, R1's mapping to a becomes invalid and R1 is reassigned to server b. Even after a recovers, R1 continues to be assigned to b.
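The remapping described above can be seen with a plain modulo hash: when a server drops out of the list, every key that hashed to it is silently moved to a survivor. A hypothetical sketch (the URI and server names are illustrative):

```python
import zlib

def pick(uri, servers):
    """Map a request URI to a server with a simple modulo hash.
    (nginx's hash module is more elaborate; this shows the principle.)"""
    return servers[zlib.crc32(uri.encode()) % len(servers)]

servers = ["127.0.0.1:7001", "150.109.118.85:7001"]
uri = "/static/app.js"
first = pick(uri, servers)

# while the cluster is stable, the same URI always hits the same server
assert pick(uri, servers) == first

# if that server goes down, the URI is remapped to a surviving server
survivors = [s for s in servers if s != first]
assert pick(uri, survivors) != first
```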
upstream test {
    hash $request_uri;
    server 127.0.0.1:7001;
    server 150.109.118.85:7001;
}
Application scenario: all requests for the same resource are forwarded to the same server, so the resource is far more likely to hit that server's cache, saving bandwidth and download time.
Consistent_hash
consistent_hash (consistent hashing) is used almost exactly like nginx's built-in hash module and supports the same keys: $remote_addr, $request_uri, and $args. It is a third-party module, ngx_http_consistent_hash, which you can download.
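A consistent hash reduces the remapping problem of the plain hash: servers are placed at many virtual points on a ring, each key goes to the first server clockwise from its own hash, and removing one server only moves the keys that lived on it. A minimal sketch of the idea (the virtual-node count and server names are arbitrary choices for illustration):

```python
import bisect
import hashlib

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, vnodes=100):
        # each server gets many virtual points on the ring
        self.points = sorted((_h(f"{s}#{i}"), s)
                             for s in servers for i in range(vnodes))

    def pick(self, key):
        # first point clockwise from the key's hash, wrapping around
        hashes = [p[0] for p in self.points]
        i = bisect.bisect(hashes, _h(key)) % len(self.points)
        return self.points[i][1]

servers = ["127.0.0.1:7001", "150.109.118.85:7001", "10.0.0.3:7001"]
full = Ring(servers)
smaller = Ring(servers[:-1])  # 10.0.0.3:7001 removed

keys = [f"/img/{n}.png" for n in range(200)]
# keys that were NOT on the removed server keep their assignment
stayed = [k for k in keys if full.pick(k) != "10.0.0.3:7001"]
assert all(smaller.pick(k) == full.pick(k) for k in stayed)
```

With a plain modulo hash, removing one server would have reshuffled nearly every key; here only the removed server's share moves.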
upstream test {
    consistent_hash $request_uri;
    server 127.0.0.1:7001;
    server 150.109.118.85:7001;
}

Fair
fair gives priority to the server with the shortest response time. It is a third-party module, nginx_upstream_fair, which you can download. The module was last updated 8 years ago, so consider carefully whether you really want to use it.
upstream test {
    fair;
    server 127.0.0.1:7001;
    server 150.109.118.85:7001;
}
In my tests the result looked the same as the default round robin; I have not yet found the cause.
Parameters related to load balancing
Down
A server marked down is temporarily excluded from serving requests.
upstream test {
    server 127.0.0.1:7001 down;
    server 150.109.118.85:7001;
}
In the cluster above, 127.0.0.1:7001 is marked down, so no requests are forwarded to it; all requests go to 150.109.118.85:7001.
Weight
The weight of a server within the cluster, 1 by default. With weight as the only factor and all servers in the cluster healthy, nginx forwards proportionally more requests to servers with larger weights.
upstream test {
    server 127.0.0.1:7001 weight=2;
    server 150.109.118.85:7001 weight=1;
}
In this cluster, the 127.0.0.1 and 150.109.118.85 servers handle requests in a 2:1 ratio.
Max_fails
The number of failed attempts tolerated when a server handles requests, 1 by default. Once a server's failures exceed max_fails, subsequent requests are no longer forwarded to it.
upstream test {
    server 127.0.0.1:7001 max_fails=1;
    server 150.109.118.85:7001;
}
Fail_timeout
When a server's failures exceed max_fails, nginx temporarily stops forwarding requests to it. Once fail_timeout has elapsed, nginx tries the banned server again: if it responds normally, subsequent requests can be forwarded to it again; if it still fails, nginx waits another fail_timeout before the next attempt. The default fail_timeout is 10s.
upstream test {
    server 127.0.0.1:7001 max_fails=1 fail_timeout=10s;
    server 150.109.118.85:7001;
}
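Together, max_fails and fail_timeout form a passive health check. A hypothetical Python sketch of that state machine (simplified relative to nginx, with the clock passed in explicitly to keep it testable):

```python
class Peer:
    """Passive health check in the spirit of max_fails/fail_timeout.

    Simplification: nginx counts failures within a fail_timeout window;
    this sketch just counts consecutive failures.
    """
    def __init__(self, max_fails=1, fail_timeout=10):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def available(self, now):
        # once the timeout has passed, the peer may be tried again
        return now >= self.down_until

    def report(self, ok, now):
        if ok:
            self.fails = 0  # a success clears the failure count
        else:
            self.fails += 1
            if self.fails >= self.max_fails:
                # too many errors: shun the peer for fail_timeout seconds
                self.down_until = now + self.fail_timeout
                self.fails = 0

peer = Peer(max_fails=1, fail_timeout=10)
peer.report(ok=False, now=0)      # one failure trips the limit
assert not peer.available(now=5)  # still shunned at t=5
assert peer.available(now=10)     # eligible for a retry at t=10
```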
Backup
A standby server: nginx only sends traffic to servers marked backup when all non-backup servers are down or unavailable.
upstream test {
    server 127.0.0.1:7001 backup;
    server 150.109.118.85:7001;
}
Max_conns
Limits the number of simultaneous connections to a server, preventing it from going down because too many requests exceed its capacity. (This was originally a commercial-only feature; max_conns has been available in open-source nginx since 1.11.5.)
upstream test {
    server 127.0.0.1:7001 max_conns=10000;
    server 150.109.118.85:7001;
}
Slow_start
This feature exists in the commercial edition (NGINX Plus). When a failed server in the cluster has waited out its fail_timeout and nginx detects that it is usable again, slow_start makes nginx ramp traffic to it up gradually over the configured time instead of restoring full load at once.
upstream test {
    server 127.0.0.1:7001 slow_start=30s;
    server 150.109.118.85:7001 slow_start=30s;
}

This concludes the walkthrough of nginx load balancing configuration.