2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains how to achieve clustering and load balancing in Nginx. The editor finds it very practical and shares it here; I hope you get something out of it. Let's take a look together.
Nginx clustering and load balancing
Load balancing configuration, case 1

Set up the upstream server pool:

```nginx
upstream imgserver {
    # weight: relative weight; max_fails: allowed number of errors; fail_timeout: pause after failures
    # down: the server temporarily does not take part in the load
    # backup: receives requests only when all non-backup machines are down or busy
    # ip_hash;  # picks the same backend for clients in the same class C address range, which solves the session problem; not recommended together with sticky
    # sticky;   # cookie-based load balancing; not recommended together with ip_hash
    server 192.168.1.100:80 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.1.101:80 weight=2 max_fails=2 fail_timeout=30s;
    server 127.0.0.1 down;
}
```

Downstream call:

```nginx
location ~ \.(jpg|jpeg|png|gif)$ {
    proxy_pass http://imgserver;  # forward to the upstream pool
    # pass the user's IP along; otherwise the backend server only sees the proxy's IP
    proxy_set_header X-Forwarded-For $remote_addr;
}
```

Load balancing, case 2: build a load balancer (with ip_hash) under Ubuntu; prepare at least 3 servers.

Modify the configuration files:

cd /etc/nginx/conf.d/

1. Add a configuration file:

```nginx
upstream a.com {
    server <server IP>:<port>;
    server 127.0.0.1:80;
    server 127.0.0.1:8080;
}
```

2. Configure the virtual host (vim /etc/nginx/sites-available/default):

```nginx
server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;  # forward to the upstream you configured above
    }
}
```

Enable the site, check the configuration for errors, and reload:

```shell
ln -s /etc/nginx/sites-available/{your-config} /etc/nginx/sites-enabled/
nginx -t         # check the configuration file for errors
nginx -s reload  # reload the server
```

3. Other parameter options (extension):

1. Round robin (the default): each request is assigned to the backend servers one by one, in chronological order.
If a backend server goes down, it is removed automatically.

2. weight: specifies the polling probability. weight is proportional to the share of requests a server receives; use it when backend performance is uneven.

```nginx
upstream bakend {
    server <server IP>:<port> weight=<value>;
    server 127.0.0.1:80 weight=10;
}
```

3. ip_hash: each request is assigned by the hash of the client IP, so every visitor consistently reaches the same backend server, which solves the session problem.

```nginx
upstream resinserver {
    ip_hash;
    server 192.168.159.10:8080;
    server 192.168.159.11:8080;
}
```

4. The forwarding location:

```nginx
location / {
    proxy_pass http://a.com;  # this must be the name of the load balancer (upstream) defined above
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Balancing strategies: the strategy I used before distinguished users by a cookie value for load balancing (store the sessionID in a cookie and judge by the sessionID), and kept users' login information in Redis, reading it directly from Redis. Nginx can also load balance by client IP: setting ip_hash in the upstream selects the same backend server for clients in the same class C address range, switching only when that backend goes down, which solves the session problem. sticky is cookie-based load balancing. My current strategy: since neither sticky nor ip_hash fits, I simply use weighted round robin; session synchronization is not a concern because sessions are now stored in Redis.
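The sticky directive mentioned above is not part of stock nginx: it comes from the third-party nginx-sticky-module (NGINX Plus offers a similar built-in `sticky cookie` directive). Assuming that module is compiled in, a minimal sketch might look like this; the cookie name, expiry, and addresses are illustrative:

```nginx
upstream backend {
    # sticky issues a cookie (here named "route") so that each client
    # keeps being routed to the same backend server
    sticky name=route expires=1h;
    server 192.168.0.14:80;
    server 192.168.0.15:80;
}
```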
Reprinted summary of the five distribution methods: nginx can load balance by client IP. By setting ip_hash in the upstream, clients in the same class C address range are sent to the same backend server, which is only replaced if it goes down. The five allocation modes currently supported by nginx's upstream:
1. Round robin (the default): each request is assigned to a different backend server one by one, in chronological order; if a backend server goes down, it is removed automatically.

```nginx
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
```

2. weight: specifies the polling probability. weight is proportional to the share of requests; use it when backend performance is uneven.

```nginx
upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
```

3. IP binding (ip_hash): each request is assigned by the hash of the client IP, so every visitor consistently reaches the same backend server, which solves the session problem.

```nginx
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
```

4. fair (third party): assigns requests by backend response time, giving priority to servers with shorter response times.

```nginx
upstream backserver {
    server server1;
    server server2;
    fair;
}
```

5. url_hash (third party): assigns requests by the hash of the requested URL, so each URL is directed to the same backend server; this is most effective when the backends are caches.

```nginx
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
```

In the server block that should forward the traffic, add:

```nginx
proxy_pass http://backserver/;
```

An annotated example:

```nginx
upstream backserver {
    ip_hash;
    server 127.0.0.1:9090 down;     # down: this server temporarily does not take part in the load
    server 127.0.0.1:8080 weight=2; # weight defaults to 1; the larger the weight, the larger the share
    server 127.0.0.1:6060;
    server 127.0.0.1 backup;        # backup: receives requests only when all non-backup machines are down or busy
}
```

max_fails: the number of failed requests allowed for a server; the default is 1.
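The failure-handling parameters can be combined into one sketch; the addresses and thresholds below are illustrative, not from the article:

```nginx
upstream backserver {
    # after 3 failures within 30s, a server is considered unavailable for 30s
    server 192.168.0.14:80 max_fails=3 fail_timeout=30s;
    server 192.168.0.15:80 max_fails=3 fail_timeout=30s;
    server 192.168.0.16:80 backup;   # used only when both servers above are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://backserver;
        # which replies count as a failure and make nginx retry the next server
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```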
When max_fails is exceeded, nginx returns the error defined by the proxy_next_upstream module; fail_timeout is how long the server is paused after max_fails failures.

The above is how to achieve clustering and load balancing in Nginx. The editor believes some of these points may come up in daily work; I hope you can learn more from this article. For more details, please follow the industry information channel.
© 2024 shulou.com SLNews company. All rights reserved.