2025-03-29 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
Today, the editor will share the relevant knowledge points of an nginx load-balancing instance analysis. The content is detailed and the logic is clear. Most people still don't know much about this topic, so this article is shared for your reference. I hope you get something out of it after reading. Let's take a look.
Load balancing of nginx
Note: because our website is in its early stage, nginx proxies only a single back-end server. But as the site's popularity soars and more and more people visit, one server is no longer enough, so we add more servers. How do we configure the proxy for multiple servers? Here we take two servers as an example to demonstrate.
1. Description of the upstream load-balancing module
Case study:
The following sets the list of servers for load balancing.
upstream test.net {
    ip_hash;
    server 192.168.10.13:80 down;
    server 192.168.10.14:80 max_fails=3 fail_timeout=20s;
    server 192.168.10.15:8080 max_fails=3 fail_timeout=20s;
}
server {
    location / {
        proxy_pass http://test.net;
    }
}
upstream is nginx's HTTP upstream module, which uses a simple scheduling algorithm to balance load from client IPs across back-end servers. In the settings above, the upstream directive defines a load-balancer group named test.net. The name can be chosen freely and is referenced later wherever it is needed.
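As a minimal sketch of the default behavior (the pool name and addresses below are illustrative placeholders, not from the article's lab): with no algorithm directive, nginx round-robins requests, and weight skews the distribution.

```nginx
# Default scheduling is round robin; weight skews the distribution.
# Addresses and the pool name are illustrative placeholders.
upstream backend_pool {
    server 10.0.0.11:80 weight=3;   # receives roughly 3x the requests
    server 10.0.0.12:80 weight=1;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}
```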
2. Load-balancing algorithms supported by upstream
Nginx's load-balancing module currently supports four scheduling algorithms, described below; the latter two come from third-party modules.
Round robin (the default). Requests are distributed to the back-end servers one by one in order. If a back-end server goes down, it is automatically removed from rotation, so user access is not affected. The weight parameter sets the polling weight: the higher the value, the higher the probability a request is assigned to that server. It is mainly used when back-end servers have uneven performance.
ip_hash. Each request is assigned according to a hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively solves the session-sharing problem of dynamic web pages.
fair. A more intelligent algorithm than the two above: it balances load according to page size and loading time, that is, it assigns requests according to the back-end servers' response times, preferring servers with short response times. Nginx itself does not support fair; to use this algorithm, you must download nginx's upstream_fair module.
url_hash. Requests are distributed according to a hash of the requested URL, directing each URL to the same back-end server, which further improves the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this algorithm, you must install nginx's hash package.
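Selecting an algorithm is a one-line change inside the upstream block. The sketch below uses illustrative addresses; note that fair requires the third-party upstream_fair module, and url_hash-style stickiness requires a hash module (the exact directive syntax varies by module and nginx version).

```nginx
# Illustrative pools only; fair and the hash directive assume the
# third-party modules mentioned above are installed.
upstream pool_ip_hash {
    ip_hash;
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

upstream pool_fair {
    fair;                # requires the upstream_fair module
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

upstream pool_url_hash {
    hash $request_uri;   # url_hash-style stickiness; syntax varies by version
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}
```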
3. Status parameters supported by upstream
In the http upstream module, you can specify the ip address and port of the back-end server through the server instruction, and you can also set the status of each back-end server in load balancing scheduling. Common states are:
down: the server temporarily does not participate in load balancing.
backup: a reserved backup machine. It receives requests only when all other non-backup machines are failing or busy, so it is under the least pressure.
max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is suspended after max_fails failures. max_fails can be used together with fail_timeout.
Note: when the scheduling algorithm is ip_hash, back-end servers cannot use the weight and backup states.
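Put together, a hypothetical pool using these states might look like the following (addresses are illustrative):

```nginx
upstream status_demo {
    server 10.0.0.11:80 weight=2 max_fails=3 fail_timeout=20s;
    server 10.0.0.12:80 down;      # temporarily out of rotation
    server 10.0.0.13:80 backup;    # used only when all others fail
}
```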
4. Experimental topology
5. Configure nginx load balancer
[root@nginx ~]# vim /etc/nginx/nginx.conf
upstream webservers {
    server 192.168.18.201 weight=1;
    server 192.168.18.202 weight=1;
}
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;
    location / {
        proxy_pass http://webservers;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Note that upstream is defined outside server {} and cannot be defined inside server {}. Once the upstream is defined, it can be referenced with proxy_pass.
6. Reload the configuration file
[root@nginx ~]# service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reloading nginx:                                           [  OK  ]
7. Test it
Note: keep refreshing the page and you will see web1 and web2 appear alternately, achieving the load-balancing effect.
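One could also confirm the alternation from the command line rather than a browser. The curl line below is a sketch with a placeholder address for the nginx proxy (the article does not state the proxy's IP), and the loop that follows is a self-contained stand-in that merely mimics round-robin dispatch over two backends:

```shell
# Against the real proxy (replace NGINX_IP with your proxy's address):
#   for i in 1 2 3 4; do curl -s http://NGINX_IP/; done
# Self-contained stand-in mimicking round-robin over two backends:
for i in 0 1 2 3; do
  set -- web1 web2            # the two backends
  shift $(( i % 2 ))          # rotate the starting point each "request"
  echo "request $((i+1)) -> $1"
done
```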
8. Check the web access server log
Web1:
[root@web1 ~]# tail /var/log/httpd/access_log
192.168.18.138 - - [04/Sep/2013:09:41:58 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:41:58 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:41:59 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:41:59 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:42:00 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:42:00 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:42:00 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:44:21 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:44:22 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:44:22 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
Web2:
First, modify the format in which the web server records logs.
[root@web2 ~]# vim /etc/httpd/conf/httpd.conf
LogFormat "%{X-Real-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
[root@web2 ~]# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
Then, visit several more times and continue to check the log.
[root@web2 ~]# tail /var/log/httpd/access_log
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:28 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:29 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
192.168.18.138 - - [04/Sep/2013:09:50:29 +0800] "GET / HTTP/1.0" 200 23 "-" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
Note: as you can see, both servers' logs record requests from 192.168.18.138, which shows that the load balancer is configured successfully.
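A quick way to summarize who is hitting each backend is to count client IPs in the access log. The snippet below feeds awk a two-line sample so it is self-contained; in the lab you would point the awk pipeline at /var/log/httpd/access_log instead:

```shell
# Count requests per client IP in an Apache access log.
# A two-line sample stands in for the real log here.
cat > /tmp/sample_access_log <<'EOF'
192.168.18.138 - - [04/Sep/2013:09:41:58 +0800] "GET / HTTP/1.0" 200 23
192.168.18.138 - - [04/Sep/2013:09:41:59 +0800] "GET / HTTP/1.0" 200 23
EOF
# Field 1 is the client IP; count occurrences, most frequent first.
awk '{print $1}' /tmp/sample_access_log | sort | uniq -c | sort -rn
```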
9. Configure nginx for health check
max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is suspended after max_fails failures. max_fails can be used together with fail_timeout to perform a health check.
[root@nginx ~]# vim /etc/nginx/nginx.conf
upstream webservers {
    server 192.168.18.201 weight=1 max_fails=2 fail_timeout=2;
    server 192.168.18.202 weight=1 max_fails=2 fail_timeout=2;
}
10. Reload the configuration file
[root@nginx ~]# service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reloading nginx:                                           [  OK  ]
11. Stop the server and test
Stop web1 first and test it.
[root@web1 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
Note: as you can see, only web2 is reachable now. Restart web1 and visit again.
[root@web1 ~]# service httpd start
Starting httpd:                                            [  OK  ]
Note: as you can see, web1 can be accessed again, indicating that the nginx health-check configuration works. But consider: what if, unfortunately, none of the servers can provide service? When a user opens the page, they will see an error page, which hurts the user experience. So can we configure something like the sorry_server in LVS? The answer is yes, except that in nginx this is not called sorry_server; instead, we configure a backup server.
12. Configure the backup server
[root@nginx ~]# vim /etc/nginx/nginx.conf
server {
    listen 8080;
    server_name localhost;
    root /data/www/errorpage;
    index index.html;
}
upstream webservers {
    server 192.168.18.201 weight=1 max_fails=2 fail_timeout=2;
    server 192.168.18.202 weight=1 max_fails=2 fail_timeout=2;
    server 127.0.0.1:8080 backup;
}
[root@nginx ~]# mkdir -pv /data/www/errorpage
[root@nginx errorpage]# cat index.html
sorry.
13. Reload the configuration file
[root@nginx errorpage]# service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reloading nginx:                                           [  OK  ]
14. Shut down the web server and test it
[root@web1 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@web2 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
Note: as you can see, when none of the real servers is working, the backup server is activated. That completes the backup-server configuration; next, let's configure ip_hash load balancing.
15. Configure ip_hash load balancer
ip_hash: each request is assigned according to a hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively solves the session-sharing problem of dynamic web pages. (It is commonly used by e-commerce sites.)
[root@nginx ~]# vim /etc/nginx/nginx.conf
upstream webservers {
    ip_hash;
    server 192.168.18.201 weight=1 max_fails=2 fail_timeout=2;
    server 192.168.18.202 weight=1 max_fails=2 fail_timeout=2;
    #server 127.0.0.1:8080 backup;
}
Note: when the scheduling algorithm is ip_hash, a back-end server cannot have the backup state. Why? Think about it: if the load balancer hashed a client to the backup server, could the client reach the page? No, since the backup only serves when everything else is down; so a backup server cannot be configured under ip_hash.
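The stickiness that ip_hash provides can be sketched outside nginx. The helper below is hypothetical: nginx actually hashes the first three octets of the client's IPv4 address, which the sketch imitates with cksum over the /24 prefix.

```shell
# Hypothetical sketch of ip_hash-style stickiness: hash the first three
# octets of the client IP and use the hash to pick one of two backends.
pick_backend() {
    prefix=$(echo "$1" | cut -d. -f1-3)          # nginx hashes the /24 prefix
    h=$(echo "$prefix" | cksum | cut -d' ' -f1)  # stable numeric hash
    set -- 192.168.18.201 192.168.18.202
    shift $(( h % 2 ))
    echo "$1"
}
pick_backend 192.168.18.138   # the same client always maps to the same backend
```

Because only the /24 prefix is hashed, all clients on one subnet land on the same backend, which is exactly why ip_hash can distribute unevenly.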
16. Reload the server
[root@nginx ~]# service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reloading nginx:                                           [  OK  ]
17. Test it
Note: as you can see, no matter how often you refresh, the page always shows web2, indicating that ip_hash load balancing is configured successfully. Next, let's count the number of access connections on web2.
18. Count the number of access connections to web2
[root@web2 ~]# netstat -an | grep :80 | wc -l
304
Note, as you keep refreshing, the number of connections will increase.
These are all the contents of the article "nginx load balancing instance analysis". Thank you for reading! I believe you will gain a lot from this article. The editor updates different knowledge for you every day; if you want to learn more, please follow the industry information channel.