This article explains how to use nginx for load balancing. The walkthrough is based on a practical example, and the steps are simple and quick to reproduce; hopefully it helps you solve the problem of how to use nginx for load balancing.
Layer 4 load balancer vs layer 7 load balancer
The terms layer-7 load balancer and layer-4 load balancer come from the layer of the ISO OSI network model at which the load balancer operates. Nginx is called a layer-7 load balancer because it balances traffic at the application layer using the HTTP protocol, while software such as LVS that balances traffic at the TCP layer is called a layer-4 load balancer. Generally speaking, load balancers are classified by the OSI layer at which they work, together with the common software available at each layer.
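To make the layer-4 vs. layer-7 distinction concrete, the sketch below shows what TCP-level (layer-4) balancing looks like with nginx's stream module, in contrast to the HTTP upstream used later in this article. This is an illustrative sketch rather than part of the original walkthrough; it assumes an nginx build that includes the stream module, and it reuses the backend addresses from the demo.

    # layer-4 (TCP) load balancing via the stream module -- illustrative only;
    # this goes at the top level of nginx.conf, outside the http block
    stream {
        upstream tcp_backend {
            server 192.168.163.117:7001;
            server 192.168.163.117:7002;
        }
        server {
            listen 9090;
            proxy_pass tcp_backend;
        }
    }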
Common load balancing algorithms
There are several common algorithms for load balancing, such as round robin, weighted round robin, IP hash, and least connections; the sketch after this paragraph shows how they are expressed in an nginx upstream block.
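The following snippet is a supplementary sketch, not from the original article, of how these strategies map to nginx configuration; ip_hash and least_conn are standard nginx directives, and the addresses are the demo backends used below.

    upstream demo_round_robin {      # round robin is nginx's default strategy
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }

    upstream demo_weighted {         # weighted round robin
        server 192.168.163.117:7001 weight=1;
        server 192.168.163.117:7002 weight=2;
    }

    upstream demo_ip_hash {          # requests from one client IP stick to one backend
        ip_hash;
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }

    upstream demo_least_conn {       # pick the backend with the fewest active connections
        least_conn;
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }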
Load balancing demonstration: round robin
Next, nginx is used to demonstrate plain round-robin load balancing.
Preparation
First, start two services on ports 7001 and 7002 with docker so that they return different messages. For convenience, a tornado-based image is used; the message displayed by each service is controlled by the argument passed when the container is started.
[root@kong ~]# docker run -d -p 7001:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "user service 1: 7001"
ddba0abd24524d270a782c3fab907f6a35c0ce514eec3159357bded09022ee57
[root@kong ~]# docker run -d -p 7002:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "user service 1: 7002"
95deadd795e19f675891bfcd44e5e622c95615a956dfd346351eca707951
[root@kong ~]#
[root@kong ~]# curl http://192.168.163.117:7001
hello, service: user service 1: 7001
[root@kong ~]#
[root@kong ~]# curl http://192.168.163.117:7002
hello, service: user service 1: 7002
[root@kong ~]#
Start nginx
[root@kong ~]# docker run -p 9080:80 --name nginx-lb -d nginx
9d53c7e9a45ef93e7848eb3f4e51c2652a49681e83bda6337c89a3cf2f379c74
[root@kong ~]# docker ps | grep nginx-lb
9d53c7e9a45e   nginx   "nginx -g 'daemon..."   11 seconds ago   Up 10 seconds   0.0.0.0:9080->80/tcp   nginx-lb
[root@kong ~]#
Nginx configuration snippet
Prepare the following nginx configuration snippet. Since /etc/nginx/conf.d/default.conf is already included inside the http block of nginx.conf, only the upstream and server blocks need to be added to default.conf in the nginx container; the http block below is shown only to indicate the enclosing context.
http {
    upstream nginx_lb {
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }

    server {
        listen       80;
        server_name  www.liumiao.cn 192.168.163.117;
        location / {
            proxy_pass http://nginx_lb;
        }
    }
}
How to modify default.conf
You can install vim in the container, modify the file locally and copy it in with docker cp, or edit it in place with sed. If you choose to install vim in the container, use the following commands (a docker cp sketch follows the block below).
[root@kong ~]# docker exec -it nginx-lb sh
# apt-get update
...output omitted
# apt-get install vim
...output omitted
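As an alternative to installing vim, the docker cp approach mentioned above could look like the following sketch; the file paths match the demo, but the exact workflow is an assumption rather than part of the original article.

    # copy the config out of the container, edit it locally, then copy it back
    docker cp nginx-lb:/etc/nginx/conf.d/default.conf ./default.conf
    vi ./default.conf
    docker cp ./default.conf nginx-lb:/etc/nginx/conf.d/default.conf
    docker restart nginx-lb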
Before modification
# cat default.conf
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
After modification
# cat default.conf
upstream nginx_lb {
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}

server {
    listen       80;
    server_name  www.liumiao.cn 192.168.163.117;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
        proxy_pass http://nginx_lb;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
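Before restarting, it may be worth validating the new configuration inside the container; this check is a supplementary suggestion, not part of the original walkthrough.

    # syntax-check the configuration inside the running container
    docker exec nginx-lb nginx -t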
Restart the nginx container
[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#
Confirm the result
You can clearly see that requests are served by the two backends in round-robin order:
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7001
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7002
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7001
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7002
[root@kong ~]#
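A quick way to reproduce this check is a small shell loop; this is a supplementary sketch, and the URL assumes the host address and the 9080:80 port mapping used above.

    # issue several requests in a row and watch the backends alternate
    for i in $(seq 1 4); do curl -s http://192.168.163.117:9080; done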
Load balancing demonstration: weighted round robin
Building on the previous example, weighted round robin only requires adding a weight parameter to each server line.
Modify default.conf
Modify default.conf as follows:
# cp default.conf default.conf.org
# vi default.conf
# diff default.conf default.conf.org
2,3c2,3
<     server 192.168.163.117:7001 weight=100;
<     server 192.168.163.117:7002 weight=200;
---
>     server 192.168.163.117:7001;
>     server 192.168.163.117:7002;
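For clarity, the resulting upstream block after this change is shown below, reconstructed from the diff above.

    upstream nginx_lb {
        server 192.168.163.117:7001 weight=100;
        server 192.168.163.117:7002 weight=200;
    }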
Restart the nginx container
[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#
Confirm the result
You can see that requests are now distributed in proportion to the weights, roughly 1/3 to port 7001 and 2/3 to port 7002:
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7001
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7002
[root@kong ~]# curl http://192.168.163.117:9080
hello, service: user service 1: 7002
[root@kong ~]#
This concludes the walkthrough of how to use nginx for load balancing. Thank you for reading.