2025-01-16 Update From: SLTechnology News&Howtos > Servers
Shulou (Shulou.com) 06/01 Report
This article explains in detail how to implement forward proxying, reverse proxying, and load balancing with Nginx. It is quite practical, so it is shared here for reference; I hope you get something out of it.
System environment:
VirtualBox Manager
CentOS 6.4
Nginx 1.10.0
Machine names corresponding to each IP:

IP          Machine name   Role
10.0.0.139  elk            client
10.0.0.136  lvs-master     nginx server
10.0.0.137  kvm            web server 1
10.0.0.111  lvs-backup     web server 2
I. Forward proxy
1.1 Introduction to the environment
1.2 Introduction to the configuration
Nginx server (internal address: 10.0.0.136, public address: 172.16.27.64):
VirtualBox Manager is used to give this VM two network cards.
```
[root@lvs-master conf.d]# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:30:56:99
          inet addr:10.0.0.136  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::a00:27ff:fe30:5699/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:891978 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9509 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:81841095 (78.0 MiB)  TX bytes:13339058 (12.7 MiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:55:4C:72
          inet addr:172.16.27.64  Bcast:172.16.27.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe55:4c72/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:913671 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22712 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:109369858 (104.3 MiB)  TX bytes:1903855 (1.8 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:36222 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36222 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3899937 (3.7 MiB)  TX bytes:3899937 (3.7 MiB)

[root@lvs-master conf.d]# cat zxproxy.conf
server {
    listen 80;                  # listening port
    server_name 10.0.0.136;     # the server's intranet address, which must be reachable from the client
    resolver 172.16.5.1;        # DNS server used to resolve public-network hostnames
    location / {
        # $http_host and $request_uri are nginx built-in variables; leave them as-is
        proxy_pass http://$http_host$request_uri;
    }
}
```
Nginx client:
The client has only one intranet card and reaches the Internet by going through the Nginx server. This is essentially how firewall-circumvention proxies and similar relay setups work.
```
[root@kvm ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:72:8C:3B
          inet addr:10.0.0.137  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::a00:27ff:fe72:8c3b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1462448 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21130 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:145119904 (138.3 MiB)  TX bytes:2814635 (2.6 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:60800 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60800 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4831102 (4.6 MiB)  TX bytes:4831102 (4.6 MiB)

[root@kvm ~]# wget www.baidu.com
--2016-06-08 13:02:08--  http://www.baidu.com/
Resolving www.baidu.com... failed: Temporary failure in name resolution.
wget: unable to resolve host address "www.baidu.com"       # cannot reach Baidu directly

[root@kvm ~]# export http_proxy=http://10.0.0.136:80       # set the proxy server's IP and port via an environment variable
[root@kvm ~]# wget www.baidu.com                           # now the request succeeds through the proxy
--2016-06-08 13:08:15--  http://www.baidu.com/
Connecting to 10.0.0.136:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: "index.html.1"
99,762      --.-K/s   in 0.07s
2016-06-08 13:08:16 (1.36 MB/s) - "index.html.1" saved [99762]
```
II. Reverse proxy
The environment is the same as for the forward proxy.
2.1 Introduction to the environment
1. First, set up the test pages:
```
[root@kvm ~]# yum install httpd
[root@kvm ~]# echo "10.0.0.137" > /var/www/html/index.html
[root@lvs-backup ~]# yum install httpd
[root@lvs-backup ~]# echo "10.0.0.111" > /var/www/html/index.html
```
2. Check the result:
```
[root@lvs-backup html]# curl 10.0.0.111
10.0.0.111
[root@lvs-backup html]# curl 10.0.0.137
10.0.0.137
```

Both pages respond correctly, so we can proceed to the next step.
2.2 configuration introduction
Make a copy of the forward-proxy configuration file zxproxy.conf in the nginx conf.d directory and turn it into a reverse-proxy configuration:

```
[root@lvs-master conf.d]# cp zxproxy.conf fxproxy.conf    # formerly a forward proxy, now a reverse proxy
[root@lvs-master conf.d]# mv zxproxy.conf zxproxy.conf.bak
[root@lvs-master conf.d]# cat fxproxy.conf
server {
    listen 80;
    server_name 10.0.0.136;           # the nginx server's own IP (adjust to your environment)
    location / {
        proxy_pass http://10.0.0.137; # IP of the proxied web server
    }
}
# proxy_pass: proxy_pass URL
# Default: none
# Context: location, if in location
# Sets the address of the proxied server and the mapped URI; the address can be a
# hostname, a domain name, or an IP address plus port, e.g. proxy_pass http://server;
[root@lvs-master conf.d]# service nginx restart           # restart to load the configuration
```
Take a look at the results:
First, log in to the client machine of the test environment; its IP is:

```
[root@elk ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:3D:40:40
          inet addr:10.0.0.139  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::a00:27ff:fe3d:4040/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2618345 errors:0 dropped:0 overruns:0 frame:0
          TX packets:247926 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:336182790 (320.6 MiB)  TX bytes:35145157 (33.5 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:177352 errors:0 dropped:0 overruns:0 frame:0
          TX packets:177352 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:26547640 (25.3 MiB)  TX bytes:26547640 (25.3 MiB)

[root@elk ~]# curl 10.0.0.136     # access the reverse proxy server
10.0.0.137
```

We accessed the proxy server, and the request was forwarded to web server 1. Now compare the logs of nginx-server and web-server1:

```
nginx-server:
[root@lvs-master ~]# tail /var/log/nginx/access.log
10.0.0.139 - - [08/Jun/2016:15:35:43 +0800] "GET / HTTP/1.1" 200 26 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2" "-"

web-server:
[root@kvm httpd]# tail /var/log/httpd/access_log
10.0.0.136 - - [08/Jun/2016:15:21:12 +0800] "GET / HTTP/1.0" 200 26 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2"
```

The nginx log on nginx-server shows the visitor as 10.0.0.139, the client of our environment, while web-server records 10.0.0.136, i.e. nginx-server itself. In other words, as far as the client is concerned, nginx-server is the real server.
In fact, when the user accesses nginx-server, the request is forwarded to web-server1; web-server1 returns the result to nginx-server, which then passes it back to the user. Can web-server see the real user's IP instead of the proxy's?

```
[root@lvs-master conf.d]# cat fxproxy.conf
server {
    listen 80;
    server_name 10.0.0.136;              # the nginx server's own IP
    location / {
        proxy_pass http://10.0.0.137;    # the proxied server's IP
        proxy_set_header X-Real-IP $remote_addr;    # add this line
    }
}
[root@lvs-master conf.d]# service nginx restart
[root@kvm ~]# tail /var/log/httpd/access_log
10.0.0.136 - - [08/Jun/2016:16:10:53 +0800] "GET / HTTP/1.0" 200 26 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2"
```

The log still shows the proxy server's IP, because Apache's default log format records the connecting host (%h) rather than the new header. Modify the log format:

```
[root@kvm ~]# vim /etc/httpd/conf/httpd.conf
# original:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
# modified to (%h is the connecting host; replace it with the X-Real-IP header):
LogFormat "%{X-Real-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

[root@kvm ~]# service httpd restart
Stopping httpd:  [OK]
Starting httpd:  [OK]
[root@kvm ~]# tail /var/log/httpd/access_log
10.0.0.136 - - [08/Jun/2016:16:10:53 +0800] "GET / HTTP/1.0" 200 26 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2"
10.0.0.139 - - [08/Jun/2016:16:16:01 +0800] "GET / HTTP/1.0" 200 26 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2"
```

The new log line now shows the real client address (10.0.0.139).
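Besides X-Real-IP, nginx deployments conventionally also pass the de facto standard X-Forwarded-For header, which preserves the whole chain of addresses when several proxies are stacked. A minimal sketch of the same location block with both headers (same addresses as above; this is an addition to the article's configuration, not part of it):

```nginx
location / {
    proxy_pass http://10.0.0.137;
    proxy_set_header X-Real-IP       $remote_addr;                # the immediate client
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # appends $remote_addr to any existing list
    proxy_set_header Host            $http_host;                  # forward the original Host header
}
```

Apache could then log %{X-Forwarded-For}i instead of %{X-Real-IP}i, which keeps working even behind multiple proxy hops.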
Proxying multiple web servers:
```
[root@lvs-master conf.d]# cat fxproxy.conf
server {
    listen 80;
    server_name 10.0.0.136;
    location / {
        proxy_pass http://10.0.0.137;
        proxy_set_header X-Real-IP $remote_addr;
    }
    location /web2 {                     # add another location
        proxy_pass http://10.0.0.111;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

[root@lvs-backup ~]# cd /var/www/html/    # on web-server2 (10.0.0.111)
[root@lvs-backup html]# mkdir web2
[root@lvs-backup html]# echo "10.0.0.111" > web2/index.html

# Now try it from the client:
[root@elk ~]# curl 10.0.0.136/web2/
10.0.0.111                                # access succeeded
```
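One detail worth knowing when proxying by path: whether proxy_pass carries a URI changes how the matched prefix is forwarded. A hedged sketch of the difference, using the same back end as above:

```nginx
# Without a URI on proxy_pass, the full original path is passed through:
location /web2 {
    proxy_pass http://10.0.0.111;     # /web2/index.html -> /web2/index.html on the back end
}

# With a URI (here "/"), the matched prefix is replaced by it:
location /web2/ {
    proxy_pass http://10.0.0.111/;    # /web2/index.html -> /index.html on the back end
}
```

This is why the example above creates a web2 directory on web-server2: with the first form, the back end must actually serve that path. With the second form, the existing /var/www/html/index.html would be served instead.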
III. Load balancing
There are many ways to implement load balancing. LVS, a layer-4 (transport-layer) balancer, is commonly used; nginx balances at layer 7 (the application layer). You can find more background online.
3.1 introduction to the environment
3.2 configuration introduction
1. upstream is nginx's HTTP Upstream module, which uses simple scheduling algorithms to balance load from client IPs across back-end servers. In the configuration below, the upstream directive defines a balancer group named 1.2.3.4; the name is arbitrary and is referenced later wherever it is needed.
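As a minimal sketch (the full configuration appears later in this section), defining and referencing such a group looks like this:

```nginx
http {
    upstream 1.2.3.4 {                   # arbitrary group name chosen in this article
        server 10.0.0.111:80;
        server 10.0.0.137:80;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://1.2.3.4;   # refer to the group by its name
        }
    }
}
```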
2. nginx's load-balancing module currently supports four scheduling algorithms, described below; the last two are third-party algorithms.
Round robin (the default). Requests are distributed to the back-end servers one by one in order of arrival. If a back-end server goes down, it is automatically removed, so user access is not affected. weight sets the polling weight: the higher the value, the greater the share of requests assigned, which is mainly useful when the back-end servers have uneven performance.
ip_hash. Each request is assigned according to the hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively solves the session-sharing problem of dynamic pages.
fair. A smarter algorithm than the two above: it balances by page size and load time, assigning requests according to back-end response time, with shorter response times served first. nginx does not support fair natively; to use it you must download nginx's upstream_fair module.
url_hash. Requests are distributed according to the hash of the requested URL, directing each URL to the same back-end server, which further improves the efficiency of back-end cache servers. nginx does not support url_hash natively; to use this algorithm you must install nginx's hash module.
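For reference, a hedged sketch of weighted round robin, plus a note: since nginx 1.7.2 (so including the 1.10.0 used here) the built-in hash directive can provide url_hash-style balancing without a third-party module. The pool names below are invented for illustration:

```nginx
# Weighted round robin: 10.0.0.111 receives roughly three requests
# for every one sent to 10.0.0.137.
upstream weighted_pool {
    server 10.0.0.111:80 weight=3;
    server 10.0.0.137:80 weight=1;
}

# url_hash-style balancing with the built-in hash directive (nginx >= 1.7.2):
upstream cache_pool {
    hash $request_uri consistent;    # each URL consistently maps to one back end
    server 10.0.0.111:80;
    server 10.0.0.137:80;
}
```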
3. Status parameters supported by upstream
In the HTTP Upstream module, you can specify the IP address and port of the back-end server through the server instruction, and you can also set the status of each back-end server in load balancing scheduling. Common states are:
down: the server temporarily does not participate in load balancing.
backup: a reserved backup machine. It receives requests only when all other non-backup machines fail or are busy, so it carries the least load.
max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is suspended after max_fails failures. max_fails and fail_timeout can be used together.
Note: when the scheduling algorithm is ip_hash, the back-end server states weight and backup cannot be used.
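Putting the status parameters together, a hypothetical upstream might look like the sketch below (10.0.0.110 is an invented address used only for illustration):

```nginx
upstream 1.2.3.4 {
    server 10.0.0.111:80 weight=2 max_fails=2 fail_timeout=30s;  # suspended for 30s after 2 failures
    server 10.0.0.137:80 down;      # temporarily excluded from balancing
    server 10.0.0.110:80 backup;    # used only when all non-backup servers are unavailable
}
```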
Let's look at the specific configuration:
```
[root@lvs-master conf.d]# cat ../nginx.conf
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    upstream 1.2.3.4 {
        server 10.0.0.111:80;
        server 10.0.0.137:80;
    }
    include /etc/nginx/conf.d/*.conf;
}

[root@lvs-master conf.d]# cat slb.conf
server {
    location / {
        proxy_pass http://1.2.3.4;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note: upstream is defined outside server {} and cannot be defined inside server {}. Once defined, it can be referenced with proxy_pass.
4. Test result
```
[root@elk ~]# curl 10.0.0.136
10.0.0.111
[root@elk ~]# curl 10.0.0.136
10.0.0.137
[root@elk ~]# curl 10.0.0.136
10.0.0.111
```

The two servers appear alternately, confirming that the default is round-robin load balancing.
5. Health checks
A full health check usually requires keepalived, but nginx also has parameters that serve this purpose.
max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is suspended after max_fails failures. Together, max_fails and fail_timeout provide a basic health check.
```
[root@lvs-master conf.d]# cat ../nginx.conf
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    upstream 1.2.3.4 {
        server 10.0.0.111:80 weight=1 max_fails=2 fail_timeout=2;
        server 10.0.0.137:80 weight=1 max_fails=2 fail_timeout=2;
    }
    include /etc/nginx/conf.d/*.conf;
}
[root@lvs-master conf.d]# service nginx restart
```
6. Test the results.
```
[root@kvm httpd]# service httpd stop      # stop the web-server1 service
[root@elk ~]# curl 10.0.0.136
10.0.0.111
[root@elk ~]# curl 10.0.0.136
10.0.0.111                                # only web-server2 answers now
[root@kvm httpd]# service httpd start     # start web-server1 again
[root@elk ~]# curl 10.0.0.136
10.0.0.111
[root@elk ~]# curl 10.0.0.136
10.0.0.137
[root@elk ~]# curl 10.0.0.136
10.0.0.111
```
7. Load balancing with ip_hash
```
[root@lvs-master conf.d]# cat ../nginx.conf
upstream 1.2.3.4 {
    ip_hash;
    server 10.0.0.111:80 weight=1 max_fails=2 fail_timeout=2;
    server 10.0.0.137:80 weight=1 max_fails=2 fail_timeout=2;
}
[root@lvs-master conf.d]# service nginx restart
Stopping nginx:  [OK]
Starting nginx:  [OK]
[root@elk ~]# curl 10.0.0.136
10.0.0.137
[root@elk ~]# curl 10.0.0.136
10.0.0.137
```

With this configuration, each request is assigned by the hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively solves the session-sharing problem of dynamic pages (commonly used by e-commerce sites).

This concludes the walkthrough of how to implement Nginx forward and reverse proxying and load balancing. I hope the content above is helpful and that you have learned something; if you found the article useful, please share it so more people can see it.