
Master the methods and steps of Nginx reverse proxy and load balancing to realize dynamic and static separation

2025-02-14 Update. From: SLTechnology News & Howtos (shulou)


Shulou (Shulou.com) 06/03 report:

This article walks you through practical methods and steps for Nginx reverse proxying and load balancing with dynamic and static separation, in the hope of helping you in real-world use. Load balancing involves a lot, and there is no shortage of theory online, so here we draw on accumulated industry experience instead.

Reverse proxy and load balancing of Nginx to achieve dynamic and static separation

What are reverse proxying and load balancing?

Strictly speaking, Nginx acts here only as a reverse proxy, but because the effect of that reverse proxying is a balanced cluster, the proxy function is commonly described as load balancing.

A load balancer simply forwards request data: the node servers behind it still see the real client user who accessed the balancer. A reverse proxy server, by contrast, re-initiates the request to the node servers on the user's behalf after receiving it, and finally returns the data to the client. In this model, the client that the node servers see is the reverse proxy server, not the real website visitor.

Introduction to the core components of Nginx load balancing

ngx_http_upstream_module: the load-balancing module; implements load balancing across the website's nodes and health checks of those nodes.

ngx_http_proxy_module: the proxy module; forwards requests to a server node or an upstream server pool.

I. Experimental objectives

Practice 1: configure web nodes based on domain name virtual hosts

Practice 2: make the proxy server carry the Host header and record the user's IP

Practice 3: proxy forwarding based on directory address in URL

Practice 4: Nginx load balancing with node status detection

II. Experimental environment

Hostname  IP address    System    Role
yu61      192.168.1.61  RHEL 6.5  Nginx primary load balancer
yu62      192.168.1.62  RHEL 6.5  Nginx backup load balancer
yu63      192.168.1.63  RHEL 6.5  Web1 server
yu64      192.168.1.64  RHEL 6.5  Web2 server

Experimental topology

III. Experimental steps

1. Install Nginx: do this on all four hosts

[root@yu61 ~]# service httpd stop
[root@yu61 ~]# service iptables stop
[root@yu61 ~]# yum install pcre pcre-devel openssl openssl-devel -y
[root@yu61 ~]# mkdir -p /home/yu/tools
[root@yu61 ~]# cd /home/yu/tools/
[root@yu61 tools]# wget -q http://nginx.org/download/nginx-1.6.3.tar.gz
[root@yu61 tools]# ls
nginx-1.6.3.tar.gz
[root@yu61 tools]# useradd nginx -s /sbin/nologin -M
[root@yu61 tools]# tar xf nginx-1.6.3.tar.gz
[root@yu61 tools]# cd nginx-1.6.3
[root@yu61 nginx-1.6.3]# ./configure --user=nginx --group=nginx --prefix=/application/nginx-1.6.3 --with-http_stub_status_module --with-http_ssl_module
[root@yu61 nginx-1.6.3]# make -j 4 && make install
[root@yu61 nginx-1.6.3]# ln -s /application/nginx-1.6.3/ /application/nginx

Practical operation of Nginx load balancing

Practice 1: configure web nodes based on domain name virtual hosts

1. Modify the configuration file: on both web servers

[root@yu61 nginx-1.6.3]# cd /application/nginx/conf/
[root@yu61 conf]# egrep -v '#|^$' nginx.conf.default > nginx.conf

[root@yu63 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    server {
        listen       80;
        server_name  bbs.mobanche.com;
        location / {
            root   html/bbs;
            index  index.html index.htm;
        }
        access_log logs/access.log main;
    }
    server {
        listen       80;
        server_name  www.mobanche.com;
        location / {
            root   html/www;
            index  index.html index.htm;
        }
        access_log logs/access.log main;
    }
}

2. Check the syntax and start the nginx service: on all four hosts

[root@yu61 conf]# mkdir /application/nginx/html/{www,bbs}
[root@yu61 conf]# /application/nginx/sbin/nginx -t
nginx: the configuration file /application/nginx-1.6.3/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.6.3/conf/nginx.conf test is successful
[root@yu61 conf]# /application/nginx/sbin/nginx
[root@yu61 conf]# netstat -anutp | grep nginx
tcp   0   0 0.0.0.0:80   0.0.0.0:*   LISTEN   48817/nginx

3. Add test content: both web hosts do this (one is enough for a quick test)

[root@yu63 conf]# echo '192.168.1.63 www' > ../html/www/index.html
[root@yu63 conf]# echo '192.168.1.63 bbs' > ../html/bbs/index.html
[root@yu63 conf]# cat /application/nginx/html/www/index.html
192.168.1.63 www
[root@yu63 conf]# cat /application/nginx/html/bbs/index.html
192.168.1.63 bbs
[root@yu64 conf]# echo '192.168.1.64 www' > ../html/www/index.html
[root@yu64 conf]# echo '192.168.1.64 bbs' > ../html/bbs/index.html
[root@yu64 conf]# cat /application/nginx/html/www/index.html
192.168.1.64 www
[root@yu64 conf]# cat /application/nginx/html/bbs/index.html
192.168.1.64 bbs

4. Test the web end with curl

[root@yu61 conf]# tail -2 /etc/hosts
192.168.1.63 bbs.mobanche.com
192.168.1.63 www.mobanche.com
[root@yu61 conf]# scp /etc/hosts 192.168.1.63:/etc/hosts
[root@yu61 conf]# curl www.mobanche.com
192.168.1.63 www
[root@yu61 conf]# curl bbs.mobanche.com
192.168.1.63 bbs

Practice 2: make the proxy server carry the Host header and record the user's IP

1) Introduction to the upstream module

Nginx's load-balancing function depends on the ngx_http_upstream_module module; the proxy methods it supports include proxy_pass (generic proxying), fastcgi_pass (PHP/Java), memcached_pass (caching), and others.

The ngx_http_upstream_module module allows Nginx to define one or more groups of node servers; when used, a website's requests can be sent via the proxy_pass directive to the name of a previously defined upstream group.

The load-balancing module selects a host from the backend host list defined by the upstream directive: Nginx first uses the load-balancing module to find a host, and then uses the upstream module to interact with that host.

2) Upstream module syntax:

upstream www_server_pools {    # "upstream" is a required keyword; www_server_pools is the name of the upstream cluster group and can be customized
    server 192.168.1.63 weight=1;    # "server" is a fixed keyword, followed by a domain name or IP address; if no port is given, port 80 is the default; weight is the weight: the higher the value, the more requests assigned; each entry ends with a semicolon
}

3) Upstream module description

The contents of the upstream module go inside the http {} block; its default scheduling algorithm is wrr (weighted round-robin).

Description of the server tag parameters inside the upstream module

server 192.168.1.63
The node server behind the load balancer; can be an IP address or a domain name. If no port number is written, the default port is 80. Under high concurrency, the IP can be replaced with a domain name and load balancing done through DNS.

weight=1
The weight; the default value is 1. The higher the weight, the larger the share of requests the node receives.

max_fails=1
The number of failed attempts Nginx allows when connecting to the backend host. It works together with the proxy_next_upstream, fastcgi_next_upstream, and memcached_next_upstream directives: when Nginx receives a status code matching those directives from a backend server, it forwards the request to a working backend server instead.

backup
Hot standby, for high availability of the real-server nodes: when the active RS fails, the standby RS is enabled automatically. This marks the server as a backup server, which receives requests only when all the primary servers are down.

fail_timeout=10s
The interval before the next check once the max_fails failure count is reached; the default is 10 seconds. With max_fails=5, if a node returns 502 five times, Nginx waits the fail_timeout (10 s) before checking again, and checks only once per interval while the errors persist. The value takes effect without reloading the Nginx configuration.

down
Marks the server as permanently unavailable; this parameter can be used together with ip_hash.
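Put together, a sketch of these parameters in one pool (the addresses are reused from this lab; the specific max_fails/fail_timeout values here are illustrative, not taken from the text):

```nginx
upstream www_server_pools {
    server 192.168.1.63:80 weight=1 max_fails=3 fail_timeout=10s;
    server 192.168.1.64:80 weight=1 max_fails=3 fail_timeout=10s;
    server 192.168.1.62:80 backup;   # used only when both primaries are down
}
```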

4) Commonly used scheduling algorithms of the upstream module

Nginx's upstream supports five allocation methods. The first three are natively supported by Nginx; the last two require third-party modules:

(1) rr (round-robin)

Round-robin is upstream's default allocation method: each request is assigned to a different backend server in turn, in order of arrival. If a backend server goes down, it is removed automatically. The rotation is 1:1.

upstream backend {
    server 192.168.1.101:88;
    server 192.168.1.102:88;
}

(2) wrr (weighted round-robin)

An enhanced round-robin: the polling ratio can be specified, and the weight is proportional to the access probability. Mainly used when the backend servers have unequal hardware.

upstream backend {
    server 192.168.1.101 weight=1;
    server 192.168.1.102 weight=2;
    server 192.168.1.103 weight=3;
}
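The proportions can be illustrated with a toy script (this is only a sketch of the ratio, not nginx's real algorithm, which uses a smooth weighted round-robin): expand each backend into the rotation weight-many times.

```shell
#!/bin/bash
# Toy weighted round-robin: a backend with weight N occupies N slots in the
# rotation, so over 6 requests the 1:2:3 weights yield 1, 2 and 3 hits.
backends=()
for spec in 192.168.1.101:1 192.168.1.102:2 192.168.1.103:3; do
    host=${spec%:*}
    weight=${spec##*:}
    for ((i = 0; i < weight; i++)); do backends+=("$host"); done
done
for ((r = 0; r < 6; r++)); do
    echo "request $r -> ${backends[r % ${#backends[@]}]}"
done
```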

(3) ip_hash

Each request is allocated according to a hash of the accessed IP (that is, the IP of the server or client in front of Nginx), so a given visitor consistently reaches the same backend server, which solves session-consistency problems.

upstream backend {
    ip_hash;
    server 192.168.1.101:81;
    server 192.168.1.102:82;
    server 192.168.1.103:83;
}
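The idea can be sketched in shell (a rough model, not nginx's real function; nginx actually hashes only the first three octets of an IPv4 client address):

```shell
#!/bin/bash
# Toy ip_hash: hash the client IP and pick a backend index by modulo.
# The same IP always maps to the same index, which is what pins a
# client to one node.
pick_backend() {
    local ip=$1 pool_size=$2
    local crc
    crc=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    echo $(( crc % pool_size ))
}
pick_backend 192.168.1.50 3
pick_backend 192.168.1.50 3   # same IP -> same backend index
```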

(4) fair

As the name implies, fair allocates requests according to the response time (rt) of the backend servers: a backend with a smaller rt is given priority.

upstream backend {
    server 192.168.1.101;
    server 192.168.1.102;
    fair;
}

(5) url_hash

Similar to ip_hash, but requests are allocated by a hash of the accessed URL, so each URL is directed to the same backend server. Mainly used when the backend servers are caches.

upstream backend {
    server 192.168.1.101;
    server 192.168.1.102;
    hash $request_uri;
    hash_method crc32;
}

Note:

hash_method specifies the hash algorithm used. Note that parameters such as weight cannot be added to the server statements when hash is in use. The last two methods only need to be understood at this stage.

5) Parameters of the http proxy module

proxy_set_header
Redefines or appends fields in the request header passed to the proxied server. The value can contain text, variables, and combinations of them. It lets the backend nodes behind the proxy obtain the real IP address of the client user.

proxy_connect_timeout
The timeout for connecting to the backend server, in seconds; it should normally not exceed 75 seconds.

client_body_buffer_size
Specifies the buffer size for the client request body.

proxy_send_timeout
The data-return time of the backend server: the backend must transmit all its data within this time, otherwise Nginx breaks the connection.

proxy_read_timeout
The time Nginx waits for a response from the backend server after the connection is established; in effect, the time the request may sit in the backend's processing queue.

proxy_buffer_size
Sets the size of the buffer that holds the first part of the response (typically the response headers) read from the backend server.

proxy_buffers
Sets the number and size of the buffers used for reading the response from the backend server.

proxy_busy_buffers_size
Sets how much of the proxy_buffers space may be busy sending data to the client at any one time.

proxy_temp_file_write_size
Specifies the size of data written at a time to the proxy temporary file.
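In practice these directives are often gathered into one shared file and pulled into every proxying location with include. The filename and the values below are illustrative, not taken from this lab:

```nginx
# proxy.conf -- hypothetical shared proxy settings
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
```

Each location then needs only its proxy_pass line followed by `include proxy.conf;`.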

1. Modify the nginx primary/backup servers: both load-balancer hosts need this change

[root@yu61 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.1.63:80 weight=1;
        server 192.168.1.64:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.mobanche.com;
        location / {
            proxy_pass http://www_server_pools;
        }
    }
}

2. Test load balancing

[root@yu61 conf]# ../sbin/nginx -t
nginx: the configuration file /application/nginx-1.6.3/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.6.3/conf/nginx.conf test is successful
[root@yu61 conf]# ../sbin/nginx -s reload
[root@yu61 conf]# cat /etc/hosts
192.168.1.61 www.mobanche.com
[root@yu61 conf]# curl www.mobanche.com
192.168.1.63 bbs
[root@yu61 conf]# curl www.mobanche.com
192.168.1.64 bbs
[root@yu61 conf]# curl www.mobanche.com
192.168.1.63 bbs
[root@yu61 conf]# curl www.mobanche.com
192.168.1.64 bbs

Note:

By default, the proxy does not pass the original Host header to the node servers, so when a node receives the request it finds no usable host-header information and serves its first virtual host instead (on these nodes, the first virtual host is bbs). The solution is to have the reverse proxy carry the Host header when it re-initiates the request, explicitly telling the node which virtual host is wanted.
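The node's behaviour can be modelled in a few lines of shell (a toy sketch, not nginx code): match the incoming Host header against the configured server_name values, and fall back to the first virtual host when nothing matches.

```shell
#!/bin/bash
# Toy model of virtual-host selection on a web node: the first argument is
# the Host header (possibly empty), the rest are the configured vhosts in
# order. No match means the first vhost answers, which is why the bbs page
# kept coming back until proxy_set_header Host $host was added.
select_vhost() {
    local host=$1; shift
    local name
    for name in "$@"; do
        if [ "$host" = "$name" ]; then echo "$name"; return; fi
    done
    echo "$1"   # fallback: first configured virtual host
}
select_vhost ""               bbs.mobanche.com www.mobanche.com   # -> bbs.mobanche.com
select_vhost www.mobanche.com bbs.mobanche.com www.mobanche.com   # -> www.mobanche.com
```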

3. Modify the configuration file: make the proxy server carry the Host header

[root@yu61 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.1.63:80 weight=1;
        server 192.168.1.64:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.mobanche.com;
        location / {
            proxy_pass http://www_server_pools;
            proxy_set_header Host $host;
        }
    }
}

4. Test from another client as well

[root@yu61 conf]# curl www.mobanche.com
192.168.1.63 www
[root@yu61 conf]# curl www.mobanche.com
192.168.1.64 www
[root@yu62 ~]# curl www.mobanche.com
192.168.1.64 www
[root@yu62 ~]# curl www.mobanche.com
192.168.1.63 www
[root@yu61 conf]# tail -2 /application/nginx/logs/access.log
192.168.1.61 - - [18/May/2017:19:44:04 +0800] "GET / HTTP/1.1" 200 17 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" "-"
192.168.1.61 - - [18/May/2017:19:44:05 +0800] "GET / HTTP/1.1" 200 17 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" "-"

5. Modify the configuration file again: have the proxy pass the user's IP

[root@yu61 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream www_server_pools {
        server 192.168.1.63:80 weight=1;
        server 192.168.1.64:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.mobanche.com;
        location / {
            proxy_pass http://www_server_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}

The node server whose log you inspect must have access logging enabled.

6. Testing

[root@yu64 conf]# tail -2 /application/nginx/logs/access.log
192.168.1.61 - - [18/May/2017:19:56:19 +0800] "GET / HTTP/1.0" 200 17 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" "192.168.1.62"
192.168.1.61 - - [18/May/2017:19:56:19 +0800] "GET / HTTP/1.0" 200 17 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" "192.168.1.62"

Practice 3: proxy forwarding based on the directory path in the URL

Experimental topology

(1) When a user accesses www.mobanche.com/upload/xxx, the proxy server hands the request to the upload server pool.

(2) When a user accesses www.mobanche.com/static/xxx, the proxy server hands the request to the static server pool.

(3) When a user accesses www.mobanche.com/xxx with no specific directory path, the proxy server hands the request to the default server pool.
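The three rules can be sketched as a tiny routing function (a toy model of the location matching, not nginx internals):

```shell
#!/bin/bash
# Toy model of URL-directory routing: requests under /static/ and /upload/
# go to their pools, everything else to default_pools, mirroring the three
# location blocks in this practice.
route() {
    case $1 in
        /static/*) echo static_pools ;;
        /upload/*) echo upload_pools ;;
        *)         echo default_pools ;;
    esac
}
route /static/index.html   # -> static_pools
route /upload/index.html   # -> upload_pools
route /index.html          # -> default_pools
```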

1. Modify the configuration file and add the address pools

[root@yu61 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream upload_pools {
        server 192.168.1.63:80 weight=1;
    }
    upstream static_pools {
        server 192.168.1.64:80 weight=1;
    }
    upstream default_pools {
        server 192.168.1.62:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.mobanche.com;
        location / {
            proxy_pass http://default_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
        location /static/ {
            proxy_pass http://static_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
        location /upload/ {
            proxy_pass http://upload_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}

2. Restart Nginx

[root@yu61 conf]# ../sbin/nginx -t
nginx: the configuration file /application/nginx-1.6.3/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.6.3/conf/nginx.conf test is successful
[root@yu61 conf]# ../sbin/nginx -s reload

3. Test the dynamic/static separation

[root@yu64 www]# cat /etc/hosts
192.168.1.64 bbs.mobanche.com
192.168.1.64 www.mobanche.com
[root@yu64 nginx]# cd html/www/
[root@yu64 www]# mkdir static
[root@yu64 www]# echo static_pools > static/index.html
[root@yu64 www]# curl http://www.mobanche.com/static/index.html
static_pools
[root@yu63 www]# cat /etc/hosts
192.168.1.63 bbs.mobanche.com
192.168.1.63 www.mobanche.com
[root@yu63 nginx]# cd html/www/
[root@yu63 www]# mkdir upload
[root@yu63 www]# echo upload_pools > upload/index.html
[root@yu63 www]# curl http://www.mobanche.com/upload/index.html
upload_pools
[root@yu65 www]# cat /etc/hosts
192.168.1.65 bbs.mobanche.com
192.168.1.65 www.mobanche.com
[root@yu65 nginx]# cd html/www/
[root@yu65 www]# echo default_pools > index.html
[root@yu65 www]# curl http://www.mobanche.com
default_pools

Practice 4: Nginx load balancing with node status detection

1. Install the software

[root@yu61 tools]# pwd
/home/yu/tools
[root@yu61 tools]# wget https://codeload.github.com/yaoweibin/nginx_upstream_check_module/zip/master
[root@yu61 tools]# unzip master
[root@yu61 tools]# cd nginx-1.6.3
[root@yu61 nginx-1.6.3]# patch -p1 < ../nginx_upstream_check_module-master/check_1.5.12+.patch
[root@yu61 nginx-1.6.3]# ./configure --user=nginx --group=nginx --prefix=/application/nginx-1.6.3 --with-http_stub_status_module --with-http_ssl_module --add-module=../nginx_upstream_check_module-master
[root@yu61 nginx-1.6.3]# make
[root@yu61 nginx-1.6.3]# mv /application/nginx/sbin/nginx{,.ori}
[root@yu61 nginx-1.6.3]# cp ./objs/nginx /application/nginx/sbin/

2. Check the configuration file and the new binary

[root@yu61 nginx-1.6.3]# /application/nginx/sbin/nginx -t
nginx: the configuration file /application/nginx-1.6.3/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.6.3/conf/nginx.conf test is successful
[root@yu61 nginx-1.6.3]# /application/nginx/sbin/nginx -V
nginx version: nginx/1.6.3
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
TLS SNI support enabled
configure arguments: --user=nginx --group=nginx --prefix=/application/nginx-1.6.3 --with-http_stub_status_module --with-http_ssl_module --add-module=../nginx_upstream_check_module-master

3. Modify the configuration file

[root@yu61 conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    upstream static_pools {
        server 192.168.1.64:80 weight=1;
        server 192.168.1.63:80 weight=1;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    }
    upstream default_pools {
        server 192.168.1.62:80 weight=1;
    }
    server {
        listen       80;
        server_name  www.mobanche.com;
        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://default_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
        location /status {
            check_status;
            access_log off;
        }
    }
}
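For reference, the parameters of the check directive used above break down as follows (times are in milliseconds, per the nginx_upstream_check_module conventions):

```nginx
# interval=3000 : probe each node every 3000 ms
# rise=2        : mark a node up after 2 consecutive successful probes
# fall=5        : mark a node down after 5 consecutive failed probes
# timeout=1000  : per-probe timeout (1000 ms)
# type=http     : probe with an HTTP request
check interval=3000 rise=2 fall=5 timeout=1000 type=http;
```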

4. Restart the service and test

Because the nginx binary itself was replaced, a full stop/start is needed here; -s reload would keep the old binary running.

[root@yu61 conf]# ../sbin/nginx -t
nginx: the configuration file /application/nginx-1.6.3/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.6.3/conf/nginx.conf test is successful
[root@yu61 conf]# ../sbin/nginx -s stop
[root@yu61 conf]# ../sbin/nginx

That covers the methods and steps for mastering Nginx reverse proxying and load balancing with dynamic and static separation; hopefully it helps you in practical application.
