Detailed explanation of reverse proxy and load balancing of Nginx

2025-01-16 Update From: SLTechnology News&Howtos



This article explains Nginx reverse proxy and load balancing in detail. It first walks through the proxy configuration directives, then covers how to configure each of the load balancing algorithms, with simple and practical examples throughout.

Configuration instructions for the Nginx proxy service

1. Set the 404 error page redirect address

error_page 404 https://www.runnob.com;   # error page
proxy_intercept_errors on;               # if the status code returned by the proxied server is 400 or greater, the error_page configuration takes effect. The default is off.

2. If our proxy should only accept GET and POST requests

proxy_method get;   # request methods supported for the client: post/get

3. Set the supported HTTP protocol version

proxy_http_version 1.0;   # HTTP protocol version the Nginx proxy uses: 1.0 or 1.1. The default is 1.0.

4. Suppose your nginx server proxies two web servers and the load balancing algorithm is round robin. When the web program (for example IIS) on one of the machines is shut down, that web server can no longer be accessed, yet nginx will still distribute requests to it. If the configured response timeout is too long, the client's page keeps waiting for a response and the user experience suffers. How can we avoid this? The diagram below illustrates the problem.

If web2 in the load-balanced group fails in this way and nginx is not configured properly, nginx will still distribute a request to web2 and wait for web2's response; only after the response times out will it redistribute the request to web1. The longer that timeout is, the longer the user waits.

The following configuration is one of the solutions.

proxy_connect_timeout 1;        # timeout for nginx to establish a connection with the proxied server. The default is 60 seconds.
proxy_read_timeout 1;           # timeout for nginx to wait for a response after sending a read request to the proxied server group. The default is 60 seconds.
proxy_send_timeout 1;           # timeout for nginx to wait after sending a write request to the proxied server group. The default is 60 seconds.
proxy_ignore_client_abort on;   # whether nginx aborts the request to the proxied server when the client disconnects. The default is off.

5. If a group of servers is configured as proxied servers with the upstream directive, requests are distributed among them according to the configured load balancing rules. At the same time, you can use the following directive so that when an exception occurs, the request is handed to the next server in the group for processing.

proxy_next_upstream timeout;   # for the server group defined in the reverse proxy upstream block, the failure conditions under which the request is passed to the next server.

The status value can be: error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off

error: an error occurred while establishing a connection with the proxied server, sending it a request, or reading its response.

timeout: a timeout occurred while establishing a connection with the proxied server, sending it a request, or reading its response.

invalid_header: the response header returned by the proxied server is invalid.

off: stop passing the request to the next proxied server.

http_500, http_502, ...: the proxied server returned the corresponding status code (500, 502, 503, 504, 404).
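Putting proxy_next_upstream together with the timeout directives above, a minimal sketch might look like this (the upstream name and addresses are made up for illustration):

```nginx
upstream backend {
    server 192.168.10.121:3333;
    server 192.168.10.122:3333;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 1;                      # fail fast on a dead machine
        proxy_next_upstream error timeout http_502;   # then retry the next server in the group
    }
}
```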

6. If you want to obtain the client's real IP over HTTP rather than the proxy server's IP address, make the following settings.

proxy_set_header Host $host;   # works as long as the domain name the user accesses in the browser is bound to the VIP, and the VIP has real servers behind it. Host is the domain name and port in the URL, e.g. www.taobao.com:80.
proxy_set_header X-Real-IP $remote_addr;   # assigns the source IP ($remote_addr, taken from the HTTP connection) to X-Real-IP, so the application code can read the source IP from X-Real-IP.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # when nginx acts as a proxy server, this list records each passing machine's IP and proxy IP, separated by commas. In the code, use echo $x_forwarded_for | awk -F, '{print $1}' to take the first entry as the source IP.

For more background on X-Forwarded-For and X-Real-IP, see the related articles on the X-Forwarded-For HTTP request header.
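The awk one-liner above takes the first entry of the X-Forwarded-For list as the source IP; here is a minimal Python sketch of the same idea (the header value below is made up):

```python
def client_ip_from_xff(xff_header: str) -> str:
    """Return the left-most address in an X-Forwarded-For list.

    The left-most entry is the original client; each proxy appends
    its own address after a comma, as described above.
    """
    return xff_header.split(",")[0].strip()

# Example: client 1.2.3.4 passed through two proxies.
print(client_ip_from_xff("1.2.3.4, 10.0.0.5, 10.0.0.6"))  # 1.2.3.4
```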

7. The following is the proxy-related section of my configuration file, for reference only.

include mime.types;                     # file extension to MIME type mapping table
default_type application/octet-stream;  # default file type; the default is text/plain
# access_log off;                       # turn off the access log
log_format myFormat '$remote_addr-$remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';   # custom log format
access_log log/access.log myFormat;     # combined is the default log format
sendfile on;                            # enable sendfile file transfer. The default is off; can be set in the http, server, and location blocks.
sendfile_max_chunk 100k;                # maximum amount transferred per sendfile call per process; the default is 0, i.e. no upper limit.
keepalive_timeout 65;                   # connection timeout. The default is 75s; can be set in the http, server, and location blocks.
proxy_connect_timeout 1;                # timeout for nginx to establish a connection with the proxied server. The default is 60 seconds.
proxy_read_timeout 1;                   # timeout for nginx to wait for a response after sending a read request to the proxied server group. The default is 60 seconds.
proxy_send_timeout 1;                   # timeout for nginx to wait after sending a write request to the proxied server group. The default is 60 seconds.
proxy_http_version 1.0;                 # HTTP protocol version the Nginx proxy uses: 1.0 or 1.1. The default is 1.0.
# proxy_method get;                     # request methods supported for the client: post/get
proxy_ignore_client_abort on;           # whether nginx aborts the request to the proxied server when the client disconnects. The default is off.
proxy_ignore_headers "Expires" "Set-Cookie";   # response header fields from the proxied server that nginx will not process; multiple fields can be listed.
proxy_intercept_errors on;              # if the status code returned by the proxied server is 400 or greater, the error_page configuration takes effect. The default is off.
proxy_headers_hash_max_size 1024;       # upper limit of the hash table capacity for storing HTTP headers. The default is 512.
proxy_headers_hash_bucket_size 128;     # bucket size of the hash table that holds the HTTP headers. The default is 64.
proxy_next_upstream timeout;            # failure conditions under which the request is passed to the next server in the upstream group: error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off
# proxy_ssl_session_reuse on;           # the default is on. If the error log shows "SSL3_GET_FINISHED:digest check failed", set this directive to off.

Detailed explanation of Nginx load balancing

In the Nginx configuration details article I listed the load balancing algorithms nginx supports. Here I will explain in detail how to configure each of them.

First, a word about the upstream configuration: it defines a group of proxied server addresses, after which you configure the load balancing algorithm. Two proxied server addresses are written here.

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
}
server {
    ....
    location ~* ^.+$ {
        proxy_pass http://mysvr;   # requests go to the list of servers defined by mysvr
    }
}

Then, let's do something practical.

1. Hot standby: if you have two servers, then when one server has an accident, the second server is brought in to provide service. The order in which requests are handled: AAAAAA; suddenly A goes down; then BBBBBB....

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333 backup;   # hot standby
}

2. Polling (round robin): nginx's default, with all weights defaulting to 1. The servers handle requests in order: ABABABAB....

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
}

3. Weighted polling: requests are distributed to the servers in proportion to their configured weights; if a weight is not set, it defaults to 1. The request order for the servers below is: ABBABBABBABB....

upstream mysvr {
    server 127.0.0.1:7878 weight=1;
    server 192.168.10.121:3333 weight=2;
}
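To see where the ABB pattern comes from, here is a simplified weighted round-robin sketch in Python. This is only a model of simple weighted polling; nginx's real implementation uses a smoothed variant, so its exact interleaving can differ:

```python
from itertools import cycle

def weighted_sequence(servers):
    """Expand each (name, weight) pair into `weight` consecutive slots
    and cycle through them, mimicking simple weighted polling."""
    slots = [name for name, weight in servers for _ in range(weight)]
    return cycle(slots)

# A has weight 1, B has weight 2, as in the config above.
seq = weighted_sequence([("A", 1), ("B", 2)])
print("".join(next(seq) for _ in range(6)))  # ABBABB
```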

4. ip_hash: nginx routes requests from the same client IP to the same server.

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
    ip_hash;
}
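The effect of ip_hash can be sketched as hashing the client address onto the server list. This is a simplified model only (nginx actually hashes part of the IPv4 address, not an MD5 of the whole string):

```python
import hashlib

def pick_server(client_ip: str, servers: list) -> str:
    """Deterministically map a client IP to one server, so the
    same client always lands on the same backend."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["127.0.0.1:7878", "192.168.10.121:3333"]
a = pick_server("203.0.113.7", servers)
b = pick_server("203.0.113.7", servers)
print(a == b)  # True: same client IP, same server
```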

5. If the four balancing algorithms above are not quite clear, you can read the Nginx configuration details article, which may make them easier to understand.

Doesn't nginx's load balancing configuration feel particularly simple and powerful? But it's not over yet. Let's continue.

The state parameters of the nginx load balancing configuration are explained below.

down: the current server temporarily does not participate in load balancing.

backup: a reserved backup machine. It receives requests only when all the other non-backup machines are failing or busy, so it is under the least load.

max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: the time the server is suspended after max_fails failures. max_fails is used together with fail_timeout.
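A toy model of how max_fails and fail_timeout interact (purely illustrative; this is not nginx's code, and the Peer class is invented for the sketch):

```python
import time

class Peer:
    """Track failures for one upstream server: after max_fails failures
    the peer is suspended for fail_timeout seconds, mirroring the
    parameters described above."""
    def __init__(self, max_fails=1, fail_timeout=10):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.suspended_until = 0.0

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.suspended_until = now + self.fail_timeout
            self.fails = 0  # reset the counter once suspended

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.suspended_until

p = Peer(max_fails=2, fail_timeout=2)
p.record_failure(now=0.0)
print(p.available(now=0.0))  # True: only one failure so far
p.record_failure(now=0.0)
print(p.available(now=1.0))  # False: suspended for 2 seconds
print(p.available(now=3.0))  # True: fail_timeout has elapsed
```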

upstream mysvr {
    server 127.0.0.1:7878 weight=2 max_fails=2 fail_timeout=2;
    server 192.168.10.121:3333 weight=1 max_fails=2 fail_timeout=1;
}

This concludes the study of "Detailed explanation of reverse proxy and load balancing of Nginx". I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it out. If you want to keep learning more related knowledge, please continue to follow the site for more practical articles.
