Shulou (Shulou.com) — SLTechnology News & Howtos, updated 2025-01-18
This article introduces the proxy function and load balancing features of Nginx through practical configuration examples. The operations are simple, quick, and practical, and I hope "Nginx proxy function and load balancer case analysis" helps you solve your problem.
Configuration instructions for the nginx proxy service
1. The following configuration can be placed in the http block. When the proxied server returns a 404 status code, we redirect the visitor to Baidu:
error_page 404 https://www.baidu.com;  # error page
Careful readers will notice, however, that this configuration alone does not take effect.
For it to work, it must be combined with the following directive:
proxy_intercept_errors on;  # when the proxied server returns a status code of 400 or greater, the configured error_page takes effect; the default is off
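Putting the two directives together, a minimal sketch of a complete configuration might look like this (the backend address 192.168.10.121:3333 is only an assumption for illustration):

```nginx
http {
    error_page 404 https://www.baidu.com;   # where to send intercepted 404s
    proxy_intercept_errors on;              # let error_page handle backend status codes >= 400

    server {
        listen 80;
        location / {
            proxy_pass http://192.168.10.121:3333;  # assumed backend address
        }
    }
}
```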
2. If we want the proxy to use only one request method, such as GET or POST, when talking to the backend:
proxy_method get;  # the HTTP method used in requests forwarded to the proxied server (get/post)
3. Set the supported HTTP protocol version:
proxy_http_version 1.0;  # version of HTTP used by the nginx proxy service, 1.0 or 1.1; the default is 1.0
4. Suppose your nginx server proxies two web servers and the load balancing algorithm is round-robin. If the web application (IIS) on one of the machines is shut down, so that web server becomes unreachable, nginx will still distribute requests to it. If the response timeout is too long, the client's page keeps waiting for a response and the user experience suffers. How can we avoid this? Consider the following scenario.
If web2 in the load-balanced group goes down, nginx still serves requests from web1 as usual, but with an unsuitable configuration it will continue to dispatch requests to web2 as well, waiting for web2's response until the timeout expires before re-dispatching the request to web1. The longer that timeout, the longer users wait.
The following configuration is one of the solutions.
proxy_connect_timeout 1;       # timeout for nginx to establish a connection with the proxied server; default 60 seconds
proxy_read_timeout 1;          # timeout waiting for a response after a read request is sent to the backend group; default 60 seconds
proxy_send_timeout 1;          # timeout waiting for a response after a write request is sent to the backend group; default 60 seconds
proxy_ignore_client_abort on;  # whether nginx aborts the request to the proxied server when the client disconnects; default off
5. If a group of proxied servers is configured with the upstream directive, requests are distributed among them according to the configured load-balancing rules. At the same time, this directive lets you specify the conditions under which a request is handed over to the next server in the group when an exception occurs.
proxy_next_upstream timeout;  # conditions under which a request is passed to the next server in the upstream group; possible values: error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off
error: an error occurred while establishing a connection with the proxied server, or while sending the request or reading the response.
timeout: establishing a connection, sending the request, or reading the response from the proxied server timed out.
invalid_header: the response header returned by the proxied server is invalid.
off: do not pass the request on to the next proxied server.
http_400, ...: the proxied server returned a status code such as 404, 500, 502, etc.
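As a sketch of how this fits into a configuration (assuming two backends at 192.168.10.121:3333 and 192.168.10.122:3333, the addresses used elsewhere in this article):

```nginx
http {
    upstream mysvr {
        server 192.168.10.121:3333;
        server 192.168.10.122:3333;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://mysvr;
            # on a timeout or a backend 502/504, retry the request on the next server
            proxy_next_upstream timeout http_502 http_504;
        }
    }
}
```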
6. If you want to obtain the client's real IP over HTTP instead of the proxy server's IP address, make the following settings.
proxy_set_header Host $host;                                  # $host is the domain name (and port) from the URL the user accessed in the browser, e.g. www.taobao.com:80; as long as the domain is bound to the VIP and has real servers behind it, $host carries it through
proxy_set_header X-Real-IP $remote_addr;                      # assigns the source IP ($remote_addr, taken from the HTTP connection) to X-Real-IP, so application code can read the source IP from x-real-ip
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # when nginx acts as a proxy, this list records the IP of each machine and proxy the request passed through, separated by commas; in application code, take the first entry as the source IP, e.g. echo $x_forwarded_for | awk -F, '{print $1}'
For more on x-forwarded-for and x-real-ip, I recommend a blogger's article: "x-forwarded-for in the http request header". That blogger has a whole series of articles on the HTTP protocol that is worth following.
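To make the extraction concrete: each proxy appends the address of its peer, so the left-most entry of X-Forwarded-For is the original client. A minimal Python sketch of that parsing (the header value here is made up for illustration):

```python
def client_ip(x_forwarded_for: str) -> str:
    """Return the left-most (original client) IP from an X-Forwarded-For value."""
    return x_forwarded_for.split(",")[0].strip()

# Each proxy appends its peer's address, so the first entry is the client.
print(client_ip("203.0.113.7, 10.0.0.5, 10.0.0.9"))  # 203.0.113.7
```

Note that X-Forwarded-For is client-controllable unless the first proxy overwrites it, so it should only be trusted behind your own proxies.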
7. Below is the proxy-related section of one of my configuration files, for reference only.
include mime.types;                     # file extension to MIME type mapping table
default_type application/octet-stream;  # default MIME type; the default is text/plain
#access_log off;                        # disable the service log
log_format myformat '$remote_addr-$remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';  # custom log format
access_log log/access.log myformat;     # "combined" is the default log format
sendfile on;                            # allow sendfile file transfer; default off; can be used in http, server and location blocks
sendfile_max_chunk 100k;                # per-call transfer limit for each worker process; default 0, i.e. no upper limit
keepalive_timeout 65;                   # connection timeout, default 75s; valid in http, server and location blocks
proxy_connect_timeout 1;                # timeout for establishing a connection with the proxied server; default 60 seconds
proxy_read_timeout 1;                   # timeout waiting for a response after a read request is sent to the backend group; default 60 seconds
proxy_send_timeout 1;                   # timeout waiting for a response after a write request is sent to the backend group; default 60 seconds
proxy_http_version 1.0;                 # HTTP version used for proxying, 1.0 or 1.1; default 1.0
#proxy_method get;                      # the HTTP method used in requests forwarded to the backend (get/post)
proxy_ignore_client_abort on;           # whether nginx aborts the backend request when the client disconnects; default off
proxy_ignore_headers "Expires" "Set-Cookie";  # header fields from the backend that nginx should not process; several can be listed, separated by spaces
proxy_intercept_errors on;              # when the backend returns a status code of 400 or greater, the configured error_page takes effect; default off
proxy_headers_hash_max_size 1024;       # upper limit of the hash table used to store HTTP headers; default 512 characters
proxy_headers_hash_bucket_size 128;     # bucket size of the hash table that holds HTTP headers; default 64 characters
proxy_next_upstream timeout;            # conditions for passing a request to the next server in the upstream group: error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off
#proxy_ssl_session_reuse on;            # default on; set this to off if the error log shows "SSL3_GET_FINISHED:digest check failed"
Detailed explanation of nginx load balancing
What load balancing algorithms does nginx offer? Let me explain each of them in detail, with working configuration.
First, a word about the upstream configuration: this directive defines a group of proxied server addresses, after which you configure the load balancing algorithm.
upstream mysvr {
    server 192.168.10.121:3333;
    server 192.168.10.122:3333;
}
server {
    ....
    location ~* ^.+$ {
        proxy_pass http://mysvr;  # requests go to the server list defined by mysvr
    }
}
Note that the scheme belongs on proxy_pass, not on the entries inside upstream: writing server http://192.168.10.121:3333; in the upstream block is invalid.
Then, let's do something practical.
1. Hot backup: if you have two servers, when one server fails, the second is brought in to provide service. Order in which the servers handle requests: aaaaaa, then a suddenly goes down: bbbbbbbbbbbbbb....
upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333 backup;  # hot standby
}
2. Round-robin: nginx defaults to round-robin, with all weights defaulting to 1. Order in which the servers handle requests: ababababab....
upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
}
3. Weighted round-robin: requests are distributed to the servers in proportion to the configured weights; if unset, the weight defaults to 1. Request order for the servers below: abbabbabbabbabb....
upstream mysvr {
    server 127.0.0.1:7878 weight=1;
    server 192.168.10.121:3333 weight=2;
}
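To make the abbabb... ordering concrete, here is a naive weighted round-robin sketch in Python (nginx itself uses a smoother interleaving, but the per-cycle request count for each server is the same; the server names are placeholders):

```python
from itertools import chain, cycle


def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs.
    Naive WRR: each cycle, a server with weight w handles w requests."""
    return cycle(chain.from_iterable([name] * w for name, w in servers))


rr = weighted_round_robin([("a", 1), ("b", 2)])
print("".join(next(rr) for _ in range(9)))  # abbabbabb
```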
4. ip_hash: nginx routes requests from the same client IP to the same server.
upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
    ip_hash;
}
5. If the four balancing algorithms above are not quite clear, it may help to look at the diagram I included in a previous article.
Doesn't nginx's load balancing configuration feel remarkably simple and powerful so far? We're not done yet, though; let's keep going.
Nginx's load balancing configuration also supports several state parameters:
down: the current server temporarily does not participate in load balancing.
backup: a reserved backup machine. It receives requests only when all the other non-backup machines fail or are busy, so it is under the least load.
max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.
fail_timeout: how long the server is suspended after max_fails failures. max_fails is used together with fail_timeout.
upstream mysvr {
    server 127.0.0.1:7878 weight=2 max_fails=2 fail_timeout=2;
    server 192.168.10.121:3333 weight=1 max_fails=2 fail_timeout=1;
}
That concludes "Nginx proxy function and load balancer case analysis". Thank you for reading.