2025-01-15 Update From: SLTechnology News&Howtos > Servers
Using examples, this article explains Nginx's rate-limiting configuration from the ground up, as a supplement to the brief official documentation.
Nginx rate limiting uses the leaky bucket algorithm. If you are interested in the algorithm itself, the Wikipedia article is a good starting point, but you can follow this article without knowing it.
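To make the mechanism concrete, here is a minimal leaky-bucket sketch in Python. This is illustrative only: the class and parameter names are invented, and Nginx's actual implementation differs.

```python
class LeakyBucket:
    """Toy leaky bucket: requests drain at a fixed rate; arrivals
    beyond the bucket's capacity are rejected."""

    def __init__(self, rate_per_sec, capacity):
        self.interval = 1.0 / rate_per_sec  # seconds between leaks
        self.capacity = capacity            # max requests held in the bucket
        self.queue = 0                      # requests currently in the bucket
        self.last_leak = 0.0                # timestamp of the last leak

    def allow(self, now):
        # Drain as many requests as the elapsed time permits.
        leaked = int((now - self.last_leak) / self.interval)
        if leaked > 0:
            self.queue = max(0, self.queue - leaked)
            self.last_leak += leaked * self.interval
        if self.queue < self.capacity:
            self.queue += 1
            return True   # accepted
        return False      # bucket full: rejected
```

With capacity 1 (roughly the no-burst configuration below), 10 simultaneous arrivals at rate 10r/s admit only one request; the next is admitted 100 ms later.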
Empty Bucket
We start with the simplest rate-limiting configuration:
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit;
        proxy_pass http://login_upstream;
    }
}
$binary_remote_addr: key the limit on the client IP;
zone=ip_limit:10m: names the rule ip_limit and allots 10 MB of memory for recording per-IP state;
rate=10r/s: limits each IP to 10 requests per second;
location /login/: applies the limit to the login endpoint.
The rate is 10 requests per second. If 10 requests arrive at an idle Nginx simultaneously, can they all be executed?
A leaky bucket drains requests at a constant rate. What does a constant 10r/s mean? One request leaks out every 100 ms.
With this configuration the bucket is empty, so any request that cannot leak out immediately is rejected.
So if 10 requests arrive at the same time, only one is executed and the other nine are rejected.
This is not very friendly; in most business scenarios we expect all 10 requests to be executed.
Burst
Let's change the configuration to solve the problem from the previous section:
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12;
        proxy_pass http://login_upstream;
    }
}
burst=12 sets the size of the leaky bucket to 12.
Conceptually it is a leaky bucket; in the implementation it is a FIFO queue that temporarily holds requests that cannot be executed yet.
The drain rate is still one request per 100 ms, but concurrent requests that cannot be executed right away can now be queued first. New requests are rejected only when the queue is full.
In this way, besides limiting the rate, the leaky bucket also smooths traffic, shaving peaks and filling valleys.
With this configuration, if 10 requests arrive at the same time, they are executed sequentially, one every 100 ms.
They all execute, but queuing greatly increases latency, which is still unacceptable in many scenarios.
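The queuing delays can be sketched with a small helper (a hypothetical model, not Nginx code), assuming the first request runs immediately and each queued request waits one extra drain interval:

```python
def queued_delays(n_requests, rate_per_sec, burst):
    """Delay in seconds for each of n simultaneous requests; None = rejected."""
    interval = 1.0 / rate_per_sec
    delays = []
    for i in range(n_requests):
        if i > burst:                    # one executing + `burst` queued: full
            delays.append(None)          # rejected
        else:
            delays.append(i * interval)  # i-th request waits i drain slots
    return delays
```

For 10 simultaneous requests at 10r/s with burst=12, all are accepted, but the last one waits 900 ms, which illustrates the latency problem.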
NoDelay
Let's continue modifying the configuration to solve the problem of increased latency caused by queuing:
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 nodelay;
        proxy_pass http://login_upstream;
    }
}
nodelay moves the start of request execution forward: previously a request waited until it leaked out of the bucket; now it starts executing as soon as it enters the bucket.
A request is either executed immediately or rejected; it is never delayed by the rate limit.
Because slots in the bucket still free up at a constant rate and the bucket size is fixed, the long-run average is still 10 requests per second, so the goal of rate limiting is achieved.
However, this has a drawback: the rate is limited, but not uniformly. For example, if 12 requests arrive at the same time, all 12 execute immediately, and subsequent requests can then only enter the bucket at the constant rate, one per 100 ms. And if no requests arrive for a while and the bucket empties again, another 12 concurrent requests may execute together.
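With nodelay, the behavior described above resembles a token bucket. The following sketch models it under that assumption (the function is invented for illustration, not Nginx internals):

```python
def simulate_nodelay(arrivals, rate_per_sec, burst):
    """Return True (executed immediately) or False (rejected) per arrival.

    `arrivals` is a sorted list of arrival times in seconds."""
    interval = 1.0 / rate_per_sec
    free = burst                     # free slots in the bucket
    last = arrivals[0] if arrivals else 0.0
    results = []
    for t in arrivals:
        # Slots freed since the previous refill point, capped at burst.
        steps = int((t - last) / interval)
        free = min(burst, free + steps)
        last += steps * interval
        if free > 0:
            free -= 1
            results.append(True)     # executed immediately
        else:
            results.append(False)    # bucket full: rejected outright
    return results
```

Thirteen simultaneous arrivals with burst=12 admit 12 and reject the 13th; 100 ms later one slot has freed up and the next request is admitted immediately.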
In most cases this unevenness is not a big problem. However, Nginx also provides a parameter to control how many requests may execute concurrently:
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 delay=4;
        proxy_pass http://login_upstream;
    }
}
delay=4: start delaying requests from the fifth request in the bucket.
In this way, by tuning the delay parameter, you can adjust how many requests are allowed to execute concurrently and make the traffic more uniform. Controlling this number is necessary for some resource-intensive services.
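This two-stage behavior can be sketched as follows (an invented helper; the exact boundary semantics in Nginx may differ slightly from this model):

```python
def two_stage_delays(n_requests, rate_per_sec, burst, delay):
    """Delay in seconds for each of n simultaneous requests; None = rejected."""
    interval = 1.0 / rate_per_sec
    out = []
    for i in range(n_requests):
        if i > burst:
            out.append(None)                        # queue full: rejected
        elif i < delay:
            out.append(0.0)                         # within delay: runs at once
        else:
            out.append((i - delay + 1) * interval)  # smoothed to the drain rate
    return out
```

With burst=12 delay=4 at 10r/s, the first four simultaneous requests run immediately; the fifth waits 100 ms, the sixth 200 ms, and so on, matching the description above.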
Reference
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
https://www.nginx.com/blog/rate-limiting-nginx/
Summary
The above covers the Nginx rate-limiting configuration. I hope it helps you. If you have any questions, please leave a comment and I will reply promptly. Thank you very much for your support!