2025-01-28 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report
This article explains how to limit HTTP resource requests in Nginx: restricting the number of concurrent connections, the request rate, and the per-connection bandwidth.
Limit the number of connections
1. Use the limit_conn_zone directive to define the key and set the parameters of the shared memory zone (worker processes use this zone to share counters for key values). The first parameter specifies the expression to evaluate as the key. The second parameter, zone, specifies the name of the zone and its size:
limit_conn_zone $binary_remote_addr zone=addr:10m;
2. Use the limit_conn directive to apply the limit within a location {}, server {}, or http {} context. The first parameter is the name of the shared memory zone set above, and the second is the number of connections allowed per key:
location /download/ { limit_conn addr 1; }
When the $binary_remote_addr variable is used as the key, the limit is applied per IP address. You can instead use the $server_name variable to limit the number of connections to a given server:
http {
    limit_conn_zone $server_name zone=servers:10m;
    server {
        limit_conn servers 1000;
    }
}
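Related, though not covered above: when a connection limit is exceeded, NGINX rejects the request with a 503 status code by default, and the limit_conn_status directive can change that. A minimal sketch (returning 429 here is an illustrative choice, not something prescribed by this article):

```nginx
location /download/ {
    limit_conn addr 1;
    # Respond with 429 Too Many Requests instead of the default 503
    # when the per-address connection limit is exceeded.
    limit_conn_status 429;
}
```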
Limit the request rate
Rate limiting can be used to mitigate DDoS or CC attacks, or to keep upstream servers from being flooded by too many simultaneous requests. The method is based on the leaky bucket algorithm: requests arrive at the bucket at varying rates and leave the bucket at a fixed rate. Before using rate limiting, you need to configure the global parameters of the leaky bucket:
Key - a parameter, usually a variable, used to distinguish one client from another
Shared memory zone - the name and size of the zone in which the state for these keys is kept (the "leaky bucket")
Rate - the request rate limit, specified in requests per second (r/s) or requests per minute (r/m) (the rate at which the "leaky bucket" empties). Requests per minute are used to specify rates of less than one request per second.
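As a sketch of the per-minute form (the zone name and rate below are illustrative assumptions, not values from this article), a rate slower than one request per second can be declared like this:

```nginx
http {
    # Hypothetical zone keyed by client IP, allowing 30 requests per minute,
    # i.e. one request every two seconds on average.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=30r/m;
}
```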
These parameters are set with the limit_req_zone directive. The directive is defined at the http {} level, which lets you apply different zones and request-overflow parameters in different contexts:
http {
    # ...
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
}
With this configuration, a shared memory zone named one, 10 MB in size, is created. The zone keeps the state of client IP addresses captured by the $binary_remote_addr variable. Note that $remote_addr also contains the client's IP address, but $binary_remote_addr holds a shorter binary representation of it.
You can estimate the optimal size of the shared memory zone from the following data: the $binary_remote_addr value for an IPv4 address is 4 bytes long, and a stored state occupies 64 bytes on 32-bit platforms and 128 bytes on 64-bit platforms. A 1 MB zone therefore holds state for roughly 16,000 IP addresses on 32-bit platforms (about half that on 64-bit platforms).
If storage is exhausted when NGINX needs to add a new entry, it removes the oldest entry. If there is still not enough space for the new record, NGINX returns the 503 Service Unavailable status code, which can be redefined with the limit_req_status directive.
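A sketch of redefining that status code with limit_req_status (choosing 429 is an assumption for illustration; the article itself only mentions the default 503):

```nginx
server {
    location /search/ {
        limit_req zone=one;
        # Return 429 Too Many Requests instead of the default 503
        # when a request is rejected by the rate limiter.
        limit_req_status 429;
    }
}
```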
Once the zone is set, you can use the limit_req directive to limit the request rate anywhere in the NGINX configuration, typically in server {}, location {}, and http {} contexts:
http {
    # ...
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    server {
        # ...
        location /search/ {
            limit_req zone=one;
        }
    }
}
With the above configuration, NGINX will process no more than 1 request per second for the /search/ route; excess requests are delayed so that the overall rate does not exceed the configured limit. When a request arrives at a full bucket (the shared zone one), NGINX responds with a 503 Service Unavailable error (unless limit_req_status sets a different status code).
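A common refinement not shown above is the burst parameter of limit_req, which sizes the "bucket" explicitly; adding nodelay serves queued requests immediately while still enforcing the average rate. A sketch under those assumptions (the burst value of 5 is illustrative):

```nginx
location /search/ {
    # Permit short bursts of up to 5 requests above the 1 r/s average;
    # with nodelay, burst requests are served at once rather than queued.
    limit_req zone=one burst=5 nodelay;
}
```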
Limit bandwidth
To limit the bandwidth of each connection, use the limit_rate directive:
location /download/ { limit_rate 50k; }
With this setting, a client can download content at a maximum of 50 KB per second over a single connection. However, a client can open several connections to get around this limit. So if the goal is to cap the overall download speed, the number of connections should be limited as well, for example to one connection per IP address (using the shared memory zone defined earlier):
location /download/ { limit_conn addr 1; limit_rate 50k; }
To impose the limit only after the client has downloaded a certain amount of data, use the limit_rate_after directive. It can be reasonable to let clients download an initial amount of data quickly (for example, a file header or a movie index) and throttle the rest, so users can watch a movie while it downloads rather than downloading it at full speed:
limit_rate_after 500k;
limit_rate 20k;
The following example shows a combined configuration to limit the number of connections and bandwidth. The maximum number of connections allowed is set to 5 connections per client address, which applies to most common situations, because modern browsers usually open up to 3 connections at a time. At the same time, only one connection is allowed in the location where the download is provided:
http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    server {
        root /www/data;
        limit_conn addr 5;
        location / {
        }
        location /download/ {
            limit_conn addr 1;
            limit_rate_after 1m;
            limit_rate 50k;
        }
    }
}
That is all on how to limit HTTP resource requests in Nginx. Hopefully the above is helpful.