What is rate limiting?


This article introduces the basics of rate limiting: the common algorithms, and how to apply them in a single process and across a distributed system. Many people run into these problems in practice, so let's work through how to handle them. I hope you find it useful!

Rate limiting algorithms

When we apply rate limiting, there are a few commonly used algorithms: counter-based limiting, token bucket, and leaky bucket.

1. Token bucket

The token bucket algorithm works as follows: the system puts tokens into a bucket at a fixed rate and discards new tokens once the bucket is full. When a request arrives, it first tries to take a token from the bucket; if it gets one, the request proceeds, otherwise it waits or is rejected. The token bucket tolerates a certain amount of burst traffic: as long as tokens remain, requests are served, and a single request may take several tokens at once. A minimal sketch follows.
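To make the principle concrete, here is a minimal token bucket in Java. It is illustrative only, not from any library: the class and field names are our own, and tokens are refilled lazily from elapsed time rather than by a background thread.

public class TokenBucket {
    private final long capacity;      // maximum tokens the bucket can hold
    private final double refillPerMs; // tokens added per millisecond
    private double tokens;            // current token count
    private long lastRefill;          // timestamp of the last refill (ms)

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefill = System.currentTimeMillis();
    }

    // a request may ask for several tokens at once
    public synchronized boolean tryAcquire(int permits) {
        long now = System.currentTimeMillis();
        // refill at the fixed rate; tokens beyond capacity are discarded
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerMs);
        lastRefill = now;
        if (tokens >= permits) {
            tokens -= permits; // bursts are fine while tokens remain
            return true;
        }
        return false; // caller may wait or reject the request
    }
}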

2. Leaky bucket

The leaky bucket algorithm works the other way around: requests flow out of the bucket at a fixed, constant rate, while the inflow rate is arbitrary. When the number of queued requests exceeds the bucket's capacity, new requests wait or are rejected. The leaky bucket therefore forcibly smooths the rate at which requests are processed. A sketch follows.
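Again as a hedged sketch (the names are our own, not a library API), a leaky bucket can be modeled as a water level that drains at a constant rate:

public class LeakyBucket {
    private final long capacity;    // how much the bucket holds
    private final double leakPerMs; // constant outflow rate per millisecond
    private double water;           // current water level (queued requests)
    private long lastLeak;          // timestamp of the last drain (ms)

    public LeakyBucket(long capacity, double leakPerSecond) {
        this.capacity = capacity;
        this.leakPerMs = leakPerSecond / 1000.0;
        this.lastLeak = System.currentTimeMillis();
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // drain at a constant rate no matter how fast requests flow in
        water = Math.max(0, water - (now - lastLeak) * leakPerMs);
        lastLeak = now;
        if (water + 1 <= capacity) {
            water += 1; // accept the request into the bucket
            return true;
        }
        return false; // bucket is full: wait or deny service
    }
}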

3. Counter

The counter is a comparatively simple, blunt algorithm. It is mainly used to cap a total amount of concurrency, such as a database connection pool, a thread pool, or flash-sale concurrency: whenever the total number of requests in a given period exceeds the configured threshold, further requests are limited.

How to apply rate limiting

Having covered the algorithms, we need to decide where and how to apply them. For a typical system, we can rate limit at the access layer, which in most cases can be handled directly by middleware such as Nginx or OpenResty; we can also rate limit at the business layer, which means applying the appropriate algorithm to our specific business requirements.

Business layer rate limiting

At the business layer, a service may run on a single node, on multiple nodes with each user's requests bound to one node, or on multiple nodes without any binding. We therefore need to distinguish between in-process rate limiting and distributed rate limiting.

In-process rate limiting

In-process rate limiting is relatively simple, and Guava is the tool we reach for most often. Let's look at how to limit the total concurrency of an interface, limit the number of requests in a time window, and smooth the limiting with the token bucket and leaky bucket algorithms.

Limit the total concurrency of an interface

You only need to configure a total concurrency limit, use a counter to record each request, and compare it against the limit:

private static int max = 10;
private static AtomicInteger limiter = new AtomicInteger();

if (limiter.incrementAndGet() > max) {
    limiter.decrementAndGet(); // roll back the failed attempt
    System.err.println("maximum limit exceeded");
    return;
}
// ... handle the request, then release the slot:
limiter.decrementAndGet();

Limit the number of requests in a time window

To limit the number of requests to an API within a specified time, you can use Guava's cache to hold the counters and set an expiry time. For example, cap requests at 100 per minute:

private static LoadingCache<Long, AtomicLong> counter = CacheBuilder.newBuilder()
        .expireAfterWrite(1, TimeUnit.MINUTES)
        .build(new CacheLoader<Long, AtomicLong>() {
            @Override
            public AtomicLong load(Long key) throws Exception {
                return new AtomicLong(0);
            }
        });
private static int max = 100;

long curMinutes = System.currentTimeMillis() / (1000 * 60);
if (counter.getUnchecked(curMinutes).incrementAndGet() > max) {
    System.err.println("time window requests exceed limit");
    return;
}

The entries expire after one minute, so the counter effectively resets every minute. This approach can let through more than the limit at a window boundary: for example, if there are no requests for the first 59 seconds and 200 arrive at second 60, the first 100 are accepted, the counter expires and resets to zero, and then the next 100 are accepted as well. This can be addressed with a sliding window, similar to TCP's sliding window idea; a sketch follows.
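As a hedged illustration of that sliding window idea (the class is our own sketch, not a Guava API), one can keep the timestamps of recent requests and count only those inside the trailing window:

import java.util.ArrayDeque;
import java.util.Deque;

public class SlidingWindowLimiter {
    private final int max;       // max requests allowed per window
    private final long windowMs; // window length in milliseconds
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public SlidingWindowLimiter(int max, long windowMs) {
        this.max = max;
        this.windowMs = windowMs;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // evict requests that have slid out of the window
        while (!timestamps.isEmpty() && now - timestamps.peekFirst() >= windowMs) {
            timestamps.pollFirst();
        }
        if (timestamps.size() >= max) {
            return false; // over the limit for the trailing window
        }
        timestamps.addLast(now);
        return true;
    }
}

With a 60-second window and max = 100, the boundary burst above no longer slips through: the 100 requests accepted at second 60 still count against any request arriving before second 120.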

Smooth rate limiting

The counter approach is fairly blunt; the token bucket and leaky bucket algorithms limit more smoothly. You can use the RateLimiter class provided by Guava directly:

RateLimiter limiter = RateLimiter.create(2);
System.out.println(limiter.acquire(4));
System.out.println(limiter.acquire());
System.out.println(limiter.acquire());
System.out.println(limiter.acquire(2));
System.out.println(limiter.acquire());
System.out.println(limiter.acquire());

create(2) means the limiter issues 2 permits per second, i.e. one token is added every 500 ms. acquire() takes a token from it, and the return value is the time spent waiting. The output is as follows:

0.0
1.998633
0.49644
0.500224
0.999335
0.500186

You can see that this algorithm allows a degree of burst: the first call obtains four tokens with zero wait, the next call then waits roughly 2 seconds to pay off that debt, and subsequent single tokens take about 500 ms each (similarly, the two-token acquire pushes an extra wait onto the call after it).

Distributed rate limiting

Most systems today are deployed on multiple nodes, so one piece of business logic may be processed by several processes, and distributed rate limiting becomes essential; a common flash-sale system, for example, may have N business logic nodes running at once.

The conventional approach is to use Redis+lua or OpenResty+lua to make the rate limiting operation atomic while keeping performance high. Redis and OpenResty are both known for their performance, and both provide atomicity, as shown below.

Redis+lua

Redis processes commands in a single thread on the server side and supports executing lua scripts, so implementing the rate limiting logic in a lua script guarantees atomicity:

-- rate limiting key
local key = KEYS[1]
-- rate limit size
local limit = tonumber(ARGV[1])
-- expiration time
local expire = tonumber(ARGV[2])

local current = tonumber(redis.call('get', key) or "0")
if current + 1 > limit then
    return 0
else
    redis.call("INCRBY", key, 1)
    redis.call("EXPIRE", key, expire)
    return current + 1
end

This uses the counter algorithm described above. When calling the script you pass the rate limiting key, the limit size, and the key's validity period; a return value of 0 means the request is rejected, otherwise the current cumulative count is returned. A call sketch follows.
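For illustration, here is a minimal sketch of invoking the script from Java, assuming the Jedis client; the host, port, the key name rate:limit:api, and the limit/expiry values are all hypothetical:

import redis.clients.jedis.Jedis;
import java.util.Arrays;
import java.util.Collections;

public class RedisRateLimit {
    // the same counter script as above, inlined as a Java string
    private static final String SCRIPT =
            "local key = KEYS[1] " +
            "local limit = tonumber(ARGV[1]) " +
            "local expire = tonumber(ARGV[2]) " +
            "local current = tonumber(redis.call('get', key) or '0') " +
            "if current + 1 > limit then return 0 " +
            "else redis.call('INCRBY', key, 1) " +
            "redis.call('EXPIRE', key, expire) " +
            "return current + 1 end";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // at most 100 requests while the key lives for 60 seconds
            Object result = jedis.eval(SCRIPT,
                    Collections.singletonList("rate:limit:api"), // hypothetical key
                    Arrays.asList("100", "60"));
            if (Long.valueOf(0L).equals(result)) {
                System.err.println("request rejected by the rate limiter");
            }
        }
    }
}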

OpenResty+lua

The core of OpenResty is Nginx with many third-party modules added on top. The ngx_lua module embeds lua into Nginx so that Nginx can serve as a web server, and there are other commonly used development modules such as lua-resty-lock, lua-resty-limit-traffic, lua-resty-memcached, lua-resty-mysql, lua-resty-redis, and so on.

In this section we first use the lua-resty-lock module to implement a simple counter-based limiter. The lua code is as follows:

local locks = require "resty.lock"

local function acquire()
    local lock = locks:new("locks")
    local elapsed, err = lock:lock("limit_key")
    local limit_counter = ngx.shared.limit_counter
    -- get the client ip
    local key = ngx.var.remote_addr
    -- rate limit size
    local limit = 5
    local current = limit_counter:get(key)
    -- print the key and current value
    ngx.say("key=" .. key .. ", value=" .. tostring(current))
    if current ~= nil and current + 1 > limit then
        lock:unlock()
        return 0
    end
    if current == nil then
        limit_counter:set(key, 1, 5) -- set the expiry time to 5 seconds
    else
        limit_counter:incr(key, 1)
    end
    lock:unlock()
    return 1
end

The above limits by client IP. Because atomicity must be guaranteed, the resty.lock module is used, and, as in the Redis version, an expiry time resets the counter. Take care to release the lock on every code path. You also need to configure two shared dictionaries:

http {
    ...
    # lua_shared_dict defines a shared memory zone with the given name and size;
    # the zone defined by this directive is visible to all worker processes in Nginx
    lua_shared_dict locks 10m;
    lua_shared_dict limit_counter 10m;
}

The access layer is usually the traffic entry point, and many systems use Nginx there. OpenResty is no exception, and it provides more powerful features, such as the lua-resty-limit-traffic module introduced here, a powerful rate limiting module. Before using lua-resty-limit-traffic, let's look at how to set up OpenResty.

OpenResty installation and use

Download, installation, and configuration

Download it directly from the official site: http://openresty.org/en/download.html. The start, reload, and stop commands (Windows binaries shown) are as follows:

nginx.exe
nginx.exe -s reload
nginx.exe -s stop

Open the IP and port in a browser; if you see "Welcome to OpenResty!", the startup was successful.

Lua script example

First, add the following configuration under the http block of nginx.conf:

http {
    ...
    lua_package_path "/lualib/?.lua;;";   # lua modules
    lua_package_cpath "/lualib/?.so;;";   # c modules
    include lua.conf;                     # include the custom lua configuration file
}

A custom lua.conf holds all lua-related request configuration and sits in the same directory as nginx.conf. For example, a test.lua is configured as follows:

# lua.conf
server {
    charset utf-8;  # set the encoding
    listen 8081;
    server_name _;
    location /test {
        default_type 'text/html';
        content_by_lua_file lua/api/test.lua;
    }
}

All the lua files live under the lua/api directory; for example, the simplest hello world:

Ngx.say ("hello world"); lua-resty-limit-traffic module

lua-resty-limit-traffic provides three kinds of limiting: the maximum number of concurrent connections, the number of requests in a time window, and smooth request limiting, via resty.limit.conn, resty.limit.count, and resty.limit.req respectively. The related documentation, with complete examples, can be found under pod/lua-resty-limit-traffic.

The following three shared dictionaries will be used; configure them in advance under the http block:

http {
    lua_shared_dict my_limit_conn_store 100m;
    lua_shared_dict my_limit_count_store 100m;
    lua_shared_dict my_limit_req_store 100m;
}

Limit the maximum number of concurrent connections

The provided resty.limit.conn limits the maximum number of concurrent connections. The script is as follows:

local limit_conn = require "resty.limit.conn"

local lim, err = limit_conn.new("my_limit_conn_store", 1, 0, 0.5)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.conn object: ", err)
    return ngx.exit(500)
end

local key = ngx.var.binary_remote_addr
local delay, err = lim:incoming(key, true)
if not delay then
    if err == "rejected" then
        return ngx.exit(502)
    end
    ngx.log(ngx.ERR, "failed to limit req: ", err)
    return ngx.exit(500)
end

if lim:is_committed() then
    local ctx = ngx.ctx
    ctx.limit_conn = lim
    ctx.limit_conn_key = key
    ctx.limit_conn_delay = delay
end

local conn = err
if delay >= 0.001 then
    ngx.sleep(delay)
end

The new() parameters are: the dictionary name, the maximum number of concurrent requests allowed, the number of extra burst connections allowed, and the default connection delay.

The commit argument to incoming() is a boolean. When true, the current request is recorded in the store; otherwise the check runs without recording anything.

Return value: if the request does not exceed the conn value given to new(), the method returns 0 as the delay plus the number of concurrent requests (or connections) at the current time. Note that the state stashed in ngx.ctx above exists so the connection can later be released via the module's leaving() call in the log phase.

Limit the number of requests in a time window

The provided resty.limit.count limits the number of requests within a time window. The script is as follows:

local limit_count = require "resty.limit.count"

-- rate limit: 20 requests per 10 seconds
local lim, err = limit_count.new("my_limit_count_store", 20, 10)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.count object: ", err)
    return ngx.exit(500)
end

local key = ngx.var.binary_remote_addr
local delay, err = lim:incoming(key, true)
if not delay then
    if err == "rejected" then
        return ngx.exit(503)
    end
    ngx.log(ngx.ERR, "failed to limit count: ", err)
    return ngx.exit(500)
end

The three parameters of new() are: the dictionary name, the request count threshold, and the window length in seconds after which the count resets.

The commit argument to incoming() is a boolean. When true, the current request is recorded; otherwise the check runs without recording anything.

Return value: if the number of requests is within the limit, the method returns the delay for the current request and the remaining number of allowed requests.

Smoothly limit the request rate

The provided resty.limit.req limits requests in a smoother way, as in the following script:

local limit_req = require "resty.limit.req"

-- limit to 200 requests/sec with a burst allowance of 100 requests/sec;
-- in other words, up to 300 requests/sec may be handled, with the excess delayed
local lim, err = limit_req.new("my_limit_req_store", 200, 100)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.req object: ", err)
    return ngx.exit(500)
end

local key = ngx.var.binary_remote_addr
local delay, err = lim:incoming(key, true)
if not delay then
    if err == "rejected" then
        return ngx.exit(503)
    end
    ngx.log(ngx.ERR, "failed to limit req: ", err)
    return ngx.exit(500)
end

if delay >= 0.001 then
    local excess = err
    ngx.sleep(delay)
end

The three parameters of new() are: the dictionary name, the request rate threshold (requests per second), and the number of excess requests per second allowed to be delayed.

The commit argument to incoming() is a boolean. When true, the current request is recorded, otherwise the check runs without recording; think of it as a switch.

Return value: if the request rate is within the limit, the method returns 0 as the delay for the current request and the (possibly zero) number of excess requests per second.

That concludes "What is rate limiting?". Thank you for reading!
