
How to Implement Common Rate-Limiting Algorithms in Java


Many readers are not familiar with the material covered in "How to Implement Common Rate-Limiting Algorithms in Java", so this article summarizes it with detailed content and clear steps. I hope you get something useful out of it.

Why rate limit?

Rate limiting lets in as many requests as the system can actually serve, while the remaining requests either wait in a queue or receive a friendly error message. This ensures that users already inside the system can continue to be served properly and prevents a system avalanche (cascading failure).

Rate-limiting algorithms

There are many rate-limiting algorithms; the three most common are the counter algorithm, the leaky bucket algorithm, and the token bucket algorithm.

(1) Counter:

Within a fixed time window, at most a fixed number of requests is processed; anything beyond that is not processed.

(2) Leaky bucket:

The size of the bucket is fixed and the processing (outflow) rate is fixed, but the rate at which requests arrive is not; during a burst, requests that do not fit in the bucket are discarded.

(3) Token bucket:

The size of the bucket is fixed and tokens are generated at a fixed rate, but tokens are consumed (i.e. requests arrive) at a rate that is not fixed and may burst. Each request takes a token out of the bucket; if no token is available, the request is discarded.
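All three implementations shown later in this article expose the same kind of check: a tryAcquire() method that returns true when a request may pass and false when it must be rejected. As a hypothetical way to make that shared contract explicit (the article's own code uses static methods and defines no interface), one could write:

// Hypothetical shared contract for the three limiters implemented below.
public interface Limiter {
    // true if the current request may pass, false if it must be rejected
    boolean tryAcquire();
}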

Counter rate limiting

Within a fixed time window, at most a fixed number of requests is processed; anything beyond that is not processed.

For example, suppose we stipulate that interface A may not be accessed more than 100 times per minute.

We can then do the following:

At the start, we create a counter. Each time a request arrives, the counter is incremented by 1. If the counter exceeds 100 and less than 1 minute has passed since the first request, there are too many requests and access is denied.

If more than 1 minute has passed since the first request and the counter is still within the limit, we simply reset the counter and start a new window.

Code implementation:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Counter rate limiter
public class CounterLimiter {
    // start time of the current window
    private static long startTime = System.currentTimeMillis();
    // window length: 1000 ms
    private static long interval = 1000;
    // maximum number of requests allowed per window
    private static long limit = 3;
    // request counter
    private static AtomicLong accumulator = new AtomicLong();

    /**
     * true means the request is released (still within the limit),
     * false means the request is blocked
     */
    public static boolean tryAcquire() {
        long nowTime = System.currentTimeMillis();
        if (nowTime < startTime + interval) {
            // still inside the current window
            long count = accumulator.incrementAndGet();
            return count <= limit;
        } else {
            // the window has expired: reset it
            synchronized (CounterLimiter.class) {
                // double-check to avoid resetting twice
                if (nowTime > startTime + interval) {
                    startTime = nowTime;
                    accumulator.set(0);
                }
            }
            // count the request against the new window
            long count = accumulator.incrementAndGet();
            return count <= limit;
        }
    }

    // test
    public static void main(String[] args) {
        // thread pool for the multithreaded simulation
        ExecutorService pool = Executors.newFixedThreadPool(10);
        // number of rejected requests
        AtomicInteger limited = new AtomicInteger(0);
        // number of threads
        final int threads = 2;
        // number of rounds per thread
        final int turns = 20;
        // synchronizer
        CountDownLatch countDownLatch = new CountDownLatch(threads);
        long start = System.currentTimeMillis();
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    for (int j = 0; j < turns; j++) {
                        boolean flag = tryAcquire();
                        if (!flag) {
                            // accumulate the number of rejected requests
                            limited.getAndIncrement();
                        }
                        Thread.sleep(200);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                // signal that this thread has finished
                countDownLatch.countDown();
            });
        }
        try {
            countDownLatch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        float time = (System.currentTimeMillis() - start) / 1000F;
        // print the statistics
        System.out.println("Rejected: " + limited.get() + ", passed: " + (threads * turns - limited.get()));
        System.out.println("Rejection ratio: " + (float) limited.get() / (float) (threads * turns));
        System.out.println("Elapsed time: " + time + "s");
    }
}

Shortcomings of counter rate limiting:

Although this algorithm is simple, it has a boundary problem. Suppose a malicious user sends 100 requests in a burst at 0:59 and another 100 requests in a burst at 1:00. That user has then sent 200 requests within a single second.

We stipulated at most 100 requests per minute (the planned throughput), i.e. about 1.7 requests per second on average. By bursting requests around the reset point of the time window, a user can momentarily exceed our rate limit and may exploit this flaw to overwhelm the application in an instant.

Leaky bucket rate limiting

The basic principle of the leaky bucket algorithm: water (requests) flows into the bucket through the inlet, and the bucket leaks water (releases requests) at a fixed rate. When water flows in too fast and the total amount of water exceeds the bucket's capacity, it overflows and those requests are rejected.

The rough leaky bucket rules are as follows:

(1) Water (client requests) flows into the bucket at an arbitrary rate.

(2) The bucket's capacity is fixed, and the outflow (release) rate is also fixed.

(3) The capacity never changes; if processing is too slow, the water in the bucket exceeds the capacity and later drops overflow, which means those requests are rejected.

The leaky bucket algorithm is really just a fill-and-leak process: water pours in at any rate and leaks out at a constant rate, and anything above the capacity is discarded. Because the capacity never changes, the overall rate is guaranteed. Leaking at a constant rate provides:

Peak shaving: when a large burst of traffic arrives, the excess overflows, and the limit keeps the service available.

Buffering: requests do not hit the server directly, which cushions the pressure.

Code implementation:

import java.util.concurrent.atomic.AtomicLong;

// Leaky bucket rate limiter
public class LeakBucketLimiter {
    // bucket capacity
    private static long capacity = 10;
    // outflow rate: 2 per second
    private static long rate = 2;
    // start time
    private static long startTime = System.currentTimeMillis();
    // water remaining in the bucket
    private static AtomicLong water = new AtomicLong();

    /**
     * true means the request is released, false means the request is blocked
     */
    public synchronized static boolean tryAcquire() {
        // if the bucket is empty, release the request directly
        if (water.get() == 0) {
            startTime = System.currentTimeMillis();
            water.set(1);
            return true;
        }
        // subtract the water that has leaked out since startTime
        water.set(water.get() - (System.currentTimeMillis() - startTime) / 1000 * rate);
        // guard against a negative amount of water
        water.set(Math.max(0, water.get()));
        // advance the start time
        startTime += (System.currentTimeMillis() - startTime) / 1000 * 1000;
        // if the bucket is not full, add one drop and release the request
        if (water.get() < capacity) {
            water.incrementAndGet();
            return true;
        } else {
            return false;
        }
    }

    // test harness identical to the main() method of CounterLimiter above
}

Shortcomings of the leaky bucket:

The leaky bucket's outflow rate is fixed, which means the rate at which requests are released is fixed. A fixed outlet rate cannot flexibly exploit improved back-end capacity; for example, if dynamic scaling raises the back end from 1,000 QPS to 10,000 QPS, the leaky bucket cannot take advantage of it.

Token bucket rate limiting

In the token bucket algorithm, each new request takes a token from the bucket; if no token is available, the request is rejected. Of course, the number of tokens also has an upper bound. The number of tokens depends on time and on the issuing rate: as time passes, more tokens are added to the bucket, and if tokens are issued faster than they are consumed, the bucket fills up until it is completely full.

The rough token bucket rules are as follows:

(1) Tokens are put into the bucket at a certain rate.

(2) The bucket's capacity is fixed, but the release rate is not: as long as tokens remain in the bucket, an arriving request claims one and is released immediately.

(3) If tokens are issued more slowly than requests arrive, the bucket runs out of tokens and requests are rejected.

In short, the token issuing rate is configurable, which makes it possible to cope effectively with bursts of traffic.

Code implementation:

import java.util.concurrent.atomic.AtomicLong;

// Token bucket rate limiter
public class TokenBucketLimiter {
    // bucket capacity
    private static long capacity = 10;
    // token issuing rate: 2 per second
    private static long rate = 2;
    // time of the last refill
    private static long lastTime = System.currentTimeMillis();
    // tokens remaining in the bucket
    private static AtomicLong tokenNum = new AtomicLong();

    /**
     * true means the request is released, false means the request is blocked
     */
    public synchronized static boolean tryAcquire() {
        // refill the bucket according to the elapsed time
        long now = System.currentTimeMillis();
        tokenNum.addAndGet((now - lastTime) / 1000 * rate);
        tokenNum.set(Math.min(capacity, tokenNum.get()));
        // advance the refill time
        lastTime += (now - lastTime) / 1000 * 1000;
        // if a token is left, take it and release the request
        if (tokenNum.get() > 0) {
            tokenNum.decrementAndGet();
            return true;
        } else {
            return false;
        }
    }

    // test harness identical to the main() method of CounterLimiter above
}
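As a usage illustration (not from the original article), here is a minimal sketch of how one of the limiters above might guard a request handler and return the "friendly prompt" mentioned earlier when the limit is hit. The class, method name, and messages are hypothetical:

// Hypothetical endpoint that guards its work with the TokenBucketLimiter above.
public class LimitedEndpoint {
    public static String handleRequest(String payload) {
        if (!TokenBucketLimiter.tryAcquire()) {
            // limit reached: return a friendly prompt instead of doing the work
            return "The system is busy, please try again later";
        }
        // limit not reached: do the real work
        return "processed: " + payload;
    }
}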

Benefits of token buckets:

One benefit of the token bucket is that it can easily cope with bursts of traffic and with improved back-end capacity.

For example, the token issuing rate can be raised, and the algorithm will then add tokens to the bucket at the new rate, so that bursts of traffic can be handled.
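For comparison (the article itself does not mention it), Guava's RateLimiter is a widely used token-bucket-style limiter whose issuing rate can be changed at runtime, which is exactly the property described above. A rough sketch, assuming Guava is on the classpath:

import com.google.common.util.concurrent.RateLimiter;

public class GuavaRateLimiterDemo {
    public static void main(String[] args) {
        // start by issuing 2 permits (tokens) per second
        RateLimiter limiter = RateLimiter.create(2.0);

        // a request is released only if a permit is available right now
        boolean allowed = limiter.tryAcquire();
        System.out.println("first request allowed: " + allowed);

        // after the back end has been scaled up, raise the rate to 10 permits per second
        limiter.setRate(10.0);
        System.out.println("new rate: " + limiter.getRate() + " permits/second");
    }
}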

That is all for "How to Implement Common Rate-Limiting Algorithms in Java". I hope the content shared here has been helpful. If you want to learn more about related topics, please follow the industry information channel.
