2025-01-16 Update From: SLTechnology News&Howtos
This article explains common rate-limiting strategies for Java services under high concurrency: the counter (fixed and sliding window), the leaky bucket algorithm, and the token bucket algorithm. The approaches are simple, fast, and practical.
Why rate limiting is needed
A simple analogy: an employee can normally handle 10 tasks a day. If 100 tasks suddenly arrive one day and the employee still tries to handle all of them, there is only one possible outcome: the employee is overwhelmed.
If we knew in advance that 100 tasks were coming, we could cope by temporarily adding staff, queueing the work behind a message queue, and so on.
But often we cannot anticipate such spikes. By Murphy's Law, bad things tend to come in clusters, and a failure at one point can cascade into a global failure (an avalanche). So we need measures to protect the system, and rate limiting is one of them.
In such scenarios, rate limiting lets us shed the excess load without affecting the system as a whole.
Rate-limiting mode: counter (sliding window)
Idea: the first approach that comes to mind for limiting speed is a counter. Keep one counter per second; if the counter exceeds a threshold, requests are arriving too fast and are rejected.
For readability, the original article showed only the main code snippets, as screenshots.
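The screenshots are not reproduced here; as a sketch of the idea, a minimal fixed-window counter limiter might look like this (class and field names are illustrative, not from the original code):

```java
// A minimal fixed-window counter limiter: at most `limit` requests per
// one-second window. Names are illustrative.
public class CounterLimiter {
    private final int limit;
    private long windowStart = System.currentTimeMillis();
    private int count = 0;

    public CounterLimiter(int limit) {
        this.limit = limit;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) { // a new one-second window begins: reset the counter
            windowStart = now;
            count = 0;
        }
        if (count >= limit) {
            return false; // threshold exceeded: requests are coming too fast
        }
        count++;
        return true;
    }

    public static void main(String[] args) {
        CounterLimiter limiter = new CounterLimiter(10);
        int allowed = 0;
        for (int i = 0; i < 20; i++) {
            if (limiter.tryAcquire()) allowed++;
        }
        System.out.println(allowed + " of 20 requests allowed");
    }
}
```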
The problem: the granularity is too coarse and uneven. Within a single second, there is no way to distinguish when requests actually arrive.
Can we refine the granularity? Split one second into ten 100-millisecond buckets, each with its own counter. Readers familiar with TCP/IP will recognize this: to increase throughput while still controlling the transmission rate, TCP uses a "sliding window protocol."
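A minimal sketch of the 10 x 100 ms bucket idea (a reconstruction, not the original screenshot; names are illustrative):

```java
// Sliding-window sketch: one second split into 10 buckets of 100 ms each.
// A request is allowed only while the total over the buckets of the last
// second stays below the limit. Names are illustrative.
public class SlidingWindowLimiter {
    private static final int BUCKETS = 10;     // 10 x 100 ms = 1 second
    private static final long BUCKET_MS = 100;

    private final int limit;
    private final int[] counts = new int[BUCKETS];
    private final long[] bucketStart = new long[BUCKETS];

    public SlidingWindowLimiter(int limit) {
        this.limit = limit;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        int idx = (int) ((now / BUCKET_MS) % BUCKETS);
        long start = now - (now % BUCKET_MS);
        if (bucketStart[idx] != start) { // the slot holds data from an old cycle: reset it
            bucketStart[idx] = start;
            counts[idx] = 0;
        }
        int total = 0;
        for (int i = 0; i < BUCKETS; i++) {
            if (now - bucketStart[i] < 1000) { // only buckets inside the last second count
                total += counts[i];
            }
        }
        if (total >= limit) {
            return false;
        }
        counts[idx]++;
        return true;
    }
}
```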
Even with finer buckets, this still does not enforce a uniform rate.
Moreover, the plain per-second counter has a boundary problem. With a limit of 10 requests per second, 10 requests can arrive in the second half of second one and another 10 in the first half of second two: that is 20 requests within one continuous second, so the limit is effectively bypassed.
Is there a better way?
Rate-limiting mode: leaky bucket algorithm
In life, if a bucket has a small hole and we fill it with water, the water drips out at a constant rate. Can we reproduce this in a program?
Idea: the bucket is the container and each drop is a request. If the bucket is full, the request is rejected; if not, the drop is added and the request is processed.
In the code (shown as a screenshot in the original), each request goes through three steps:
1. Compute how much water has leaked between the previous request and this one.
2. Check how much water remains in the bucket and whether one more drop would overflow it.
3. If it would overflow, reject the request; otherwise add the drop and process the request.
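A minimal Java sketch of these three steps (a reconstruction under the same logic; names are illustrative):

```java
// Leaky-bucket sketch: water drains at a constant rate; each request adds
// one drop; a full bucket rejects the request. Names are illustrative.
public class LeakyBucket {
    private final double capacity;  // how many drops the bucket holds
    private final double leakPerMs; // drops drained per millisecond
    private double water = 0;
    private long lastTime = System.currentTimeMillis();

    public LeakyBucket(double capacity, double leakPerSecond) {
        this.capacity = capacity;
        this.leakPerMs = leakPerSecond / 1000.0;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // step 1: leak the water that drained since the last request
        water = Math.max(0, water - (now - lastTime) * leakPerMs);
        lastTime = now;
        // step 2: would one more drop overflow the bucket?
        if (water + 1 > capacity) {
            return false; // step 3a: overflow, reject the request
        }
        water += 1;       // step 3b: add the drop and process the request
        return true;
    }
}
```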
Many scenarios require not only limiting the average rate but also tolerating some degree of burst. The leaky bucket drains at a fixed rate no matter what, so it may not fit; the token bucket algorithm is more suitable.
In other words: the service has been idle for a while, then many requests arrive at once (within the bucket's capacity), and we want to handle that burst immediately.
Rate-limiting mode: token bucket algorithm
Idea: create tokens at a constant rate and drop them into a bucket. When a request arrives, it tries to take a token: if one is available, the request proceeds as normal business; if not, the request is throttled.
In the code (also a screenshot in the original): when a request arrives, first compute how many tokens the bucket should hold right now. Because this is computed on demand, tokens accrue at a constant rate without a background thread; this is called lazy calculation. Then try to take a token: if one is available, the request is processed; otherwise it is rejected. Since tokens accumulate while the service is idle, an instantaneous burst can be absorbed, after which the rate is limited again.
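A minimal sketch of the lazy-calculation token bucket (a reconstruction; names are illustrative):

```java
// Token-bucket sketch with lazy calculation: tokens accrue at a constant
// rate, computed on demand instead of by a background thread. Names are
// illustrative.
public class TokenBucket {
    private final double capacity;    // maximum tokens (burst size)
    private final double refillPerMs; // tokens minted per millisecond
    private double tokens;
    private long lastTime = System.currentTimeMillis();

    public TokenBucket(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity; // start full, so an idle service can absorb a burst
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // lazy calculation: add the tokens minted since the last call
        tokens = Math.min(capacity, tokens + (now - lastTime) * refillPerMs);
        lastTime = now;
        if (tokens < 1) {
            return false; // no token available: throttle the request
        }
        tokens -= 1;      // take a token and process the request
        return true;
    }
}
```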
The code above can be viewed on GitHub:
https://github.com/hirudy/java_lib/tree/master/src/main/java/com/hirudy/limiter
RateLimiter
Finally, a recommendation for an efficient, ready-made rate limiter.
Google's core Java library, Guava, ships a token-bucket-based rate limiter called RateLimiter. It is easy to use.
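For example (assuming the `com.google.guava:guava` dependency is on the classpath):

```java
// Requires the Guava dependency (com.google.guava:guava) on the classpath.
import com.google.common.util.concurrent.RateLimiter;

public class GuavaRateLimiterDemo {
    public static void main(String[] args) {
        RateLimiter limiter = RateLimiter.create(5.0); // 5 permits per second
        for (int i = 0; i < 10; i++) {
            limiter.acquire(); // blocks until a permit is available
            System.out.println("request " + i + " processed");
        }
    }
}
```

`RateLimiter.create(permitsPerSecond)` builds a smoothly-refilling token bucket; `tryAcquire()` is the non-blocking variant when rejecting is preferable to waiting.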
At this point you should have a deeper understanding of rate-limiting strategies in Java high-concurrency scenarios. Now go try them out in practice!