
How to process 1 million requests per minute in Go language


Today I'd like to talk about how to handle 1 million requests per minute in Go. Many readers may not know much about this topic, so I have summarized the following content in the hope that you get something out of this article.

A naive first approach in Go

Initially we took a very naive approach to the POST handler and simply tried to parallelize the work by dropping each task into its own goroutine:
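The original snippet is not reproduced in this copy of the article. Below is a minimal sketch of what such a handler might have looked like, assuming a hypothetical Payload type with an UploadToS3 method and the standard encoding/json and net/http packages; it illustrates the anti-pattern rather than the authors' exact code.

package main

import (
	"encoding/json"
	"net/http"
)

// Payload is a stand-in for whatever data the real service uploads;
// the actual struct and S3 logic are not shown in this article.
type Payload struct {
	Data []byte
}

// UploadToS3 is assumed to push a single payload to S3 (stubbed out here).
func (p Payload) UploadToS3() error {
	// ... call the S3 SDK here ...
	return nil
}

func payloadHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}

	var payloads []Payload
	if err := json.NewDecoder(r.Body).Decode(&payloads); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// One unbounded goroutine per payload: nothing limits how many run at once.
	for _, p := range payloads {
		go p.UploadToS3()
	}

	w.WriteHeader(http.StatusOK)
}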

For moderate loads this can work for most people, but it quickly proved not to work very well at scale. We were expecting a lot of requests, but not of the order of magnitude we started to see once we deployed the first version to production. We had completely underestimated the traffic.

The approach above is bad in several ways, the most important being that there is no way to control how many goroutines we are spawning. And since we were receiving 1 million POST requests per minute, this code of course fell over very quickly.

Try again

We needed to find a different way. From the very beginning we had discussed keeping the lifetime of the request handler very short and doing the processing in the background. This, of course, is what you must do in the Ruby on Rails world, otherwise you block all of your available web workers, no matter whether you are using puma, unicorn, or passenger (let's leave JRuby out of the discussion). Then we would have reached for the usual solutions, such as Resque, Sidekiq, SQS, and so on; the list goes on, because there are many ways to do this.

So the second version was to create a buffered channel where we could queue jobs and then upload them to S3. Because we could control the maximum number of items in the queue, and we had plenty of RAM to hold the queued jobs in memory, we thought it would be fine to simply buffer jobs in the channel queue.
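As a hedged sketch of what this enqueue side might have looked like (the channel name, capacity, and handler shape below are assumptions, reusing the Payload type and imports from the earlier sketch):

// Illustrative queue capacity; the real limit was configurable.
const maxQueueSize = 100

// Buffered channel acting as the in-memory job queue.
var queue = make(chan Payload, maxQueueSize)

// Revised handler: instead of spawning goroutines, push each payload onto
// the buffered channel. Once the buffer fills up, this send blocks.
func payloadHandler(w http.ResponseWriter, r *http.Request) {
	var payloads []Payload
	if err := json.NewDecoder(r.Body).Decode(&payloads); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	for _, p := range payloads {
		queue <- p
	}

	w.WriteHeader(http.StatusOK)
}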

The actual jobs were then dequeued and processed by a function along these lines:
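The function itself is missing from this copy of the article. A plausible reconstruction, again reusing the queue channel and Payload type assumed above plus the standard log package, is:

// StartProcessor drains the queue with a single goroutine and uploads
// payloads one at a time -- exactly the bottleneck described below.
func StartProcessor() {
	go func() {
		for job := range queue {
			if err := job.UploadToS3(); err != nil {
				log.Printf("upload failed: %v", err)
			}
		}
	}()
}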

To be honest, I have no idea what we were thinking; it must have been a late night fueled by Red Bull. This approach didn't buy us anything. We had traded flawed concurrency for a buffered queue that merely postponed the problem. Our synchronous processor was uploading only one payload at a time to S3, and since the rate of incoming requests was far greater than a single processor's ability to upload to S3, the buffered channel quickly reached its limit and blocked the request handlers from queuing any more items.

We were simply avoiding the problem, and that eventually led to the death of the system: after we deployed this flawed version, our latency kept climbing at a constant rate.

A better solution

Working with Go channels, we decided to follow a common pattern and create a two-tiered channel system: one channel for queuing jobs, and a second to control how many workers operate on the JobQueue concurrently.

The idea was to parallelize the uploads to S3 at a sustainable rate, one that would neither cripple the machine nor start generating connection errors from S3. So we opted for a Job/Worker pattern. For those familiar with languages such as Java or C#, think of this as the Go way of implementing a worker thread pool using channels instead.
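The worker code is not included in this copy of the article. A common way to implement this pattern, assuming the Job struct below (its field is an assumption) plus the Payload type and log package from the earlier sketches, looks roughly like this:

// Job wraps one unit of work; here just the payload to be uploaded.
type Job struct {
	Payload Payload
}

// JobQueue is the buffered channel that request handlers send jobs into.
var JobQueue chan Job

// Worker pulls jobs from its own channel; it advertises that channel in the
// shared worker pool so the dispatcher knows the worker is idle.
type Worker struct {
	WorkerPool chan chan Job
	JobChannel chan Job
	quit       chan bool
}

func NewWorker(pool chan chan Job) Worker {
	return Worker{
		WorkerPool: pool,
		JobChannel: make(chan Job),
		quit:       make(chan bool),
	}
}

// Start runs the worker loop: register as idle, wait for a job, process it.
func (w Worker) Start() {
	go func() {
		for {
			w.WorkerPool <- w.JobChannel
			select {
			case job := <-w.JobChannel:
				if err := job.Payload.UploadToS3(); err != nil {
					log.Printf("upload failed: %v", err)
				}
			case <-w.quit:
				return
			}
		}
	}()
}

// Stop signals the worker to exit after its current job.
func (w Worker) Stop() {
	go func() { w.quit <- true }()
}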

We modified our web request handler to create a Job struct instance with the payload and send it into the JobQueue channel for the workers to pick up.
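A hedged sketch of that revised handler, reusing the Job and JobQueue definitions above (it replaces the earlier payloadHandler versions):

func payloadHandler(w http.ResponseWriter, r *http.Request) {
	var payloads []Payload
	if err := json.NewDecoder(r.Body).Decode(&payloads); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// Wrap each payload in a Job and hand it to the worker pool via JobQueue.
	for _, p := range payloads {
		JobQueue <- Job{Payload: p}
	}

	w.WriteHeader(http.StatusOK)
}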

During the initialization of the web server, we create a Dispatcher and call Run() to create the pool of workers and start listening for jobs appearing on the JobQueue.

dispatcher := NewDispatcher(MaxWorker)
dispatcher.Run()

Here is the code for our dispatcher implementation:
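That code is also missing from this copy; a reconstruction consistent with the Worker sketch above (the unexported maxWorkers field and the channel sizes are assumptions) might look like:

// Dispatcher owns the pool of worker job channels and routes incoming jobs.
type Dispatcher struct {
	// WorkerPool holds the job channels of currently idle workers.
	WorkerPool chan chan Job
	maxWorkers int
}

func NewDispatcher(maxWorkers int) *Dispatcher {
	return &Dispatcher{
		WorkerPool: make(chan chan Job, maxWorkers),
		maxWorkers: maxWorkers,
	}
}

// Run starts the workers and begins dispatching jobs from JobQueue.
func (d *Dispatcher) Run() {
	for i := 0; i < d.maxWorkers; i++ {
		worker := NewWorker(d.WorkerPool)
		worker.Start()
	}
	go d.dispatch()
}

func (d *Dispatcher) dispatch() {
	for job := range JobQueue {
		// Wait for an idle worker to offer its job channel,
		// then hand the job over to that worker.
		jobChannel := <-d.WorkerPool
		jobChannel <- job
	}
}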

Note that we pass in the maximum number of workers to be instantiated and added to the worker pool. Since we use Amazon Elastic Beanstalk for this project with a dockerized Go environment, and we always try to follow the twelve-factor methodology for configuring our systems in production, we read these values from environment variables. That way we can control the number of workers and the maximum size of the job queue, and quickly tune these values without redeploying the cluster.

var (
    MaxWorker = os.Getenv("MAX_WORKERS")
    MaxQueue  = os.Getenv("MAX_QUEUE")
)
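Since os.Getenv returns strings, these values still have to be converted to integers before they can size the worker pool and the job queue. The following is only a sketch of how everything might be wired together in main; the helper function, default values, route, and port are all assumptions, and it additionally requires the strconv, net/http, and log packages:

// atoiOrDefault converts an env-var string to an int, falling back to a
// default when the variable is unset or malformed (helper is hypothetical).
func atoiOrDefault(s string, fallback int) int {
	if n, err := strconv.Atoi(s); err == nil {
		return n
	}
	return fallback
}

func main() {
	maxWorkers := atoiOrDefault(MaxWorker, 100)  // illustrative default
	maxQueue := atoiOrDefault(MaxQueue, 10000)   // illustrative default

	// Size the shared job queue, start the worker pool, then serve HTTP.
	JobQueue = make(chan Job, maxQueue)

	dispatcher := NewDispatcher(maxWorkers)
	dispatcher.Run()

	http.HandleFunc("/payload", payloadHandler) // route and port are illustrative
	log.Fatal(http.ListenAndServe(":8080", nil))
}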

After deploying it, we immediately saw all of our latency drop to insignificant numbers, and our ability to handle requests increased dramatically.

A few minutes after the elastic load balancer had fully warmed up, we saw our Elastic Beanstalk application serving close to 1 million requests per minute. We usually have a few hours in the morning when traffic peaks at over a million requests per minute.

Once we deployed the new code, the number of servers was reduced from 100 to about 20.

After properly configuring the cluster and the auto-scaling settings, we were able to bring it down even further, to only 4x EC2 c4.Large instances, with Elastic Auto-Scaling configured to spawn a new instance if CPU stays above 90% for 5 minutes in a row.

After reading the above, do you have a better understanding of how to handle 1 million requests per minute in Go? If you would like to learn more, please follow our industry information channel. Thank you for your support.
