2025-02-24 Update From: SLTechnology News&Howtos
This article explains how to merge multiple messages into a single message before sending them to an MQ: why it can be worth doing, which business scenarios tolerate it, and the pitfalls encountered on both the producer and consumer side.
Why merge multiple messages into one before sending?
As mentioned earlier: to save costs. At 500,000 ad clicks per minute, a month generates 500,000 × 60 × 24 × 31 ≈ 22.32 billion click messages. Multiplying by 3 (one send, one pull, and one delete per message) gives the number of SQS requests per month, which at $0.40 per million requests comes to about $26,784 a month.
SQS limits a single message to 256 KB, and for this business it is estimated that one SQS message per click is unnecessary. So I merge 256 click messages into one message before sending, or merge however many (fewer than 256) arrive within one second. The monthly bill is then divided by roughly 256, which is not a small saving.
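The cost arithmetic above can be checked in a few lines. The click rate, the $0.40-per-million price, and the batch size of 256 are the article's figures; `sqsMonthlyCost` is just an illustrative helper, not anything from the actual system:

```go
package main

import "fmt"

// sqsMonthlyCost estimates the monthly SQS bill for a given click rate.
// clicksPerMinute: raw click messages per minute.
// batchSize: how many clicks are merged into one SQS message (1 = no merging).
func sqsMonthlyCost(clicksPerMinute, batchSize float64) float64 {
	messages := clicksPerMinute * 60 * 24 * 31 / batchSize // messages per 31-day month
	requests := messages * 3                               // one send + one pull + one delete each
	return requests / 1_000_000 * 0.40                     // $0.40 per million requests
}

func main() {
	fmt.Printf("unbatched: $%.0f\n", sqsMonthlyCost(500_000, 1))   // about $26784
	fmt.Printf("batched:   $%.2f\n", sqsMonthlyCost(500_000, 256)) // about $104.62
}
```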
What kind of business scenario is suitable for this?
Merging a large number of messages into one sacrifices the atomicity of consumption: once 256 messages are merged into one, you cannot guarantee that all 256 are consumed successfully or all fail. So the business must tolerate discarding messages whose consumption fails; whatever the mix of successes and failures, the merged message is deleted from the MQ as a whole. I weighed this before committing, including whether to retry failures, but decided that probability was not worth paying for: before the asynchronous path existed, a failed click was simply a failed click anyway.
How do you merge a large number of messages into a single message without hurting the service's high-concurrency performance?
Strictly speaking, "no impact" does not exist; the goal is only to make the impact negligible. After a long period of observation, I knew this high-concurrency service's memory consumption was modest: at peak QPS it uses about 1.5 GB of heap, and Netty uses about 2 GB of direct memory. On 2-core, 8 GB machines, that leaves enough memory to buffer messages in in-process queues.
When I configured multiple connections between a Dubbo client and server, I used round-robin polling to pick a connection; here I also borrowed the design of Netty's EventLoop to implement merged sending. I define a MessageLoopGroup, which can be configured with some number of MessageLoopers; each MessageLooper is a thread that maintains a blocking queue. The default queue size is 102400, which is the maximum number of file handles I allow a single process to open.
When a click message is pushed to the MessageLoopGroup, an atomic counter is incremented and taken modulo the length of the MessageLooper array to select a MessageLooper; the message is then pushed onto that MessageLooper's blocking queue.
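The article's producer is Java (Dubbo, Netty), but for consistency with the consumer section here is a Go sketch of the push path: a buffered channel stands in for the blocking queue, and the `MessageLooper`/`MessageLoopGroup` names are taken from the article, not from real code.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// MessageLooper owns one blocking queue; in the article each looper is a
// single thread that drains its own queue.
type MessageLooper struct {
	queue chan string // buffered channel standing in for the blocking queue
}

// MessageLoopGroup spreads pushes across its loopers round-robin,
// like the article's Dubbo-connection polling.
type MessageLoopGroup struct {
	loopers []*MessageLooper
	counter uint64 // atomically incremented; counter % len(loopers) picks a looper
}

func NewMessageLoopGroup(n, queueSize int) *MessageLoopGroup {
	g := &MessageLoopGroup{loopers: make([]*MessageLooper, n)}
	for i := range g.loopers {
		g.loopers[i] = &MessageLooper{queue: make(chan string, queueSize)}
	}
	return g
}

// Push selects a looper by atomic-increment-then-modulo and enqueues the message.
func (g *MessageLoopGroup) Push(msg string) {
	i := atomic.AddUint64(&g.counter, 1) % uint64(len(g.loopers))
	g.loopers[i].queue <- msg // real code falls back to a direct MQ send when full
}

func main() {
	g := NewMessageLoopGroup(4, 102400)
	for i := 0; i < 8; i++ {
		g.Push(fmt.Sprintf("click-%d", i))
	}
	fmt.Println(len(g.loopers[0].queue)) // 2: pushes spread evenly across 4 loopers
}
```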
Each MessageLooper's run method is an endless loop that takes messages from its blocking queue; when 256 messages have accumulated, or the first of them has been waiting for more than one second, the accumulated messages are merged into one message and sent to the MQ. If the blocking queue is full, push sends the message to the MQ directly instead. As a result, even if the process is forcibly ended with kill -9 during a restart, at most about one second of data is lost. The 1 s window also bounds message latency.
After a day of grayscale testing in production, the solution proved to have little impact on the service: neither GC behavior nor memory footprint reveals that this extra layer was added. At 500,000 requests per minute spread across four machines, each instance averages about 2,000 requests per second.
Why use Golang to implement the consumer?
Consuming the messages, however, did not go smoothly. Partly because I implemented the consumer in Golang, which I was just learning, so writing the code still felt awkward; partly because each message is a merge of 256 sub-messages.
There were real reasons to use Golang. The original plan was for the consumer to use as little memory as possible, so it could be parasitically deployed on machines belonging to other services, making use of hosts with spare memory and low CPU utilization. Docker would provide fast deployment, and a Go binary keeps the image small with no need to install a JDK. Finally, Go's concurrency support makes it feasible for the consumer to keep up with the rate at which messages are produced.
Paying the price for entry-level Golang
To make this easier to understand, I'll use Java's thread pool as an analogy. Suppose I configure 512 worker threads. A consumer parasitizing another service's machine has to show the host some respect: it cannot eat all the CPU and make the main service unavailable. So the thread count, combined with the cost of consuming messages, must keep CPU utilization under half; 512 was the number chosen.
SQS supports pulling multiple messages at a time and has a visibility-timeout feature: after a message is pulled by a consumer, it is hidden for that long rather than deleted, and if it is not deleted within the timeout it becomes visible again and may be pulled by this or another consumer. I initially set the visibility timeout to 60 s.
At first I started 5 threads pulling messages, up to 10 messages per pull, so up to 50 merged messages could be pulled at once. Since each is a merge of 256 sub-messages, 512 worker threads can consume at most 2 merged messages concurrently, and one merged message takes about 10 seconds on average, so at most 12 merged messages are consumed per minute. The other 38 pass their 60 s visibility timeout within the minute and are pulled again by other consumers, so over time a large number of messages are consumed repeatedly and the backlog keeps growing.
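The mismatch described above can be checked with the article's numbers; `consumablePerMinute` is just a helper for this calculation, not part of the real consumer:

```go
package main

import "fmt"

// consumablePerMinute returns how many merged messages the worker pool can
// finish per minute: workers/subPerMessage merged messages run concurrently,
// and each takes secsPer seconds.
func consumablePerMinute(workers, subPerMessage, secsPer int) int {
	concurrent := workers / subPerMessage // 512/256 = 2 merged messages at once
	return concurrent * (60 / secsPer)    // 2 * 6 = 12 per minute
}

func main() {
	pulled := 5 * 10 // 5 pull threads, up to 10 merged messages per pull
	done := consumablePerMinute(512, 256, 10)
	fmt.Println(done, pulled-done) // 12 consumed per minute; 38 time out and are re-pulled
}
```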
I use Golang channels to implement the producer/consumer handoff. The channel is bounded; when it is full, newly pulled messages cannot be enqueued, so the pulling goroutines block until consumers drain the channel. The blocking time must stay below the message's visibility timeout, because I delete a message from the MQ only when its consumption starts.
The later improvement was to adjust the number of pulling threads, and how many messages each pull fetches, according to consumption capacity. Note also that to keep messages ready for consumption at all times, it is better not to wait until consumption finishes before pulling from the MQ again. But this creates another problem: some messages get pulled to a node whose channel is already full and cannot be enqueued, while other idle consumer nodes cannot get them, so those messages take longer to be consumed. It is a trade-off.
That concludes this look at how MQ messages can be merged before sending. I hope it resolves some doubts; pairing the theory with practice is the best way to learn, so go try it.