
How to solve Qos Prefetch message congestion in a RabbitMQ message queue


This article introduces how to solve Qos Prefetch message congestion in a RabbitMQ message queue. Many people run into this situation in practice, so let the editor walk you through how to handle it. I hope you read it carefully and get something out of it!

ConnectionFactory: the factory class for creating connections.

Connection: can be understood simply as a socket to the broker.

Channel: the interface for interacting with MQ; it defines operations for declaring queues and exchanges, binding a queue to an exchange, and so on.

Next come the classes actually involved in message flow (a minimal setup sketch follows this list):

Exchange: simply think of it as a router. The type is not the key point here; see the official website for details.

Queue: the client listens to a queue, not to an exchange, but the prerequisite for using a queue is binding it to an exchange. It behaves much like a Java queue utility class and should be easy to understand: a queue has a write side and a read side, each with its own rate. Writing fast while reading slowly easily leads to congestion; writing slowly while reading fast leaves consumers idle.

Prefetch: an important but easily overlooked setting, and the one behind the problem encountered this time.
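To make these pieces concrete, here is a minimal setup sketch using the plain RabbitMQ Java client (com.rabbitmq.client); the host, exchange, queue, and routing-key names are placeholders chosen for illustration:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class SetupSketch {
    public static void main(String[] args) throws Exception {
        // ConnectionFactory: creates connections to the broker
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // placeholder host

        // Connection: roughly "a socket" to the broker
        Connection connection = factory.newConnection();

        // Channel: the interface used to talk to MQ
        Channel channel = connection.createChannel();

        // Exchange: think of it as a router
        channel.exchangeDeclare("demo.exchange", "direct", true);

        // Queue: what consumers actually listen to
        channel.queueDeclare("demo.queue", true, false, false, null);

        // A queue only receives from an exchange once they are bound
        channel.queueBind("demo.queue", "demo.exchange", "demo.key");

        channel.close();
        connection.close();
    }
}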

Prefetch and message delivery

Prefetch is the maximum number of unacked messages that a single consumer can hold at one time.

How to understand it?

MQ maintains a buffer for each consumer whose size is prefetch. Each time a message arrives, MQ puts it into the buffer and pushes it to the client. When an ack comes back (the consumer issues a basicAck), MQ frees a slot in the buffer and adds a new message. If the buffer is full, however, MQ stops pushing messages to that consumer.

To be more specific, suppose prefetch is set to 10 and there are two consumers. Each consumer will prefetch up to 10 messages from the queue into its local buffer to await processing, so the channel's unacked count becomes 20. Rabbit delivers in order: 10 messages to consumer1 first, then 10 messages to consumer2. If a new message arrives at this point, Rabbit first checks whether the channel's unacked count equals 20; if so, the message is not delivered to any consumer and stays in the queue. Once a consumer acks a message, the unacked count drops to 19, and Rabbit then determines which consumer has fewer than 10 unacked messages and delivers the new message to that consumer.
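As a sketch of where that buffer size comes from in code (assuming the demo.queue from the earlier sketch), the Java client sets prefetch through Channel.basicQos, and a slot is only freed when the consumer acks:

import com.rabbitmq.client.*;
import java.io.IOException;

public class PrefetchSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // placeholder host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(10);                               // prefetch = 10 unacked messages per consumer
        channel.basicConsume("demo.queue", false,           // autoAck = false: acks are manual
                new DefaultConsumer(channel) {
                    @Override
                    public void handleDelivery(String consumerTag, Envelope envelope,
                                               AMQP.BasicProperties properties, byte[] body)
                            throws IOException {
                        // ... business processing ...
                        // until this ack is sent, the message keeps one of the 10 slots occupied
                        getChannel().basicAck(envelope.getDeliveryTag(), false);
                    }
                });
        // the connection is left open so the consumer keeps receiving (sketch only)
    }
}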

The problem I encountered came from a careless programmer who handled messages like this in his code:

if (success) {
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
} else {
    logger.error("# The message is not delete from queue: {}", body);
}

First of all, he had set the ack mechanism to manual. His understanding was: if a message is processed successfully, send an ack to MQ so that MQ deletes the completed message; otherwise, the message is retained and will be processed again.

The misunderstanding here is about what ack means. On failure, if you want the message to be processed again, you should call basicNack and tell MQ to put the message back on the queue.

if (success) {
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
} else {
    channel.basicNack(message.getMessageProperties().getDeliveryTag(), false, true);
}
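A slightly fuller sketch of the same idea (not the author's original code): wrap the business call in try/catch so that an unexpected exception is also nacked and requeued instead of being left unacked forever. Here handleBusiness is a hypothetical method standing in for the real processing, and the third argument of basicNack (requeue = true) is what puts the message back on the queue:

long tag = message.getMessageProperties().getDeliveryTag();
try {
    boolean success = handleBusiness(message);        // hypothetical business method
    if (success) {
        channel.basicAck(tag, false);                 // done, MQ may remove the message
    } else {
        channel.basicNack(tag, false, true);          // requeue = true: back onto the queue
    }
} catch (Exception e) {
    channel.basicNack(tag, false, true);              // also requeue on unexpected failure
}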

If the client goes down unexpectedly, the server will not delete the un-acked data; but when the consumer restarts, it looks like a brand-new consumer to the server, so its buffer is reset to the full prefetch size. That is why the careless programmer's code passed testing.

What should the size of prefetch be?

There is an article that gives very good advice on this; let me briefly explain my understanding of it.

Ideally, measure the time for the MQ server to take a message from the buffer and push it to the consumer, plus the time for the consumer's ack to travel back to the MQ server. Assume this round trip is 100 ms, of which the consumer spends 10 ms on the actual business processing.

From this we get prefetch = 100 ms / 10 ms = 10, that is, the total round-trip time divided by the business-processing time, so we want prefetch >= 10. In practice this timing can't be measured very precisely, only roughly estimated, so prefetch is generally set a little larger. But the value should not be too large either, otherwise one consumer hoards messages while other consumers sit idle.
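A back-of-the-envelope version of that calculation, using the same illustrative numbers as above:

// Rough prefetch estimate: total round-trip time / business-processing time
long roundTripMs  = 100;  // broker -> consumer -> ack -> broker, assumed
long processingMs = 10;   // time spent on business logic per message, assumed
long prefetch = roundTripMs / processingMs;   // = 10; in practice set it somewhat higher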

So if you have confirmed that all messages are being acked but congestion persists, you can increase the prefetch, add more machines, or reduce the business-processing time. To start with, it is recommended to use multithreading or a thread pool to handle the business logic.
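One way to read the multithreading suggestion without sharing a single Channel across threads (which the Java client discourages) is to run several consumers in the same process, each with its own Channel; the client dispatches their callbacks on its internal thread pool. A sketch reusing the names from the earlier sketches:

import com.rabbitmq.client.*;
import java.io.IOException;

public class ParallelConsumersSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // placeholder host
        Connection connection = factory.newConnection();

        int consumers = 4;                                  // illustrative degree of parallelism
        for (int i = 0; i < consumers; i++) {
            Channel channel = connection.createChannel();   // one channel per consumer
            channel.basicQos(10);                           // prefetch applies per consumer
            channel.basicConsume("demo.queue", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body)
                        throws IOException {
                    // ... business processing (the "10 ms" part) ...
                    getChannel().basicAck(envelope.getDeliveryTag(), false);
                }
            });
        }
    }
}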

This is the end of "How to solve Qos Prefetch message congestion in a RabbitMQ message queue". Thank you for reading. If you want to learn more about the industry, you can follow the website, where the editor will publish more high-quality practical articles for you!
