This article introduces how RabbitMQ can guarantee the ordering of messages. Many people run into this problem in real projects, so let's walk through the common failure scenarios and how to handle them. I hope you read carefully and get something out of it!
Let me give you an example. We once built a MySQL binlog synchronization system under fairly heavy load: it synchronized hundreds of millions of records per day, copying data intact from one MySQL database to another (mysql -> mysql). A common use case is a big data team that needs a synchronized copy of a MySQL database from the company's business systems so it can run various complex operations on that data.
Suppose you insert, update, and then delete a row in MySQL. That produces three binlog events, which are sent to MQ and then consumed and applied one by one. At a minimum they have to be applied in order, right? Otherwise a sequence that was originally insert, update, delete gets replayed as delete, update, insert, which is completely wrong.
The row was supposed to end up deleted after synchronization; because the order was scrambled, the row is still there, and the synchronized data is now wrong.
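To make the failure concrete, here is a minimal sketch in plain Python (not tied to any MQ client; the "table" is just a dict and the event format is made up for illustration) that replays the same three binlog-style events in the correct order and in a scrambled order:

```python
# Minimal illustration: replaying insert/update/delete events in the wrong
# order leaves the row behind instead of deleting it.
# The event tuples and apply() helper are made up for this sketch.

def apply(table, event):
    op, row_id, value = event
    if op in ("insert", "update"):
        table[row_id] = value
    elif op == "delete":
        table.pop(row_id, None)

events = [("insert", 1, "a"), ("update", 1, "b"), ("delete", 1, None)]

correct, scrambled = {}, {}
for e in events:                                # insert -> update -> delete
    apply(correct, e)
for e in [events[2], events[1], events[0]]:     # delete -> update -> insert
    apply(scrambled, e)

print(correct)    # {}          row correctly gone
print(scrambled)  # {1: 'a'}    row wrongly resurrected
```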
Let's look at two scenarios where messages can get out of order:
RabbitMQ: one queue, multiple consumers. For example, the producer sends three messages to RabbitMQ in the order data1, data2, data3, and they are pushed into a single RabbitMQ queue. Three consumers each take one of the three messages from MQ. Consumer 2 happens to finish first and writes data2 to the database, followed by data1 and data3. The order is obviously broken.
Kafka: say we create a topic with three partitions. When the producer writes, it can specify a key, for example an order id. All data for that order is then routed to the same partition, and data within a partition is strictly ordered. When a consumer pulls data from a partition, it also reads it in order, so up to this point nothing is wrong. But the consumer may then use multiple threads to process messages concurrently, because a single-threaded consumer is too slow: if handling one message takes tens of milliseconds, a single thread can only process a few dozen messages per second, and that throughput is far too low. Once multiple threads run concurrently, the processing order can get scrambled.
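On the producer side, keying by order id is what pins related messages to one partition. A minimal sketch, assuming the kafka-python client and a local broker (the topic name, key, and payloads are made up for illustration):

```python
# Sketch: all messages for the same order id share a key, so Kafka's default
# partitioner routes them to the same partition, where they stay in order.
# Assumes a broker at localhost:9092 and an existing 3-partition topic.
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(bootstrap_servers="localhost:9092")

order_id = b"order-1001"                       # hypothetical ordering key
for payload in [b"insert", b"update", b"delete"]:
    producer.send("binlog-events", key=order_id, value=payload)

producer.flush()
```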
Solutions
RabbitMQ: split the work across multiple queues, one consumer per queue (more queues is admittedly more trouble to manage); or keep a single queue with a single consumer, have that consumer maintain internal memory queues keyed by the ordering key, and dispatch messages to different underlying workers, one worker per memory queue. A sketch of the multiple-queue variant follows.
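A minimal sketch of the multiple-queue option, assuming the pika client and a local RabbitMQ broker: the producer hashes the ordering key to pick one of N queues, so every event for a given row always lands in the same queue and is handled by that queue's single consumer (queue names and keys are illustrative):

```python
# Sketch: route messages to one of N queues by hashing the ordering key,
# so each key's events always go to the same queue / same single consumer.
# Assumes RabbitMQ on localhost; queue names and keys are made up.
import pika
import zlib

N = 4
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
for i in range(N):
    channel.queue_declare(queue=f"binlog_{i}", durable=True)

def publish(key: str, body: bytes) -> None:
    # Stable hash (zlib.crc32) so the same key always maps to the same queue;
    # Python's built-in hash() is randomized per process and would not do.
    queue_name = f"binlog_{zlib.crc32(key.encode()) % N}"
    channel.basic_publish(exchange="", routing_key=queue_name, body=body)

publish("row-42", b"insert")
publish("row-42", b"update")
publish("row-42", b"delete")   # all three land in the same queue, in order
connection.close()
```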
Kafka: one topic, one partition, one consumer with single-threaded consumption would preserve order, but single-threaded throughput is too low, so this is generally not used. Instead, create N in-memory queues inside the consumer and route all data with the same key to the same memory queue; then run N threads, each consuming exactly one memory queue. Ordering per key is preserved while throughput scales with the number of threads.
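A minimal sketch of that pattern, assuming kafka-python on the consumer side: one thread pulls from the partition and fans messages out to N in-memory queues by key hash, and N worker threads each drain exactly one queue, so per-key order is preserved while processing runs in parallel (the handler and names are illustrative):

```python
# Sketch: a single consumer thread feeds N memory queues keyed by message key;
# one worker per memory queue keeps per-key ordering while adding parallelism.
# Assumes kafka-python, a broker at localhost:9092, and an existing topic.
import queue
import threading
import zlib
from kafka import KafkaConsumer

N = 4
memory_queues = [queue.Queue() for _ in range(N)]

def handle(msg) -> None:
    print(msg.key, msg.value)        # placeholder for real work (e.g. DB write)

def worker(q: queue.Queue) -> None:
    while True:
        msg = q.get()                # messages for a given key arrive in order
        handle(msg)
        q.task_done()

for q in memory_queues:
    threading.Thread(target=worker, args=(q,), daemon=True).start()

consumer = KafkaConsumer("binlog-events",
                         bootstrap_servers="localhost:9092",
                         group_id="binlog-sync")
for msg in consumer:                 # each partition is read in order
    idx = zlib.crc32(msg.key) % N    # same key -> same memory queue -> same worker
    memory_queues[idx].put(msg)
```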
"rabbitmq how to ensure the sequence of messages" content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!