How to write an open source distributed message queue (equeue) in C#

This article introduces how to write an open source distributed message queue, equeue, in C#. Many people have questions about how such a queue can be implemented in C#, so this article walks through the core concepts and the key design problems behind equeue. Let's start with the basic concepts.
Topic
A topic is a logical category of messages. In a system we can classify messages by topic, and then send messages to different queues according to their topic.
Queue
Under a topic we can set up multiple queues, and each queue is what we usually call a message queue. Because a queue always belongs to a particular topic, when we send a message we always specify which topic it belongs to, and equeue then knows how many queues exist under that topic. But which queue should the message go to? For example, if a topic has 4 queues, which of them should receive a given message? There has to be a routing step. The current approach in equeue is that, when sending a message, the user specifies the message's topic plus an object-typed routing parameter. Equeue looks up all the queues of the topic, computes a hash code from the routing object, and uses it to pick the queue index, so it knows which queue to send to. This routing happens on the sending side, i.e. in the producer, rather than on the message server, so that users can decide for themselves how messages are routed, which gives greater flexibility.
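A minimal sketch of that producer-side routing, assuming the producer already knows the topic's queue count; the helper name and exact hash rules are illustrative assumptions, not equeue's actual API:

using System;

public static class QueueRouter
{
    // Picks a queue index for a message, given the routing key supplied by the
    // user and the number of queues the topic currently has.
    public static int GetQueueIndex(object routingKey, int queueCount)
    {
        if (queueCount <= 0) throw new ArgumentException("Topic has no queues.", nameof(queueCount));

        // Mask off the sign bit so negative hash codes still map to a valid index.
        // GetHashCode is the simplest choice; a real producer may use a more stable hash.
        var hash = routingKey != null ? routingKey.GetHashCode() : 0;
        return (hash & 0x7FFFFFFF) % queueCount;
    }
}

// Usage: messages with the same routing key (e.g. an order id) always land in
// the same queue, which preserves their relative order per key.
// var queueId = QueueRouter.GetQueueIndex(orderId, queueCountOfTopic);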
Producer
The producer of messages. The essence of a message queue is to implement the publish-subscribe pattern, that is, the producer-consumer model: producers produce messages and consumers consume them. So the producer here is responsible for creating and sending messages.
Consumer
The consumer of messages; a single message can be consumed by multiple consumers.
Consumer Group
Consumer group may be a new concept for some readers. It exists in order to support the cluster consumption described below. A consumer group contains a number of consumers; if those consumers consume in cluster mode, the messages subscribed to by the group are divided evenly among them.
Broker
In equeue the broker is responsible for relaying messages: it receives the messages sent by producers, persists them to disk, and then serves the pull requests sent by consumers, returning the corresponding messages. The broker can therefore be understood as the message queue server, providing the services of receiving, storing and delivering messages. Clearly the broker is the core of equeue; it must not go down, because once it does, producers and consumers can no longer publish or subscribe.
Cluster consumption
Cluster consumption means that the consumers in a consumer group divide the queues of a topic evenly among themselves. You can look at the architecture diagram below; here is a brief description in words first. Suppose a topic has 4 queues and there is one consumer group with 4 consumers: each consumer is assigned one queue, so the queues of the topic are consumed evenly. If the group has only 2 consumers, each consumer consumes 2 queues. If it has 3 consumers, the first consumes 2 queues and the other two consume 1 queue each, which is as even as possible. We should therefore try to keep the number of consumers in a group equal to the number of queues of the topic, or such that the queue count is a whole multiple of the consumer count; that way every consumer handles the same number of queues and the load on each consumer server is roughly the same. This assumes the number of messages in each queue of the topic is roughly equal, which we can ensure by routing messages with a hash over a user-defined key.
Broadcasting consumption
Broadcast consumption means that as long as a consumer subscribes to a topic, it receives the messages of all queues under that topic, regardless of which consumer group it belongs to. So for broadcast consumption the consumer group has no practical meaning. When a consumer is instantiated we can specify whether it uses cluster consumption or broadcast consumption.
Consumption Progress (offset)
Consumption progress means that while a consumer in a consumer group consumes messages in a queue, equeue records the consumption position (offset) so it knows how far consumption has advanced; after the consumer restarts, consumption continues from that position. For example, if a topic has 4 queues and a consumer group has 4 consumers, each consumer is assigned one queue and consumes messages from its own queue. Equeue records each queue's consumption progress separately, so after a restart every consumer knows where to resume. In fact, after a restart a queue may no longer be consumed by the same consumer but by another consumer in the group; that is fine, because the consumption position of the queue has already been recorded. So the consumption position has nothing to do with the consumer: it is entirely an attribute of the queue, recording how far that queue has been consumed.

Another important point is that a topic can be subscribed to by consumers in multiple consumer groups. Even if consumers in different groups consume the same queue of the same topic, their consumption progress is stored separately; consumption by different consumer groups is completely isolated and does not affect each other. Also, cluster consumption and broadcast consumption store progress differently: for cluster consumption the progress is kept on the broker (the message queue server), while for broadcast consumption it is kept on the consumer's own local disk. The reason is that, for cluster consumption, the consumer of a queue can change, because the number of consumers in the group can grow or shrink, after which the queues are reassigned. When a queue gets a new consumer, how does that consumer know where to resume? If the progress were stored on the previous consumer's server, it would be hard to obtain, because that server might be down or decommissioned. Since the broker is always serving all consumers, in the cluster-consumption case the consumption positions of the subscribed topic's queues are stored on the broker, isolated by consumer group, so that the progress of different groups does not affect each other. For broadcast consumption, the consumer of a queue never changes in this way, so there is no need to store the position on the broker; it is kept on the consumer's own machine.
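A minimal sketch of a broker-side consumption-progress store following the description above: progress is an attribute of the queue, isolated per consumer group. The class and method names are illustrative assumptions, not equeue's actual API:

using System.Collections.Concurrent;

public class ConsumeOffsetStore
{
    // Key: "group|topic|queueId", Value: the last consumed offset of that queue.
    private readonly ConcurrentDictionary<string, long> _offsets = new ConcurrentDictionary<string, long>();

    private static string KeyOf(string group, string topic, int queueId)
        => $"{group}|{topic}|{queueId}";

    // Called when a consumer reports how far it has consumed a queue.
    public void UpdateOffset(string group, string topic, int queueId, long offset)
        => _offsets[KeyOf(group, topic, queueId)] = offset;

    // Called when a (possibly new) consumer asks where to resume; -1 means "never consumed".
    public long GetOffset(string group, string topic, int queueId)
        => _offsets.TryGetValue(KeyOf(group, topic, queueId), out var offset) ? offset : -1L;
}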
What is equeue?
The figure above (not reproduced here) gives an intuitive view of equeue. The picture is taken from the rocketmq design documentation; since equeue follows exactly the same design ideas as rocketmq, I reused it. Each producer can send messages to a topic, and a routing policy (which the producer can customize) decides which specific queue each message goes to. Consumers then consume the messages in the queues of the topics they subscribe to. In the figure, TOPIC_A has two consumers in the same group, so they should share TOPIC_A's queues evenly; but because there are 3 queues, the first consumer gets 2 queues and the second gets 1. TOPIC_B has only one consumer, so all of TOPIC_B's queues are consumed by it. All topic information, queue information and the messages themselves are stored on the broker server; the figure does not show this, because it focuses on the relationships among producer, consumer, topic and queue rather than the physical deployment architecture.
Thinking about the key problems
1. How producer, broker and consumer communicate
Because equeue is implemented in C# and is usually deployed inside a LAN, we can use asynchronous sockets for high-performance communication; .NET itself provides good support for asynchronous socket programming. We could also have used zeromq for high-performance socket communication. I originally intended to build the communication module directly on zeromq, but after studying .NET socket programming I found it was not that hard, so I implemented one myself. The advantage of rolling my own is that I can define the message protocol myself. This part of the code now lives as an independent, business-agnostic class library inside the ecommon base library; if you are interested you can download it and read the code. Performance tests show the communication module performs well: with one broker and four producers sending to it concurrently, 40,000 messages per second is no problem; more producers have not been tested.
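To illustrate the idea only, here is a minimal asynchronous .NET socket server sketch; equeue's real remoting layer (in ecommon) defines its own framing protocol and is far more complete, so treat this purely as a toy:

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public static class EchoServer
{
    public static async Task RunAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client); // handle each connection concurrently
        }
    }

    private static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (var stream = client.GetStream())
        {
            var buffer = new byte[4096];
            int read;
            // Echo whatever the "producer" sends back to it, as a stand-in for
            // parsing a length-prefixed request and replying with a result.
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await stream.WriteAsync(buffer, 0, read);
            }
        }
    }
}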
2. How messages are persisted
The main considerations for message persistence are write performance and how to read messages back quickly.
1) First, messages do not need to stay on the broker forever, because they are continually being consumed. Following the design of Alibaba's rocketmq, messages that have been consumed are deleted by default once a day. So messages on the broker do not grow without bound, since they are removed periodically, and we do not have to worry about the broker being unable to hold them.
2) How to persist messages quickly? Broadly there are two approaches: a) write disk files sequentially; b) use an off-the-shelf key-value nosql store. rocketmq currently writes its own files. The difficulty of that approach is that the file writing is complex: all messages are appended sequentially to the end of a file, which gives very high performance but also high complexity. For example, all messages cannot live in a single file, so a file must be split once it reaches a certain size, and splitting brings a lot of problems; how to read after splitting is also complicated. And because the file is written sequentially, we must also record each message's start position and length in the file, so that when a consumer consumes a message it can be located in the file by offset. In short there is a lot to consider. If we persist messages with a nosql store instead, all the file-writing problems go away; we only need to care about how to map a message's key to the message's offset in its queue.

Another question: does the information in the queue itself need to be persisted? First, be clear about what a queue holds. When the broker receives a message it must persist it first, and only then put the message into the queue. Since memory is limited, we cannot put the message body itself into the queue; what we actually put there is the message's key in the nosql store, or, if messages are persisted in a file, the message's offset in that file, i.e. where in the file it is stored. So a queue is really just an index over messages. Does the queue need to be persisted? It can be, so that when the broker restarts, rebuilding the queues takes less time. Does it need to be persisted synchronously along with the messages? Clearly not: we can persist each queue asynchronously, and on recovery first restore from the persisted part, then fill in the rest from the persisted messages, which compensates for the part of the queue that lags behind due to asynchronous persistence. So, after this analysis: the messages themselves go into the nosql store, and the queues live entirely in memory.
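A simplified sketch of this "queue as index" idea: the queue holds no message bodies, only a mapping from its own local sequence number to the message's global offset in the store. This is an illustrative assumption about what such a queue object might look like, not equeue's actual Queue class:

using System.Collections.Concurrent;
using System.Threading;

public class Queue
{
    private long _currentOffset = -1;
    // local queueOffset -> global message offset in the message store
    private readonly ConcurrentDictionary<long, long> _messageOffsetDict = new ConcurrentDictionary<long, long>();

    public string Topic { get; }
    public int QueueId { get; }

    public Queue(string topic, int queueId)
    {
        Topic = topic;
        QueueId = queueId;
    }

    // Reserves the next local sequence number for an incoming message.
    public long IncrementCurrentOffset() => Interlocked.Increment(ref _currentOffset);

    // Records where the message at this local sequence number lives globally.
    public void SetMessageOffset(long queueOffset, long messageOffset)
        => _messageOffsetDict[queueOffset] = messageOffset;

    // Used when a consumer asks for the message at a given queue position.
    public bool TryGetMessageOffset(long queueOffset, out long messageOffset)
        => _messageOffsetDict.TryGetValue(queueOffset, out messageOffset);
}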
So how exactly are messages persisted? I think the simplest way is to give every message a global sequence number. Once the message is written to the nosql store, its global sequence number is fixed; then, when we update the corresponding queue, we hand that global sequence number to the queue, and the queue establishes a mapping between the message's local sequence number and its global sequence number. The related code is as follows:
public MessageStoreResult StoreMessage(Message message, int queueId)
{
    var queues = GetQueues(message.Topic);
    var queueCount = queues.Count;
    if (queueId >= queueCount || queueId < 0)
    {
        throw new InvalidQueueIdException(message.Topic, queueCount, queueId);
    }
    var queue = queues[queueId];
    var queueOffset = queue.IncrementCurrentOffset();
    var storeResult = _messageStore.StoreMessage(message, queue.QueueId, queueOffset);
    queue.SetMessageOffset(queueOffset, storeResult.MessageOffset);
    return storeResult;
}

Nothing explains the problem better than code. The idea is: the method receives a message object and a queueId, where queueId indicates which queue the message should go into. Internally it first gets all the queues of the message's topic; since queues and topics are both in memory, there is no performance problem here. It then checks whether the queueId passed in is valid. If it is, it locates the queue, calls IncrementCurrentOffset to increment the queue's internal sequence number by one and return it, and then persists the message, persisting the queueId and queueOffset along with it; when that completes, the message's global sequence number is returned. The messageStore internally saves the message content, queueId, queueOffset and the message's global sequence number together as one unit in the nosql store: the key is the message's global sequence number and the value is that unit, serialized to binary. Then the queue's SetMessageOffset method is called to establish the mapping between the queueOffset and the message's global offset. Finally, a result is returned. The in-memory implementation of messageStore.StoreMessage looks roughly like this:

public MessageStoreResult StoreMessage(Message message, int queueId, long queueOffset)
{
    var offset = GetNextOffset();
    _queueCurrentOffsetDict[offset] = new QueueMessage(message.Topic, message.Body, offset, queueId, queueOffset, DateTime.Now);
    return new MessageStoreResult(offset, queueId, queueOffset);
}

GetNextOffset returns the next global message sequence number, and QueueMessage is the "unit" mentioned above. Because this is an in-memory implementation, a ConcurrentDictionary is used to hold the queueMessage objects. If the messageStore were implemented with a nosql store, this method would write to that store instead, with the message's global sequence number as the key and the binary-serialized queueMessage as the value. From the above we know that the message's global sequence number, the queueId and the queueOffset are persisted together as one record, which gives two very useful properties: 1) persisting the message and persisting the message's position in its queue become a single atomic operation; 2) we can always rebuild all the queues from the persisted queueMessages, because a queueMessage contains both the message and its position in its queue. With this storage scheme, when a consumer wants to consume the message at a certain position, we first find the queue by queueId, then use the queueOffset passed by the consumer to look up the message's global offset, and then use that global offset as the key to fetch the message from the nosql store. In fact the current equeue pulls messages in batches: one socket request pulls not a single message but a batch, 32 messages by default, so the consumer can get more messages with fewer network requests and consume faster.

3. Details of message routing when the producer sends a message

How does the producer know how many queues the current topic has when it sends a message? Should it query the broker on every send? Obviously not, since send performance would never be acceptable that way. So what do we do? Asynchrony. The producer periodically sends a request to the broker to fetch the number of queues under each topic and caches it locally, so that when it sends a message it only reads the local cache. Because the number of queues of a topic on the broker rarely changes, this cache works well. One problem remains: the very first time the producer sends a message to a topic, where does the queue count come from? The timer thread does not yet know which topic's queue count to fetch, because at that point no message for that topic has ever been sent from this producer. So we add a check: if the current topic has no cached queue count, fetch it from the broker directly, cache it, and then send the current message. On the second send the count is already in the cache, so there is no need to ask the broker again, and from then on the timer thread keeps the topic's queue count up to date automatically. Now that the producer has the topic's queue count, the framework can pass this queueCount to the user when a message is sent, and the user can route the message to whichever queue they want.

4. How consumer load balancing is implemented

Consumer load balancing means that, under cluster consumption, the consumers in the same consumer group should consume the queues of the same topic evenly. So load balancing is essentially the process of distributing queues evenly among consumers. How is it implemented? From the definition, to do load balancing we must first determine the consumer group and the topic; then obtain all consumers in the group and all queues of the topic; with those, for the current consumer, we can compute which queues it should be assigned. The following function computes the queues that the current consumer should get.

using System;
using System.Collections.Generic;

public class AverageAllocateMessageQueueStrategy : IAllocateMessageQueueStrategy
{
    public IEnumerable<MessageQueue> Allocate(string currentConsumerId, IList<MessageQueue> totalMessageQueues, IList<string> totalConsumerIds)
    {
        var result = new List<MessageQueue>();
        if (!totalConsumerIds.Contains(currentConsumerId))
        {
            return result;
        }

        var index = totalConsumerIds.IndexOf(currentConsumerId);
        var totalMessageQueueCount = totalMessageQueues.Count;
        var totalConsumerCount = totalConsumerIds.Count;
        var mod = totalMessageQueueCount % totalConsumerCount;
        var size = mod > 0 && index < mod
            ? totalMessageQueueCount / totalConsumerCount + 1
            : totalMessageQueueCount / totalConsumerCount;
        var averageSize = totalMessageQueueCount <= totalConsumerCount ? 1 : size;
        var startIndex = mod > 0 && index < mod
            ? index * averageSize
            : index * averageSize + mod;
        var range = Math.Min(averageSize, totalMessageQueueCount - startIndex);

        for (var i = 0; i < range; i++)
        {
            result.Add(totalMessageQueues[(startIndex + i) % totalMessageQueueCount]);
        }
        return result;
    }
}
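As a quick sanity check of the arithmetic above, here is a tiny standalone restatement of the same allocation logic using plain integers (a hypothetical helper, not part of equeue), showing how 4 queues are split among 3 consumers: the first consumer gets queues 0 and 1, the second gets queue 2, and the third gets queue 3.

using System;
using System.Collections.Generic;

public static class AllocationDemo
{
    // Restates the allocation arithmetic above with plain ints so it can run
    // without equeue's MessageQueue type. Purely illustrative.
    public static List<int> Allocate(int consumerIndex, int queueCount, int consumerCount)
    {
        var result = new List<int>();
        var mod = queueCount % consumerCount;
        var size = mod > 0 && consumerIndex < mod ? queueCount / consumerCount + 1 : queueCount / consumerCount;
        var averageSize = queueCount <= consumerCount ? 1 : size;
        var startIndex = mod > 0 && consumerIndex < mod ? consumerIndex * averageSize : consumerIndex * averageSize + mod;
        var range = Math.Min(averageSize, queueCount - startIndex);
        for (var i = 0; i < range; i++)
        {
            result.Add((startIndex + i) % queueCount);
        }
        return result;
    }

    public static void Main()
    {
        // 4 queues, 3 consumers: prints [0,1], [2], [3]
        for (var consumerIndex = 0; consumerIndex < 3; consumerIndex++)
        {
            Console.WriteLine($"consumer {consumerIndex}: [{string.Join(",", Allocate(consumerIndex, 4, 3))}]");
        }
    }
}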
I will not analyze the function's implementation in detail. Its purpose is to return, for the given inputs, the queues the current consumer should be assigned, and the principle is to distribute them as evenly as possible. With this function, load balancing is easy to achieve: each running consumer runs a scheduled job which, every so often, executes this function to get the latest set of queues bound to the current consumer. Because every consumer has a groupName attribute indicating which group it belongs to, during load balancing it can ask the broker for all consumers in its group; and because every consumer knows which topics it subscribes to, it can also obtain all queues of those topics. With these two pieces of information each consumer can do its own load balancing. Take a look at the following code first:
_scheduleService.ScheduleTask(Rebalance, Setting.RebalanceInterval, Setting.RebalanceInterval);
_scheduleService.ScheduleTask(UpdateAllTopicQueues, Setting.UpdateTopicQueueCountInterval, Setting.UpdateTopicQueueCountInterval);
_scheduleService.ScheduleTask(SendHeartbeat, Setting.HeartbeatBrokerInterval, Setting.HeartbeatBrokerInterval);
Each consumer starts three scheduled tasks. The first does load balancing periodically; the second periodically refreshes the queueCount of every topic the consumer subscribes to and caches the latest value locally; the third sends a periodic heartbeat to the broker, so the broker can tell from the heartbeats whether a consumer is still alive. All consumer information is maintained on the broker: when a new consumer appears, or a heartbeat stops arriving in time, the broker treats it as a consumer joining or dying. Because the broker holds all consumer information, it can provide query services, such as returning all consumers under a given consumer group.
Through these three scheduled tasks, consumer load balancing is achieved. Let's look at the Rebalance method first:
private void Rebalance()
{
    foreach (var subscriptionTopic in _subscriptionTopics)
    {
        try
        {
            RebalanceClustering(subscriptionTopic);
        }
        catch (Exception ex)
        {
            _logger.Error(string.Format("[{0}]: rebalanceClustering for topic [{1}] has exception", Id, subscriptionTopic), ex);
        }
    }
}
The code is very simple: it just load-balances each subscribed topic by calling RebalanceClustering.
The idea of RebalanceClustering is as follows: first get all consumers for the given consumer group and topic, then sort them. The sorting ensures that, during load balancing, the existing assignment changes as little as possible. Next, get all queues of the topic from the local cache and sort them by queueId as well. Then call the allocation algorithm above to work out which queues the current consumer should be assigned. Finally, call the UpdatePullRequestDict method to handle the queues that were added or removed: for each newly assigned queue, a dedicated worker thread is started to pull messages from the broker; for each removed queue, the corresponding worker is stopped and no longer pulls messages.
From the above we now know how equeue achieves consumer load balancing. Note that the queue information for each topic is refreshed asynchronously on a timer, load balancing itself runs on a timer, and the consumer information maintained on the broker is not strictly up to date either, because each consumer's heartbeat is not sent in real time but periodically, for example every 5 seconds. Everything here is asynchronous, so during rebalancing it is possible that the same queue is consumed by two consumers at the same time. This is the classic trade-off: we can guarantee that a message is consumed at least once, but equeue itself does not guarantee that a message is consumed exactly once. Like rocketmq, equeue deliberately gives up exactly-once delivery (it is too costly and too complex; in a distributed environment it is essentially impractical to guarantee that a message is consumed only once) and instead guarantees at-least-once delivery. So, when using equeue, the application must handle message idempotency itself.
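A minimal sketch of application-side idempotent handling, assuming each message carries a unique id (the messageId parameter here is an assumption for illustration, not a specific equeue property): the handler remembers the ids it has already processed and skips duplicates.

using System.Collections.Concurrent;

public class IdempotentHandler
{
    // Ids of messages that have already been handled. A real application would
    // persist this (e.g. a database table with a unique constraint) so the check
    // survives restarts; an in-memory set is only for illustration.
    private readonly ConcurrentDictionary<string, bool> _handledMessageIds = new ConcurrentDictionary<string, bool>();

    public void Handle(string messageId, byte[] body)
    {
        // TryAdd returns false if the id was already recorded, i.e. the message
        // was delivered more than once; in that case we simply skip it.
        if (!_handledMessageIds.TryAdd(messageId, true))
        {
            return;
        }
        // ... actual business processing of the message body goes here ...
    }
}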
5. How to realize real-time message push
There are generally two models for delivering messages in real time: push and pull. Push means the broker actively pushes messages to every consumer subscribed to a topic; pull means consumers actively pull messages from the broker. The biggest advantage of push is low latency: as soon as a new message arrives it is pushed to consumers immediately. The drawback is that if consumers cannot keep up, messages are still pushed at them and the consumer side gets backed up. Pulling can be implemented in two ways: 1) polling, for example checking for new messages every 5 seconds; the drawback is that delivery is not real-time, although the consumer fully controls its own consumption pace. 2) Long polling: instead of repeated polling, the connection between consumer and broker is kept open, and as soon as the broker has a new message it uses that connection to deliver it to the consumer.
equeue currently pulls messages over long polling, built on persistent socket connections. The connection is not held open forever, though; there is a timeout. For example, if a long-poll request has been held for 15 seconds without any new message, the broker sends a reply telling the consumer there is nothing new, and on receiving it the consumer issues the next long-poll request. If a new message does arrive on the broker within those 15 seconds, the broker immediately uses the held connection to notify the corresponding consumer and hand over the message. In other words, when the broker handles a consumer's pull request and there is currently no new message, it holds the socket connection for at most 15 seconds; past that it replies that there is no message, and the consumer sends the pull request again. This long-polling pull gives us both benefits at once: 1) messages are delivered in real time; 2) the consumer still controls its own consumption progress.
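A minimal sketch of the broker-side "hold the request" idea described above, using a TaskCompletionSource as the suspended reply; the type and member names are illustrative assumptions, not equeue's actual remoting code:

using System;
using System.Threading;
using System.Threading.Tasks;

public class PullRequestHolder
{
    // Completes with true if a new message arrived while the request was held,
    // or false if the hold timed out with nothing new.
    private readonly TaskCompletionSource<bool> _tcs = new TaskCompletionSource<bool>();

    // Called by the broker when it has no message for the consumer right now.
    public async Task<bool> HoldAsync(TimeSpan timeout)
    {
        using (var cts = new CancellationTokenSource(timeout))
        using (cts.Token.Register(() => _tcs.TrySetResult(false)))
        {
            return await _tcs.Task; // suspended until a new message arrives or the timeout fires
        }
    }

    // Called by the broker when a new message arrives for the held queue.
    public void NotifyNewMessage() => _tcs.TrySetResult(true);
}

// Usage sketch: if HoldAsync returns false after e.g. 15 seconds, reply "no new
// message" and let the consumer re-issue its pull request; if true, reply with
// the newly arrived messages.
// var hasNew = await holder.HoldAsync(TimeSpan.FromSeconds(15));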
equeue also implements automatic throttling (flow control) on the consumer side. If there are many messages on the broker, that is, producers produce faster than consumers consume, messages pile up on the broker. In that case, whenever the consumer pulls there will always be new messages, but the consumer cannot process them all in time. So the equeue framework has a built-in flow control design: you can configure an upper bound on the number of messages buffered on the consumer side, say 3000; once the count exceeds this (configurable) threshold, equeue makes the consumer pull at a slower rate, for example delaying the next pull by a configurable number of milliseconds. This achieves flow control in a simple way.
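A minimal sketch of this consumer-side flow control, under the assumption that the consumer tracks how many pulled-but-unprocessed messages it is holding; the names and thresholds are illustrative, not equeue's actual settings:

using System.Threading;
using System.Threading.Tasks;

public class PullThrottle
{
    private int _pendingMessageCount;            // pulled but not yet processed
    private readonly int _flowControlThreshold;  // e.g. 3000, configurable
    private readonly int _delayMilliseconds;     // e.g. 50, configurable

    public PullThrottle(int flowControlThreshold, int delayMilliseconds)
    {
        _flowControlThreshold = flowControlThreshold;
        _delayMilliseconds = delayMilliseconds;
    }

    public void OnMessagesPulled(int count) => Interlocked.Add(ref _pendingMessageCount, count);
    public void OnMessageProcessed() => Interlocked.Decrement(ref _pendingMessageCount);

    // Called before issuing the next pull request: if too many messages are
    // buffered locally, wait a little before pulling again.
    public async Task WaitIfNecessaryAsync()
    {
        if (Volatile.Read(ref _pendingMessageCount) > _flowControlThreshold)
        {
            await Task.Delay(_delayMilliseconds);
        }
    }
}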
6. How to deal with the failure of message consumption
In any message queue, a consumer may throw an exception while handling a message; in equeue this is what counts as a consumption failure. From the discussion of consumption progress above we know that, for a given consumer group, each queue has exactly one consumption progress. Once messages have been pulled to the consumer locally, they can be consumed in two ways: parallel consumption or sequential (linear) consumption.
Parallel consumption means that if, say, 32 messages are pulled at once, equeue starts a task (i.e. uses multiple threads) for each message and consumes them in parallel.
Sequential consumption means messages are consumed one after another on a single dedicated thread, in the same order as they were pulled.
For sequential consumption, what happens if an earlier message fails, i.e. its handler throws an exception? The solution that comes to mind is to retry, say, three times; but what if it still fails? We cannot let that one message block all the messages behind it. Let's first look at how rocketmq handles this: when consumption fails, it does not keep retrying locally; instead it sends the message to a retry queue on the broker, and once that send succeeds it moves on to the next message. Once the message is in the retry queue it will not be lost, so it will eventually be consumed. But what if sending to the broker's retry queue also fails? In practice that should not happen; if it does, the broker is basically down.
In rocketmq, when that happens the failed message is put into a local in-memory queue and consumed slowly there, while consumption of later messages continues. You are probably wondering how the queue's offset gets updated in this scheme. This is where the "sliding door" concept comes in. When a batch of messages is pulled from the broker to the consumer, they are not consumed immediately; they are first put into a local SortedDictionary, whose key is the message's position in the queue and whose value is the message itself. Because the dictionary is sorted, the message with the smallest key is the earliest one and the message with the largest key is the latest one. Then, whether consumption is parallel or sequential, whenever a message has been consumed it is removed from the SortedDictionary. Each removal returns the smallest key currently left in the SortedDictionary, and if that key has moved forward compared with last time, the queue's offset is updated. Because a removal always reports the smallest remaining key, if the current offset is 3 and the message at offset 4 failed and was therefore not removed, then even if the messages at offsets 5 and 6 are consumed successfully, the smallest key does not move forward as long as offset 4 is still there. That is the sliding door: like a train running along a track, the offset only ever moves forward. Since we always want the offset (the smallest key) to keep advancing, we do not want the sliding door stalled by one earlier failed message, which is exactly why failed messages are handed off to the retry queue.
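A minimal sketch of the sliding-door bookkeeping described above (the class and method names are illustrative, not equeue's actual implementation): pulled messages go into a SortedDictionary, and removing a consumed message reports the offset that is now safe to commit.

using System.Collections.Generic;

public class ProcessQueue
{
    private readonly object _lock = new object();
    // queueOffset -> message body; sorted, so Keys are in ascending order.
    private readonly SortedDictionary<long, byte[]> _pendingMessages = new SortedDictionary<long, byte[]>();
    private long _maxRemovedOffset = -1;

    // Called when a batch of messages has been pulled from the broker.
    public void AddMessage(long queueOffset, byte[] body)
    {
        lock (_lock) { _pendingMessages[queueOffset] = body; }
    }

    // Called when one message has been consumed successfully. Returns the offset
    // that is safe to commit: everything before the smallest key still pending.
    public long RemoveMessage(long queueOffset)
    {
        lock (_lock)
        {
            _pendingMessages.Remove(queueOffset);
            if (queueOffset > _maxRemovedOffset) _maxRemovedOffset = queueOffset;
            if (_pendingMessages.Count > 0)
            {
                // The sliding door: the earliest still-pending message blocks the commit point.
                return GetSmallestPendingOffset() - 1;
            }
            return _maxRemovedOffset; // nothing pending: everything removed so far is done
        }
    }

    private long GetSmallestPendingOffset()
    {
        foreach (var key in _pendingMessages.Keys) return key; // first key is the smallest
        return -1;
    }
}

The value returned by RemoveMessage is what gets recorded as the queue's local consumption progress and, as described next, reported to the broker periodically rather than on every message.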
One more point: a successful consumption does not immediately tell the broker to update the offset, because that would perform badly and keep the broker very busy. A better approach is to update the queue's offset in local memory first and then report the latest offset to the broker periodically, for example every 5 seconds. Because of this asynchrony, a message may again be consumed more than once: the offset on the broker always lags behind actual consumption, by up to that 5-second window. So, once more, the application must handle message idempotency itself. For example, the enode framework performs idempotent handling of every command message inside the framework, so applications built on enode do not need to worry about command idempotency themselves.
For offset updates, parallel consumption and sequential consumption work the same way: parallel consumption simply has multiple threads removing successfully consumed messages from the SortedDictionary at the same time, while sequential consumption has a single thread doing it. So the operations on the SortedDictionary must be made thread-safe through locking; currently a ReaderWriterLockSlim is used to make the relevant method calls thread-safe. Interested readers can look at the code.
Finally, one important caveat: equeue does not yet implement sending failed messages back to a retry queue on the broker. This feature will be considered in the future.
7. How to solve the broker single point of failure
This problem is more complicated, and equeue currently does not address it: there is no master-slave or master-master support for the broker, so it is a single point. A mature message queue should, as far as possible, ensure that when one broker goes down another broker can take over, to keep the message queue server reliable. But the problem is very complex. rocketmq currently implements only master-slave: once the master dies, producers can no longer send messages to it, because the slave broker is read-only and cannot accept new messages directly; consumers can only pull from the slave.
This concludes the walkthrough of how to write an open source distributed message queue, equeue, in C#. I hope it has answered your questions; combining the theory above with practice will help even more, so go and try it out.