2025-01-17 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 report
This article summarizes the core knowledge of RabbitMQ: the AMQP concepts, reliable delivery, idempotency, confirm and return mechanisms, rate limiting, TTL and dead letter queues, and clustering. I hope you read it carefully and get something out of it!
Preface
RabbitMQ is based on the AMQP protocol; because AMQP is a language-agnostic wire protocol, clients written in different languages can exchange messages with each other.
Core concepts of AMQP protocol
Server: also known as the broker; it accepts client connections and implements the AMQP entity services.
Connection: a network connection between the application and a specific broker.
Channel: a network channel within a connection; almost all operations are performed on a channel, which is where messages are read and written. A client can open multiple channels, each representing one session task.
Message: the data passed between the server and the application, made up of properties and a body. The properties modify the message (priority, delay, and other advanced features); the body is the message payload.
Virtual host: a virtual host used for logical isolation; it is the top level of message routing. A virtual host can contain several exchanges and queues, but two exchanges (or two queues) in the same virtual host cannot share a name.
Exchange: receives messages and forwards them to the bound queues according to the routing key.
Binding: a virtual connection between an exchange and a queue; a binding can carry a routing key.
Routing key: a routing rule; the virtual host uses it to determine how to route a message.
Queue: the message queue, where messages are stored.
Exchange
Exchange types: direct, topic, fanout, and headers. An exchange also has a durability attribute (set it to true if the exchange must survive a broker restart) and an auto-delete attribute (when the last queue bound to the exchange is deleted, the exchange is deleted as well).
Direct Exchange: a message sent to a direct exchange is forwarded to the queue whose binding key exactly matches the message's routing key. The default exchange is a direct exchange implicitly bound to every queue with the queue name as the binding key, so a message can reach a queue simply by using the queue name as the routing key; otherwise the producer's routing key must exactly match the consumer's binding key.
Topic Exchange: a message sent to a topic exchange is forwarded to every queue whose binding pattern matches the message's routing key; the queue is bound with a topic pattern and the exchange performs a fuzzy match against it. The fuzzy match uses wildcards: "#" matches zero or more words and "*" matches exactly one word. For example, the binding "log.#" matches "log.info.test", while "log.*" matches only single-word suffixes such as "log.error".
Fanout Exchange: routing keys are ignored; every message sent to the exchange is delivered to every queue bound to it. Fanout forwarding is the fastest.
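The topic wildcard rules above can be modeled with a small regex translation. This is a simplified sketch for illustration only (it treats "#" as one or more words, ignoring the zero-word edge case that real AMQP also allows, e.g. "log.#" matching "log" itself):

```java
import java.util.regex.Pattern;

public class TopicMatcher {
    // '*' -> exactly one word, '#' -> one or more words (simplified)
    static boolean matches(String binding, String routingKey) {
        String rx = binding
                .replace(".", "\\.")   // literal dots between words
                .replace("*", "[^.]+") // exactly one word
                .replace("#", ".+");   // one or more words
        return Pattern.matches(rx, routingKey);
    }

    public static void main(String[] args) {
        System.out.println(matches("log.#", "log.info.test")); // true
        System.out.println(matches("log.*", "log.error"));     // true
        System.out.println(matches("log.*", "log.info.test")); // false
    }
}
```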
How do we guarantee 100% delivery of a message? What does reliable delivery mean on the production side?
Ensure the message is sent out successfully
Ensure the MQ broker receives the message successfully
The sender receives the broker's acknowledgement of the message
A compensation mechanism completes delivery for messages that fail
Reliable delivery guarantee scheme
Persist the message to the database and mark its status.
In high-concurrency scenarios every extra database operation per session costs performance, so a delay queue can be used to save one database operation.
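The persist-and-mark scheme above can be sketched in-process. All names here (Status, messageTable, the method names) are illustrative; the real broker publish and the periodic re-send job are omitted:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class ReliableProducer {
    enum Status { SENDING, CONFIRMED }
    final Map<String, Status> messageTable = new ConcurrentHashMap<>();

    void send(String msgId) {
        messageTable.put(msgId, Status.SENDING); // 1. persist before publishing
        // channel.basicPublish(...)             // 2. actual broker call omitted
    }

    void onBrokerAck(String msgId) {
        messageTable.put(msgId, Status.CONFIRMED); // 3. broker confirmed receipt
    }

    // 4. a scheduled job would re-publish anything still SENDING past a timeout
    List<String> unconfirmed() {
        return messageTable.entrySet().stream()
                .filter(e -> e.getValue() == Status.SENDING)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

The scan in step 4 is exactly the compensation mechanism the checklist above calls for: anything never confirmed gets resent.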
Message idempotency
An operation is idempotent if performing it once or a thousand times produces the same result. For example, a thousand "update count-1" operations executed in single-threaded mode always yield the same result, so that update is idempotent; under concurrency, without thread-safe handling, a thousand updates may not yield the same result, so the concurrent update is not idempotent. Applied to message queues, idempotency means that even if we receive the same message several times, the effect is the same as consuming it once.
How to avoid repeated consumption of messages under high concurrency
Unique id plus a fingerprint code, deduplicated through a database unique (primary) key. Advantage: simple to implement. Disadvantage: database writes become a bottleneck under high concurrency.
Use the atomicity of Redis to implement the check. Using Redis for idempotency raises questions of its own:
If the data also lands in the database, how do we keep the database and the cache consistent (how can the Redis write and the database write succeed or fail together)?
If it does not land in the database, how long should it live in Redis? What is the synchronization strategy between Redis and the database? And is a write to the cache guaranteed to succeed 100% of the time?
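The unique-id approach can be sketched without any external store. In this hedged sketch a concurrent set stands in for the database unique key or a Redis SETNX; add() is atomic, so under concurrency only the first delivery of a given id runs the business logic:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class IdempotentConsumer {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();
    private final AtomicInteger processed = new AtomicInteger();

    void handle(String messageId) {
        if (!seen.add(messageId)) {
            return; // duplicate delivery of the same id: skip
        }
        processed.incrementAndGet(); // business logic runs once per id
    }

    int processedCount() { return processed.get(); }
}
```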
Confirm and Return message mechanisms
Understanding the confirm mechanism
Confirm means that after the producer delivers a message, the broker replies to the producer if it received the message; the producer uses this reply to determine whether the broker got the message.
How to implement confirm:
Enable confirm mode on the channel: channel.confirmSelect()
Add a listener to the channel with addConfirmListener; it hears both success (ack) and failure (nack) results, and on failure you resend the message or log the result.
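The bookkeeping behind such a listener can be modeled without a broker. In this sketch outstanding messages are keyed by publish sequence number (nextSeqNo stands in for channel.getNextPublishSeqNo()); an ack with multiple=true confirms every sequence number up to and including the given one:

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class ConfirmTracker {
    private final ConcurrentNavigableMap<Long, String> outstanding = new ConcurrentSkipListMap<>();
    private long nextSeqNo = 1; // stands in for channel.getNextPublishSeqNo()

    long publish(String body) {
        long seq = nextSeqNo++;
        outstanding.put(seq, body); // remember until the broker confirms
        return seq;
    }

    void handleAck(long seqNo, boolean multiple) {
        if (multiple) {
            outstanding.headMap(seqNo, true).clear(); // confirm all up to seqNo
        } else {
            outstanding.remove(seqNo);
        }
    }

    int unconfirmedCount() { return outstanding.size(); }
}
```

On a nack, whatever is still in the map is the candidate set for resending or logging.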
Return message mechanism
The return mechanism handles messages that could not be routed. Normally our producer sends a message to a queue by specifying an exchange and a routing key, and consumers listening on that queue process it.
In some cases, however, the exchange does not exist when we send the message, or no route matches the specified routing key. If we need to hear about such unreachable messages, we use a ReturnListener.
If mandatory is set to true, the listener receives the unroutable message and can process it; if it is set to false, the broker deletes the message automatically.
Consumer-side listening and rate limiting
Suppose we have a scenario: a RabbitMQ server holds tens of thousands of unconsumed messages, and then we start a consumer client. A huge volume of messages is pushed at it in an instant, but the consumer cannot process that much data at once.
This can crash your service. The same problem appears whenever producer and consumer capacity are mismatched: under high concurrency the production side generates a flood of messages that the consumer side cannot keep up with.
RabbitMQ provides QoS (quality of service): provided automatic acknowledgement is off, the broker will not push new messages while a certain number of messages (set via the consumer or channel) remain unacknowledged.
The method is void basicQos(int prefetchSize, int prefetchCount, boolean global).
prefetchSize: the size limit for a single message; 0 means no limit, which is the usual setting.
prefetchCount: a fixed value telling RabbitMQ not to push more than N unacknowledged messages to a consumer at the same time; once N messages are unacked, the consumer blocks until some of them are acked.
global: whether the settings above apply at the channel level (true) or the consumer level (false).
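The prefetchCount behavior can be illustrated with a toy model of basicQos(0, prefetchCount, false). Class and method names here are illustrative, not a real API: the broker stops pushing once the consumer holds prefetchCount unacked messages and resumes as acks arrive:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class PrefetchModel {
    private final int prefetchCount;
    private final Queue<String> ready = new ArrayDeque<>(); // messages on the broker
    private int unacked = 0;

    PrefetchModel(int prefetchCount) { this.prefetchCount = prefetchCount; }

    void enqueue(String msg) { ready.add(msg); }

    // Push as many messages as the prefetch window allows; returns how many.
    int deliver() {
        int delivered = 0;
        while (unacked < prefetchCount && !ready.isEmpty()) {
            ready.poll();
            unacked++;
            delivered++;
        }
        return delivered;
    }

    void ack() { if (unacked > 0) unacked--; } // each ack frees one slot
}
```

With prefetchCount = 2 and five queued messages, only two are delivered at first; every ack opens the window for exactly one more.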
Consumer ack and requeueing
When consumption fails because of a business exception, we can log the message and compensate later (optionally with a maximum retry count).
For severe problems such as a server crash, manual ack is needed to guarantee that consumption succeeds.
Returning messages to the queue
Requeueing means redelivering a message to the broker when it was not processed successfully.
In practice, requeueing is generally not enabled.
TTL queues / messages
TTL: time to live.
RabbitMQ supports per-message expiration, which can be specified when the message is sent.
It also supports a per-queue expiration time: once a message has been in the queue longer than the queue's timeout configuration, it is cleared automatically.
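The per-queue variant is configured through queue arguments at declaration time. A minimal sketch (the values are examples): x-message-ttl expires every message in the queue, and x-expires deletes the queue itself after a period of disuse. Per-message TTL, by contrast, is set at send time via the message properties' expiration field.

```java
import java.util.HashMap;
import java.util.Map;

public class TtlArgs {
    static Map<String, Object> queueArgs() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 30000); // each message in this queue expires after 30 s
        args.put("x-expires", 600000);    // the queue is deleted after 10 min of disuse
        return args;
    }
}
```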
Dead letter queue
Dead letter exchange: DLX (Dead-Letter-Exchange)
With a DLX configured, when a message becomes a dead letter in a queue, it is republished to another exchange; that exchange is the DLX.
A message becomes a dead letter in several situations:
The message is rejected (basic.reject / basic.nack) with requeue=false (not returned to the queue)
The message's TTL expires
The queue reaches its maximum length
A DLX is a normal exchange, no different from any other; it can be specified on any queue, which in practice means setting a property of that queue. When a dead letter appears in the queue, RabbitMQ automatically republishes the message to the DLX, which routes it on to another queue. That queue can be monitored and its messages handled accordingly, which makes up for the immediate parameter RabbitMQ used to support.
The setting of dead letter queue
Set Exchange and Queue, and then bind
Exchange: dlx.exchange (custom name)
Queue: dlx.queue (custom name)
Routing key: # ("#" matches any routing key, so every dead letter is routed)
Then the switch, queue, and binding are declared normally, but we add a parameter to the queue:
arguments.put("x-dead-letter-exchange", "dlx.exchange");
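Put together, the arguments set on the business queue might look like the following sketch, so that its dead letters are republished to the DLX declared above ("dlx.key" is an illustrative routing-key override, not from the original setup):

```java
import java.util.HashMap;
import java.util.Map;

public class DlxArgs {
    static Map<String, Object> businessQueueArgs() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "dlx.exchange"); // where dead letters go
        args.put("x-dead-letter-routing-key", "dlx.key");   // optional key override
        return args;
    }
}
```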
RabbitMQ cluster mode
Active/standby mode (also called warren mode): builds a highly available RabbitMQ pair; it is easy to set up and adequate when concurrency and data volume are small. (It differs from master-slave replication: in master-slave mode the master node handles writes and the slave nodes handle reads, whereas in warren mode the standby node serves no reads or writes at all and only keeps a backup.) If the master node goes down, the standby node automatically takes over and provides service.
Cluster mode: the classic approach is mirror mode, which guarantees that no data is lost and is relatively easy to implement.
A mirrored queue is RabbitMQ's high-availability data solution; its main job is data synchronization, generally across 2-3 nodes (for a 100% message-reliability solution, usually 3 nodes).
The Federation plug-in is a high-performance plug-in that transmits messages between brokers without building a cluster. Federation can transfer messages between brokers or clusters; the two sides of a link can use different users and virtual hosts, and even different versions of Erlang or RabbitMQ. The plug-in uses AMQP as its communication protocol and tolerates discontinuous transmission.
A federation exchange can be seen as the downstream actively pulling messages from the upstream, but it does not pull everything: only an exchange with explicitly defined bindings on the downstream, that is, with an actual physical queue to receive the messages, pulls messages from upstream to downstream.
Broker-to-broker communication uses AMQP; the downstream aggregates its binding relationships and sends bind/unbind commands to the upstream exchange.
Therefore, a federation exchange only receives messages for which there are subscriptions.
HAProxy is proxy software providing high availability, load balancing, and proxying for TCP (layer 4) and HTTP (layer 7) applications. It supports virtual hosts and is a free, fast, and reliable solution. HAProxy is especially suitable for heavily loaded web sites, which usually require session persistence or layer-7 processing. Running on today's hardware, HAProxy can support tens of thousands of concurrent connections, and its mode of operation makes it easy and safe to integrate into your current architecture while keeping your web servers off the public network.
Why is HAProxy performance so good?
A single-process, event-driven model significantly reduces context-switching overhead and memory footprint.
Where available, the single-buffering mechanism completes reads and writes without copying any data, saving many CPU cycles and much memory bandwidth.
With Linux 2.6 (>= 2.6.27.19), HAProxy achieves zero-copy forwarding through the splice() system call, and on Linux 3.5 and above it can achieve zero-copy startup.
The memory allocator performs instant allocation from fixed-size memory pools, significantly reducing the time it takes to create a session.
Tree storage: HAProxy uses the elastic binary tree its author developed years ago, achieving O(log N) overhead for maintaining timer commands, keeping the run queue, and managing the polling and least-connection queues.
Keepalived
Keepalived implements high availability mainly through the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol; its purpose is to solve the single point of failure of static routing, so that the network as a whole keeps running when individual nodes go down. Keepalived therefore has the ability to configure and manage LVS and to health-check the nodes behind LVS, and it can also provide high availability for other system network services.
The role of Keepalived
Manage LVS load balancing software
To realize the health check of LVS cluster nodes
High availability as a system network service (failover)
How Keepalived achieves high availability
Failover between a pair of Keepalived services is achieved through VRRP (Virtual Router Redundancy Protocol).
While the Keepalived service is working normally, the Master node keeps sending heartbeat messages (by multicast) to the Backup node to tell it that the Master is still alive. When the Master node fails, the heartbeats stop; the Backup node can no longer detect the Master's heartbeat, so it invokes its own takeover program to take over the Master node's IP resources and services.
When the Master node recovers, the Backup node releases the IP resources and services it took over during the failure and returns to its original standby role.
This concludes the summary of RabbitMQ. Thank you for reading!