2025-02-25 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains the main application scenarios of Java message queues. The content is simple, clear, and easy to follow.
What is a queue
A queue (Queue) is a common data structure whose defining feature is first-in, first-out (First In, First Out, or FIFO). As one of the most basic data structures, queues are used everywhere; waiting in line at a railway station to buy tickets is a familiar example. The following figure represents a queue:
Here A1, A2, ..., An represent the data in the queue. Data enters the queue at the tail and leaves the queue from the head.
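The FIFO behavior described above can be sketched with the JDK's built-in ArrayDeque (an in-memory queue, not message middleware); the class and method names below are illustrative only:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    // Enqueues the given items, then dequeues them all; returns the dequeue order.
    static String drain(String... items) {
        Queue<String> queue = new ArrayDeque<>();
        for (String item : items) {
            queue.offer(item); // data enters at the tail of the queue
        }
        StringBuilder order = new StringBuilder();
        while (!queue.isEmpty()) {
            order.append(queue.poll()); // data leaves from the head of the queue
        }
        return order.toString();
    }

    public static void main(String[] args) {
        // A1, A2, A3 come out in exactly the order they went in (FIFO).
        System.out.println(drain("A1", "A2", "A3")); // prints A1A2A3
    }
}
```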
What is a message queue
A message queue (Message Queue) is a distributed message container that uses a queue as its underlying data structure. It can be used for communication between different processes and applications, and is also known as message middleware.
Commonly used message queues include ActiveMQ, RabbitMQ, Kafka, RocketMQ, and Redis.
What's the difference between a message queue and a queue?
The main difference is one of terminology: in a message queue, the side that enqueues messages is called the producer, and the side that dequeues them is called the consumer.
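The producer/consumer relationship can be sketched with the JDK's LinkedBlockingQueue standing in for a real broker (a minimal single-process sketch; the method name roundTrip is made up for this example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    // The producer puts messages on the queue; the consumer takes them off in FIFO order.
    static List<String> roundTrip(List<String> messages) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Producer thread: enqueues each message (put blocks if the queue were bounded and full).
        Thread producer = new Thread(() -> {
            for (String m : messages) {
                try {
                    queue.put(m);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        // Consumer (the main thread here): take() blocks until a message arrives.
        List<String> consumed = new ArrayList<>();
        for (int i = 0; i < messages.size(); i++) {
            consumed.add(queue.take());
        }
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        // Same order in, same order out.
        System.out.println(roundTrip(List.of("order-1", "order-2"))); // prints [order-1, order-2]
    }
}
```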
Message queue application scenarios
Message queues are used in a wide range of scenarios. Here are a few common ones.
1. Distributed scenarios
1.1 Asynchronous processing
Generally speaking, the programs we write execute sequentially (that is, synchronously). Take placing an order in an e-commerce system as an example; the order of execution is as follows:
1. The user submits the order.
2. Points are added after the order is completed.
3. An SMS notifies the user of the points change.
It can be represented by the following flowchart:
If each service takes one second when executed in this order, the client waits 3 seconds in total. For users, 3 seconds is clearly unacceptable, so how do we solve this? We can handle the later steps asynchronously. Take a look at the following flowchart:
Here the points service and SMS service run asynchronously on separate threads, so the client completes in only 1 second. However, this approach brings another problem: reduced concurrency. Because both the points service and the SMS service need threads spawned inside the order service, more threads are consumed, which reduces the concurrency the order service can offer to clients and may push the client's actual order-submission time beyond 1 second. So how do we solve the problems this kind of asynchrony introduces? Use a message queue. Take a look at the flowchart below:
In this flow we add a message queue. First, the client submits the order, and the order service writes the order to the message queue. The points service and the SMS service then each consume the message from the queue. The order service no longer needs to spawn asynchronous threads, and the client genuinely completes in 1 second.
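In a real broker this "two services consume the same message" pattern is a topic (Kafka) or a fanout exchange (RabbitMQ). As a minimal in-memory sketch of the same idea, we can give each subscriber its own queue and have the publish step deliver a copy to each; all names here are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderFanOutDemo {
    // One queue per downstream service, mimicking two subscribers on the same topic.
    static final BlockingQueue<String> pointsQueue = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> smsQueue = new LinkedBlockingQueue<>();

    // The order service publishes once; the "broker" delivers a copy to each subscriber.
    static void publishOrder(String orderId) {
        pointsQueue.offer(orderId);
        smsQueue.offer(orderId);
    }

    public static void main(String[] args) throws InterruptedException {
        // The client's request can return as soon as the message is written.
        publishOrder("order-42");

        // The points and SMS services consume independently, at their own pace.
        System.out.println("points service got " + pointsQueue.take());
        System.out.println("sms service got " + smsQueue.take());
    }
}
```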
1.2 Application decoupling
Let's take the e-commerce system as an example and take a look at the following flow chart:
The business logic in the figure above: the client initiates a request to create an order. When creating the order, we first need to check the inventory and then deduct it, so the order system and the inventory system are tightly coupled. If the inventory system goes down at this point, the order system, which depends on it, becomes unusable as well. So how do we solve this?
Take a look at the following flowchart for using message queues:
In this flow we add a message queue. First, the client initiates a request to create an order, and the order message is written to the message queue. The inventory system then subscribes to the message queue and updates the inventory asynchronously. If the inventory system goes down, the order system can still respond to client requests normally because it no longer depends on the inventory system directly. In this way, the applications are decoupled.
1.3 Traffic peak shaving
In highly concurrent systems, at peak access times sudden traffic floods into the application system, and highly concurrent write operations in particular can paralyze the database server at any moment, leaving it unable to provide service.
Introducing a message queue can reduce the impact of burst traffic on the application system. The message queue acts like a reservoir: it intercepts the flood upstream and releases it downstream at a reduced peak flow, thereby preventing "flood damage".
The most common example here is a flash sale (seckill) system. The instantaneous traffic of a flash sale is very high; if all of it hits the flash sale system directly, it will crush the system. Introducing a message queue effectively buffers the sudden traffic and achieves the "peak shaving" effect.
Let's use the flash sale scenario to describe traffic peak shaving. First, take a look at the following flowchart:
In this flow, we call the flash sale service the upstream service, and the order service, inventory service, and balance service the downstream services. The client initiates a flash sale request; after receiving it, the flash sale service creates the order, updates the inventory, and deducts the balance. This is the basic flash sale business flow.
If the downstream services can only handle 1,000 concurrent requests while the upstream service can handle 10,000, and the clients actually send 10,000 requests, this exceeds the concurrency the downstream services can handle and will bring them down. A message queue can be added at this point to prevent that. Take a look at the following flowchart with a message queue added:
With the message queue in place, after the flash sale service receives the 10,000 requests sent by the clients, it writes them to the message queue; the downstream services then subscribe to the flash sale requests in the queue and perform their own business logic.
To make this concrete: the upstream service can still handle 10,000 concurrent requests, but the downstream services can only handle 1,000, so we allow at most 1,000 requests to be stored in the message queue. Of the 10,000 concurrent requests the flash sale service receives, only 1,000 can be enqueued; the excess requests are not stored and are returned directly to the client with the prompt "the system is busy, please wait!". This is the traffic peak shaving scenario: the queue capacity is determined by the concurrency the downstream services can handle. It ensures that the downstream services respond normally instead of going down, and improves the availability of the system.
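The "reject when the queue is full" behavior described above can be sketched with a bounded ArrayBlockingQueue, whose non-blocking offer() returns false when capacity is reached (a toy capacity of 3 stands in for the article's 1,000; the class and method names are made up for this sketch):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PeakShavingDemo {
    // Capacity matches what the downstream services can absorb (1,000 in the article; 3 here).
    static final BlockingQueue<String> seckillQueue = new ArrayBlockingQueue<>(3);

    // offer() is non-blocking: it returns false when the queue is full,
    // which maps to the "system is busy" response returned to the client.
    static String handleRequest(String requestId) {
        return seckillQueue.offer(requestId)
                ? "accepted"
                : "the system is busy, please wait!";
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 5; i++) {
            System.out.println("request-" + i + ": " + handleRequest("request-" + i));
        }
        // Requests 1-3 are queued for the downstream services; 4 and 5 are rejected immediately.
    }
}
```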
2. Log scenario: optimizing log transmission
To make programs robust, we usually add various logging functions, such as error logs and operation logs. Take a look at the following flowchart:
The flowchart above shows synchronous logging. Logging synchronously increases the time taken by the whole flow and can easily bring down the business system (if the database is damaged, writing logs to it will cause errors). We can use a message queue to optimize log transmission. Take a look at the following flowchart:
After adding the message queue, the time taken by the system is shortened, and the logging function is decoupled from the application system.
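The asynchronous-logging idea can be sketched as a queue between the business code and a dedicated writer: business code only enqueues and never waits on the (possibly slow or broken) log store. This is an in-memory sketch; the List stands in for a log database, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncLogDemo {
    static final BlockingQueue<String> logQueue = new LinkedBlockingQueue<>();
    static final List<String> persisted = new ArrayList<>(); // stand-in for the log database

    // Business code only enqueues; it returns immediately without touching the log store.
    static void log(String line) {
        logQueue.offer(line);
    }

    public static void main(String[] args) throws InterruptedException {
        log("order created");
        log("points added");

        // A separate writer thread drains the queue and persists the entries.
        Thread writer = new Thread(() -> logQueue.drainTo(persisted));
        writer.start();
        writer.join();

        System.out.println(persisted);
    }
}
```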
3. Instant messaging scenario: chat room
The core function of a message queue is sending and receiving messages, and it comes with an efficient communication mechanism, so it is well suited to message communication.
We can build point-to-point chat systems on top of message queues, or broadcast systems that deliver messages to a large number of recipients.
Thank you for reading. That covers the application scenarios of Java message queues. After studying this article, you should have a deeper understanding of where message queues apply; specific usage still needs to be verified in practice.