2025-03-10 Update From: SLTechnology News & Howtos
Shulou(Shulou.com)06/02 Report--
This article explains Redis pipelining (pipeline) in detail through example analysis. It is very practical, so it is shared here as a reference; I hope you get something out of it after reading.
Before explaining the pipeline, let's first look at how a Redis interaction works. Each interaction is initiated by the client and received by the server, so a sequence of commands looks like the following figure:
For each command, the client sends a request and the server sends back a response. This process is more involved than it looks: the network must deliver the data quickly without losing packets, so every request-response cycle costs a full network round trip. How can we improve this?
As mentioned above, the round trips are expensive. So can we send many commands in a single request and receive all the replies at once? Of course: this is exactly how Redis pipelines improve performance.
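To see how much those round trips cost, here is a back-of-envelope calculation. The 1 ms round-trip time and the command count are assumed figures for illustration, not measurements from the article:

```python
rtt_ms = 1.0          # assumed network round-trip time (illustrative)
n_commands = 10_000   # assumed batch size (illustrative)

# Sequential: each command waits for its own reply before the next is sent.
sequential_ms = n_commands * rtt_ms

# Pipelined: one batched write, one batched read (ignoring server
# processing time and buffer limits), so roughly one round trip total.
pipelined_ms = rtt_ms
```

Under these assumptions the sequential version spends about 10 seconds purely on the network, while the pipelined version spends about 1 millisecond.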
Redis clients and servers communicate over the TCP protocol, which has long supported pipelining. The Redis pipeline (Pipeline) is not a feature provided by the Redis server; it is essentially implemented by the client and requires no special support from the server.
It is worth noting that pipelined commands are buffered in memory on both ends, so there is a practical upper limit to how much you should batch in one pipeline; use it in moderation.
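Since pipelining is purely a client-side technique, a client can batch commands simply by concatenating their wire encodings and issuing a single write. Here is a minimal sketch of the RESP encoding in Python; `encode_command` is a hypothetical helper written for illustration, not part of any Redis client library:

```python
def encode_command(*parts: str) -> bytes:
    """Encode one command in RESP: an array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        b = p.encode()
        out.append(f"${len(b)}\r\n".encode() + b + b"\r\n")
    return b"".join(out)

# A "pipeline" is just several encoded commands sent in one write:
batch = b"".join(
    encode_command("SET", f"key:{i}", f"value:{i}") for i in range(3)
)
# sock.sendall(batch)  # then read all three replies back-to-back
```

The server sees an ordinary stream of commands; only the client's send/receive ordering has changed.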
Pipeline stress test
Next, let's test the performance of the pipeline. Redis ships with a stress-testing tool, redis-benchmark, which we can use for pipeline testing.
First, stress-test a plain SET command. QPS is about 50,000/s:
> redis-benchmark -t set -q
SET: 53975.05 requests per second
Now add the pipeline option. The -P parameter sets the number of requests sent in parallel through a single pipe. With -P 2, QPS reaches about 90,000/s:
> redis-benchmark -t set -P 2 -q
SET: 94240.88 requests per second
With -P 3, QPS reaches about 100,000/s:
> redis-benchmark -t set -P 3 -q
SET: 102354.15 requests per second
However, if we keep increasing the -P parameter, we find that QPS stops climbing. Why is that?
Because CPU processing capacity has become the bottleneck here: Redis's single thread has pushed a CPU core to 100%, so throughput cannot rise any further.
In-depth understanding of the nature of pipeline
Next, let's take an in-depth analysis of the process of requesting interaction.
The figure above is a complete flow chart of one request-response interaction:
1. The client process calls write to copy the message into the send buffer that the operating system kernel allocated for the socket.
2. The client kernel hands the buffered data to the network card, which sends it through gateways and routers to the server's network card.
3. The server kernel places the data from the network card into the receive buffer allocated for the socket.
4. The server process calls read to take the message out of the receive buffer and process it.
5. The server process calls write to copy the response into the send buffer the kernel allocated for the socket.
6. The server kernel hands the buffered data to the network card, which sends it through gateways and routers back to the client's network card.
7. The client kernel places the data from the network card into the receive buffer allocated for the socket.
8. The client process calls read to take the message out of the receive buffer and hand it to the upper business logic.
9. End.
At first we might assume that write does not return until the other side has received the message, but that is not the case. write only copies the data into the send buffer of the local operating system kernel and then returns; the kernel asynchronously pushes the data to the target machine. Only if the send buffer is full does write have to wait for space to free up, and that wait is the only real IO cost of a write. Likewise, we might assume that read pulls data from the target machine, but in fact read only takes data out of the receive buffer of the local operating system kernel.
So for a simple request like value = redis.get(key), the write costs almost nothing: it returns as soon as the data reaches the send buffer. The read is the expensive part, because it must wait for the message to travel across the network to the destination machine, be processed, and for the response to travel back into the local kernel's receive buffer before it can return. That is the true cost of one network round trip. With a pipeline, the consecutive writes cost almost nothing; the first read pays the cost of one network round trip, and by then all of the responses have already arrived in the kernel's receive buffer, so the subsequent reads fetch their results directly from the buffer and return instantly.
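The buffer behavior described above can be observed with a toy experiment: over a local socket pair, the "client" issues several writes that all complete immediately into the kernel's send buffer, and a single echo round trip carries every response back. This is only an illustrative loopback sketch, not a Redis connection:

```python
import socket

client, server = socket.socketpair()

# Pipelined writes: all three requests go out before any reply is read.
requests = [f"req-{i}\n".encode() for i in range(3)]
client.sendall(b"".join(requests))  # returns once data is in the send buffer

# Toy "server": drain its receive buffer and echo everything back.
echoed = server.recv(4096)
server.sendall(echoed)

# One read now drains all three responses from the client's receive buffer.
responses = client.recv(4096).decode().splitlines()

client.close()
server.close()
```

The client never blocked between its writes; only the single read round trip paid any network-style latency, which is exactly the reordering a Redis pipeline exploits.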
This is the essence of the pipeline: it is not a server feature, but a huge performance improvement the client obtains simply by reordering its reads and writes.
This is the end of this article on Redis pipelining. I hope the content above is helpful and that you learned something; if you found the article good, please share it for more people to see.