2025-02-24 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
Many people are unfamiliar with how to use pipelining to accelerate queries in Redis, so this article summarizes the topic with detailed content and clear steps. I hope you find it a useful reference.
Request/Response protocols and RTT
Redis is a TCP server using the client-server model, implementing what is called a Request/Response protocol.
This means that a request is usually completed in the following two steps:
The client sends a command to the server, then reads the server's response from the TCP socket, usually in a blocking way.
The server processes the command and sends the response back to the client.
For instance
Client: INCR X
Server: 1
Client: INCR X
Server: 2
Client: INCR X
Server: 3
Client: INCR X
Server: 4
Clients and servers are connected over a network link. That link can be very fast (such as a loopback interface on the same host) or very slow (such as a connection with many hops between two hosts). Whatever the network, it takes time for a packet to travel from client to server and back again.
This time is called RTT (Round Trip Time). It is easy to see how RTT affects performance when a client needs to perform many requests in a row (for example, adding many elements to a list, or filling a database with many keys). For instance, if the RTT is 250 ms (a very slow Internet link), then even if the server can process 100k requests per second, a single blocking client can issue at most 4 requests per second.
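The arithmetic above can be sketched as a quick calculation (pure Python; the function name is made up for this illustration):

```python
def max_requests_per_second(rtt_seconds: float) -> float:
    """With a blocking request/response loop, every command pays one
    full round trip, so throughput is capped at 1 / RTT no matter how
    fast the server itself is."""
    return 1.0 / rtt_seconds

# A slow link with a 250 ms RTT caps a single blocking client at
# 4 requests per second, even if the server can handle 100k/s.
print(max_requests_per_second(0.250))  # → 4.0
```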
On a loopback interface the RTT is much shorter (for example, pinging 127.0.0.1 typically measures around 0.04 ms), but it still adds up to a lot of overhead when performing many consecutive writes.
Fortunately, there is a way to reduce the cost in this kind of scenario.
Redis Pipelining
A Request/Response-style server has a useful property: it can keep accepting new requests even if the client has not yet read the previous responses. This makes it possible to send many commands to the server without waiting for the replies at all, and then read all the replies in a single step.
This technique is called pipelining, and it has been in wide use for decades. For example, many POP3 protocol implementations support this feature, which dramatically speeds up downloading new email from the server.
Redis has supported pipelining since its early days, so whatever version you are running, you can use it. Here is an example using the netcat utility:
$(printf "PING\ r\ nPING\ r\ nPING\ r\ n"; sleep 1) | nc localhost 6379+PONG+PONG+PONG
This time we do not pay the RTT cost for every call; instead, the three commands are sent at once. Returning to the earlier example, with pipelining the order of operations becomes:
Client: INCR X
Client: INCR X
Client: INCR X
Client: INCR X
Server: 1
Server: 2
Server: 3
Server: 4
Important note: while the client sends commands through a pipeline, the server is forced to queue the replies in memory. So when sending a large number of commands through a pipeline, it is best to send them in batches of a reasonable size: for instance, send 10k commands, read the replies, send another 10k commands, read the replies, and so on. The total time is nearly the same, but the extra memory used is at most what is needed to queue the replies for those 10k commands. (Choose a reasonable batch size to avoid exhausting memory.)
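That batching pattern can be sketched as follows (pure Python; the chunked helper is made up for this example and is not part of any Redis client library):

```python
from itertools import islice

def chunked(commands, batch_size):
    """Yield successive batches of at most batch_size commands.
    (Hypothetical helper, not part of any Redis client library.)"""
    it = iter(commands)
    while batch := list(islice(it, batch_size)):
        yield batch

# With a real client, each batch would be sent as one pipeline and its
# replies read back before the next batch is sent, so the server never
# queues more than roughly batch_size replies in memory at once.
commands = [("INCR", "X")] * 25_000
batches = list(chunked(commands, 10_000))
print([len(b) for b in batches])  # → [10000, 10000, 5000]
```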
It's not just a matter of RTT
Pipelining is not only a way to reduce the latency cost of RTTs; it also greatly increases the number of operations you can perform per second on a given server. The reason is that, while serving one command is cheap from the point of view of accessing the data structures and producing the reply, it is expensive from the point of view of socket I/O. Each command involves read() and write() system calls, which means switching from user mode to kernel mode, and that context switch is a large speed penalty.
When pipelining is used, many commands are usually read with a single read() system call, and many replies are delivered with a single write() call. Because of this, the number of queries performed per second initially grows almost linearly with longer pipelines, eventually reaching about 10 times the baseline obtained without pipelining, as shown in the following figure:
A real world code example
The original English documentation includes a short Ruby benchmark at this point (not reproduced here); its result is that using pipelining improved performance roughly fivefold.
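To make the idea concrete without a Redis server, here is a rough, stdlib-only sketch (not the benchmark from the Redis documentation): a tiny fake server answers +PONG to each PING over a local socket pair, and we time one write()/read() pair per command versus a single pipelined batch. The names fake_server and bench are made up for this example.

```python
import socket
import threading
import time

def fake_server(conn, total):
    """Stand-in for Redis: reply +PONG\r\n to each PING\r\n line."""
    buf = b""
    answered = 0
    while answered < total:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            if line == b"PING":
                conn.sendall(b"+PONG\r\n")
                answered += 1

def bench(pipelined, n=1000):
    """Time n PING commands, either one round trip at a time or pipelined."""
    a, b = socket.socketpair()
    server = threading.Thread(target=fake_server, args=(b, n), daemon=True)
    server.start()
    start = time.perf_counter()
    if pipelined:
        a.sendall(b"PING\r\n" * n)            # one big write ...
        got = b""
        while got.count(b"+PONG\r\n") < n:    # ... then read every reply
            got += a.recv(65536)
    else:
        for _ in range(n):                    # one write + one read per command
            a.sendall(b"PING\r\n")
            reply = a.recv(64)
            assert reply.startswith(b"+PONG")
    elapsed = time.perf_counter() - start
    server.join()
    a.close()
    b.close()
    return elapsed

print(f"without pipelining: {bench(False):.4f}s")
print(f"with pipelining:    {bench(True):.4f}s")
```

Even over a local socket pair, the pipelined version issues far fewer system calls, which is the effect the section above describes.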
Pipelining vs Scripting
Redis scripting (available since version 2.6) can handle many pipelining use cases even more efficiently, because a script performs a great deal of the work on the server side. A big advantage of scripting is that it can read and write data with minimal latency, making operations like read, compute, write very fast. Pipelining cannot help in that scenario, because the client needs the reply of the read before it can issue the write. Sometimes an application may also want to send EVAL or EVALSHA commands within a pipeline; this is entirely possible, and Redis explicitly supports it with the SCRIPT LOAD command, which guarantees that EVALSHA can then be called without the risk of failing.
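For instance, a read-modify-write ("GET a key, compute on the value, SET it back") cannot be fully pipelined, because the SET depends on the GET's reply, while a single EVAL ships the whole operation in one round trip. The sketch below only builds the command bytes in RESP (the wire protocol Redis speaks) without contacting a server; the Lua script is a made-up illustration that assumes the key already holds a string.

```python
def resp_encode(*args):
    """Encode one command in RESP (the Redis wire protocol):
    an array of bulk strings."""
    out = [f"*{len(args)}\r\n".encode()]
    for a in args:
        data = a.encode() if isinstance(a, str) else a
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# Read-modify-write as a single server-side script: one round trip,
# instead of a GET (wait for the reply) followed by a SET.
script = ("local v = redis.call('GET', KEYS[1]); "
          "return redis.call('SET', KEYS[1], v .. ARGV[1])")
cmd = resp_encode("EVAL", script, "1", "mykey", "-suffix")
print(cmd[:4])  # → b'*5\r\n'
```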
Appendix: Why are busy loops slow even on the loopback interface?
Having read all of the above, you may still wonder why a Redis benchmark like the following (in pseudocode) is slow even when client and server run on the same physical machine:
FOR-ONE-SECOND:
    Redis.SET("foo", "bar")
END
After all, the Redis process and the benchmark run on the same box, and messages are simply copied through memory with no real network latency involved, right? The reason is that processes in an operating system are not always running: it is the kernel scheduler that lets a process run, and only when it is scheduled. For example, the benchmark process runs, reads the reply from the Redis server (related to the last command executed), and writes a new command. That command now sits in the loopback interface's buffer, but before it can be read, the kernel must schedule the Redis server process (currently blocked waiting for input), and so on, over and over. So because of the kernel's scheduling mechanism, even on the loopback interface there is still latency that behaves like network delay. In short, a busy-loop benchmark over the loopback interface is not a sensible way to measure the performance of a networked server, and you should avoid benchmarking this way.
That concludes this article on how to use pipelining to accelerate queries in Redis. I hope the content shared here is helpful to you.