This article explains in detail why Redis is still so fast even though it uses a single thread. The editor finds it very practical and shares it here for your reference; I hope you get something out of it after reading.
Why does Redis use a single thread?
The cost of multithreading
In general, without careful system design, adding threads helps only up to a point: throughput rises when the first few threads are added, then grows slowly as more are added, and can even start to fall.
The key bottleneck is that a system usually has shared resources that are accessed by multiple threads at the same time. To keep those shared resources correct, extra mechanisms such as locking are needed to guarantee thread safety, and those mechanisms bring extra overhead.
Take List, one of the most commonly used types, as an example. Suppose Redis were multithreaded and two threads A and B performed LPUSH and RPOP on the same List, respectively. For every execution to produce the same result, namely that thread B pops exactly the data thread A pushed, the two operations have to be executed serially. This is the shared-resource concurrency-control problem that the multithreaded programming model faces.
Concurrent access control has always been a hard problem in multithreaded development: if a mutex is used naively, then even as threads are added, most of them just wait to acquire the lock, execution degrades from parallel to serial, and system throughput does not grow with the number of threads.
At the same time, adding concurrency control reduces the readability and maintainability of the code, so Redis simply adopts the single-threaded model.
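A minimal sketch of the serialization cost described above (illustrative C with pthreads, not Redis code): two threads share one list, and every push or pop has to hold the same mutex, so the operations cannot truly run in parallel no matter how many threads are added.

```c
/* Hypothetical sketch, not Redis source: two threads sharing one list must
 * serialize on a mutex, so adding threads does not add throughput here. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct node { int val; struct node *next; } node;

static node *head = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void *pusher(void *arg) {          /* plays the role of LPUSH */
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        node *n = malloc(sizeof(node));
        n->val = i;
        pthread_mutex_lock(&list_lock);   /* every push waits for the lock */
        n->next = head;
        head = n;
        pthread_mutex_unlock(&list_lock);
    }
    return NULL;
}

static void *popper(void *arg) {          /* plays the role of the pop side */
    (void)arg;
    int popped = 0;
    while (popped < 100000) {
        pthread_mutex_lock(&list_lock);   /* every pop waits for the same lock */
        if (head) {
            node *n = head;
            head = n->next;
            free(n);
            popped++;
        }
        pthread_mutex_unlock(&list_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, pusher, NULL);
    pthread_create(&b, NULL, popper, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("done: pushes and pops were forced to run one at a time");
    return 0;
}
```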
Why is single-threaded Redis so fast?
That Redis is fast despite using a single thread is the result of several deliberate design choices:
Most Redis operations are performed in memory.
Efficient data structures such as hash tables and skip lists are used.
A multiplexing mechanism lets Redis handle a large number of client requests concurrently during network IO, which yields high throughput.
Since Redis performs IO in a single thread, multiplexing does not help if that thread blocks, so it is not hard to imagine that Redis must also be designed around the potential blocking points in network and IO operations.
Potential blocking points in network and IO operations
In network communication, to process a GET request the server must listen for client connections (bind/listen), establish a connection with the client (accept), read the request from the socket (recv), parse the request sent by the client (parse), and finally return the result to the client (send).
The most basic single-threaded implementation simply performs these operations one after another.
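A minimal sketch of that naive single-threaded flow (plain C sockets, not Redis source); the comments mark where the thread can stall:

```c
/* Hypothetical sketch of the naive single-threaded flow, not Redis source.
 * Each request is handled by running the steps strictly in order. */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6379);

    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));   /* bind   */
    listen(listen_fd, 128);                                    /* listen */

    for (;;) {
        /* accept(): blocks until a client actually connects */
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0) continue;

        char buf[1024];
        /* recv(): blocks until this client sends data; every other
         * client is stuck waiting behind this one */
        ssize_t n = recv(conn_fd, buf, sizeof(buf), 0);
        if (n > 0) {
            /* parse the request (omitted) and send back a reply */
            const char *reply = "+OK\r\n";
            send(conn_fd, reply, strlen(reply), 0);            /* send   */
        }
        close(conn_fd);
    }
}
```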
Of these steps, accept and recv are the potential blocking points:
If Redis has received a connection request but the connection cannot be established, the thread blocks in accept(), and no other client can connect to Redis in the meantime.
Likewise, when Redis reads data from a client with recv(), the thread blocks if the data never arrives.
A high-performance IO model based on multiplexing
To get around blocking in IO, Redis uses Linux's IO multiplexing mechanism (select/epoll), which allows multiple listening sockets and connected sockets to be monitored in the kernel at the same time.
The kernel keeps watching these sockets for connection or data requests; as soon as a request arrives, it is handed to the Redis thread for processing, which gives the effect of a single Redis thread handling multiple IO streams.
The Redis thread therefore never blocks on one particular client request, so it can stay connected to many clients and serve their requests at the same time.
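A stripped-down sketch of this idea using Linux epoll directly (Redis actually hides select/epoll/kqueue behind its own ae event library, so treat this as an illustration rather than Redis code):

```c
/* Illustrative epoll loop, not Redis source: one thread watches the
 * listening socket and every connected socket at the same time, and only
 * acts on sockets that are actually ready. */
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_EVENTS 64

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6379);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    int epfd = epoll_create1(0);
    struct epoll_event ev = {0}, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);   /* watch the listening socket */

    for (;;) {
        /* Wait for ANY watched socket to become readable; the thread is not
         * stuck on one particular client. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* A connection request arrived: accept() will not block now. */
                int conn_fd = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = {0};
                cev.events = EPOLLIN;
                cev.data.fd = conn_fd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn_fd, &cev);
            } else {
                /* Data arrived on a connected socket: recv() will not block now. */
                char buf[1024];
                ssize_t r = recv(fd, buf, sizeof(buf), 0);
                if (r <= 0) { close(fd); continue; }
                send(fd, "+OK\r\n", 5, 0);   /* parse + reply (simplified) */
            }
        }
    }
}
```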
Callback mechanism
Once select/epoll detects that a request has arrived on a file descriptor, it triggers the corresponding event and puts it into a queue; the Redis thread keeps draining this event queue, which is what makes the model an event-based callback mechanism.
For example, Redis registers accept and get callback functions for the Accept and Read events. When the Linux kernel detects a connection request or a read request, it triggers the Accept or Read event and calls back into the corresponding accept or get function in Redis to handle it.
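A toy sketch of that dispatch loop (the struct and handler names here are invented for illustration; the real event loop lives in Redis's ae library):

```c
/* Toy illustration of event-based callbacks; names are invented, this is not
 * the real Redis ae event loop. */
#include <stdio.h>

typedef void (*event_handler)(int fd);

typedef struct {
    int fd;                  /* which socket the event happened on            */
    event_handler handler;   /* callback registered for this kind of event    */
} fired_event;

/* Callbacks the "server" registers up front. */
static void on_accept(int fd) { printf("Accept event on fd %d -> accept client\n", fd); }
static void on_read(int fd)   { printf("Read event on fd %d -> read and run command\n", fd); }

int main(void) {
    /* Pretend the kernel (select/epoll) just reported these ready events;
     * in reality they would come from epoll_wait(). */
    fired_event queue[] = {
        { 3, on_accept },   /* connection request on the listening socket     */
        { 7, on_read   },   /* data arrived on an already-connected socket    */
        { 8, on_read   },
    };

    /* The single Redis-like thread drains the queue and invokes each callback. */
    int n = sizeof(queue) / sizeof(queue[0]);
    for (int i = 0; i < n; i++)
        queue[i].handler(queue[i].fd);

    return 0;
}
```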
Performance bottlenecks of Redis
From the analysis above, even though the multiplexing mechanism lets Redis monitor multiple client requests at the same time, Redis still has some performance bottlenecks, and we usually need to steer around them in our own code.
1. Time-consuming operations
If any single request takes a long time inside Redis, it hurts the performance of the whole server: subsequent requests are not processed until the slow request has finished.
We have to avoid such operations when designing business scenarios; for the same reason, Redis's lazy-free mechanism moves the time-consuming work of freeing memory into an asynchronous thread.
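One practical consequence: deleting a very large key with DEL blocks the single main thread while the memory is reclaimed, whereas UNLINK (Redis 4.0+) hands the reclamation to the lazy-free background thread. A small client-side sketch, assuming the hiredis C client and a hypothetical key named huge:set:

```c
/* Sketch using the hiredis client; "huge:set" is a hypothetical big key.
 * UNLINK returns quickly and lets Redis free the memory asynchronously,
 * whereas DEL on the same key could stall the single main thread. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *ctx = redisConnect("127.0.0.1", 6379);
    if (ctx == NULL || ctx->err) {
        fprintf(stderr, "connection error\n");
        return 1;
    }

    /* Prefer UNLINK over DEL for keys that may hold millions of elements. */
    redisReply *reply = redisCommand(ctx, "UNLINK %s", "huge:set");
    if (reply) {
        printf("keys unlinked: %lld\n", reply->integer);
        freeReplyObject(reply);
    }

    redisFree(ctx);
    return 0;
}
```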
2. High-concurrency scenarios
When concurrency is very high, the single thread becomes a bottleneck for reading and writing client IO data: even with IO multiplexing, one thread can only read data from clients in turn and cannot exploit multiple CPU cores.
Since version 6.0, Redis can use multiple cores with IO threads to read and write client data (enabled through the io-threads and io-threads-do-reads directives in redis.conf), but only the reading and writing of client data runs in parallel; the actual execution of each command is still single-threaded.
Other interesting questions related to Redis
Let's take this opportunity to look at a few interesting questions related to Redis.
Why use Redis at all? Isn't accessing local memory directly good enough?
There is no hard rule here. Data that is rarely modified can simply live in local process memory; it does not have to go into Redis. The real issue is consistency: if such data is modified while it sits in local memory, only the copy on that one server gets updated. If the data is kept in Redis instead, every server reads it from the same Redis server, which avoids the inconsistency.
What if there is too much data to fit? For example, what if I want to cache 100 GB of data?
Here is also a small plug: Tair is Taobao's open-source distributed key-value caching system. It inherits the rich operations of Redis, in theory the total amount of data it can hold is unlimited, and it has been further improved in terms of availability, scalability and reliability. Interested readers can look into it.
That is the end of this article on why single-threaded Redis is so fast. I hope the content above has been helpful and that you have learned something; if you found the article worthwhile, please share it so more people can see it.