
What is the principle of Redis threading model?

2025-01-20 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article introduces the principle of the Redis threading model. Many readers have doubts about how it works, so the editor has sorted the material into a simple, step-by-step explanation. I hope it helps answer your questions — please follow along and study!

I. Overview

As we all know, Redis is a high-performance data store and a key component in high-concurrency system design — a sharp tool for improving system performance. A deep understanding of why Redis is fast is therefore increasingly important. Of course, the high performance of Redis is a systematic engineering effort involving many factors; this article focuses on the IO model of Redis and the threading model built on top of it.

Starting from the origin of IO, we cover blocking IO, non-blocking IO, and multiplexing IO. On top of multiplexing IO, we sort out several different Reactor models and analyze their advantages and disadvantages. Based on the Reactor model, we then analyze the IO model and threading model of Redis, summarize their strengths and weaknesses, and look at the subsequent Redis multithreading scheme. The focus of this article is to trace the design idea of the Redis threading model, so that a single line of thinking runs through the whole piece.

Note: the code in this article is pseudo-code for illustration only; it must not be used in a production environment.

II. The development history of the network IO model

The network IO models we usually talk about include blocking IO, non-blocking IO, multiplexing IO, signal-driven IO, and asynchronous IO. Since this article focuses on Redis, we concentrate on blocking IO, non-blocking IO, and multiplexing IO to help you better understand the Redis network model.

Let's look at the picture below first.

2.1 Blocking IO

There are two kinds of blocking IO we often talk about: single-threaded blocking and multithreaded blocking. Two concepts are involved here: blocking and threads.

Blocking: the calling thread is suspended until the call returns a result; only when it gets the result does it resume.

Threads: the number of threads involved in making the system calls.

Operations such as establishing a connection, reading, and writing all involve system calls, and each of these is itself a blocking operation.

2.1.1 Single-thread blocking

The server handles everything with a single thread: when a client request arrives, the server uses the main thread to handle the connection, read, write, and other operations.

The following code simulates the single-threaded blocking mode.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class BioTest {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8081);
        while (true) {
            // accept() blocks until a client connects
            Socket socket = server.accept();
            System.out.println("accept port:" + socket.getPort());
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String inData = null;
            try {
                // readLine() blocks until the client sends a line
                while ((inData = in.readLine()) != null) {
                    System.out.println("client port:" + socket.getPort());
                    System.out.println("input data:" + inData);
                    if ("close".equals(inData)) {
                        break;
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
```

We then use two clients to initiate connection requests at the same time to simulate the single-thread blocking mode. Both connections are initiated simultaneously, yet the server log shows that the server accepts only one of them: the main thread is blocked on the read method of the first connection.

We try closing the first connection and watch the second. What we hope to see is that the main thread returns and accepts the new client connection.

The log shows that after the first connection is closed, the second connection request is processed. In other words, the second request was queued until the main thread woke up to receive the next request, just as we expected.

At this point you can't help but ask: why?

The main reason is that the three functions accept, read, and write are all blocking. While the main thread sits in one of these system calls, it is blocked, and connections from other clients cannot be served.

From the above process it is easy to see the shortcoming: the server can handle only one connection at a time, the CPU is not fully utilized, and performance is low. How can we make full use of a multi-core CPU? Naturally, we think of multithreading.

2.1.2 Multithreaded blocking

For engineers, code explains everything, so let's go straight to the code.

BIO multithreading

```java
package net.io.bio;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class BioTest {
    public static void main(String[] args) throws IOException {
        final ServerSocket server = new ServerSocket(8081);
        while (true) {
            // the main thread only accepts; each connection gets its own thread
            final Socket socket = server.accept();
            System.out.println("accept port:" + socket.getPort());
            new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                        String inData = null;
                        while ((inData = in.readLine()) != null) {
                            System.out.println("client port:" + socket.getPort());
                            System.out.println("input data:" + inData);
                            if ("close".equals(inData)) {
                                break;
                            }
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                        try {
                            socket.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }
}
```

Similarly, we initiate two requests in parallel.

Both requests are accepted, and the server spawns two new threads to handle the client connections and their subsequent requests.

Multithreading solves the problem of the server handling only one request at a time, but it introduces a new one: if there are many client connections, the server creates a large number of threads to serve them, and threads themselves are expensive — both creation and context switching consume resources. How do we solve that?

2.2 Non-blocking IO

What if we put all the Sockets (strictly, file handles — we substitute Socket for the concept of fd here to keep the concepts to a minimum and reduce the reading burden) into a queue and use a single thread to poll the state of all of them, taking a Socket out whenever it is ready? Would that reduce the number of threads on the server side?

Let's look at the code. Pure non-blocking mode is rarely used in practice; to demonstrate the logic, we simulate it as follows.

```java
package net.io.bio;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class NioTest {
    public static void main(String[] args) throws IOException {
        final ServerSocket server = new ServerSocket(8082);
        // accept() gives up after 1s so the loop can go back to polling
        server.setSoTimeout(1000);
        List<Socket> sockets = new ArrayList<Socket>();
        while (true) {
            try {
                Socket socket = server.accept();
                // reads give up after 500ms so one slow client cannot stall the loop
                socket.setSoTimeout(500);
                sockets.add(socket);
                System.out.println("accept client port:" + socket.getPort());
            } catch (SocketTimeoutException e) {
                System.out.println("accept timeout");
            }
            // Simulate non-blocking: poll every connected socket; if data is
            // ready it is processed, otherwise the read times out and we
            // continue polling the next socket.
            Iterator<Socket> it = sockets.iterator();
            while (it.hasNext()) {
                Socket socketTemp = it.next();
                try {
                    BufferedReader in = new BufferedReader(new InputStreamReader(socketTemp.getInputStream()));
                    String inData = null;
                    while ((inData = in.readLine()) != null) {
                        System.out.println("input data client port:" + socketTemp.getPort() + " data:" + inData);
                        if ("close".equals(inData)) {
                            socketTemp.close();
                            it.remove();
                            break;
                        }
                    }
                } catch (SocketTimeoutException e) {
                    System.out.println("input client loop " + socketTemp.getPort());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
```

System initialization, waiting for connection

Two client connections are initiated and the thread begins to poll for data in both connections.

After the two connections enter data respectively, the polling thread finds that the data is ready and starts the relevant logical processing (either single-thread or multi-thread).

A flow chart helps explain (the system actually works with file handles; we use Socket here to make it easier to understand).

The server has a dedicated thread that polls all the Sockets, asking the operating system whether the relevant events have completed; ready ones are processed, and otherwise polling continues. Now think with me: what problems does this approach bring?

CPU idling and repeated system calls (every poll asks the kernel whether data is ready), which wastes resources. Is there a mechanism that solves this problem?

2.3 IO Multiplexing

In IO multiplexing, the server no longer has a dedicated application-level polling thread; processing is instead event-driven. When a read, write, or connection event arrives, the server thread is woken up to run the relevant logic. The related code is simulated as follows.

IO multiplexing

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.Charset;
import java.util.Iterator;
import java.util.Set;

public class NioServer {
    private static Charset charset = Charset.forName("UTF-8");

    public static void main(String[] args) {
        try {
            Selector selector = Selector.open();
            ServerSocketChannel channel = ServerSocketChannel.open();
            channel.bind(new InetSocketAddress(8083));
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_ACCEPT);
            while (true) {
                int select = selector.select();
                if (select == 0) {
                    System.out.println("select loop");
                    continue;
                }
                System.out.println("os data ok");
                Set<SelectionKey> selectionKeys = selector.selectedKeys();
                Iterator<SelectionKey> iterator = selectionKeys.iterator();
                while (iterator.hasNext()) {
                    SelectionKey selectionKey = iterator.next();
                    if (selectionKey.isAcceptable()) {
                        ServerSocketChannel server = (ServerSocketChannel) selectionKey.channel();
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        // register the new connection for read events
                        client.register(selector, SelectionKey.OP_READ);
                        // keep accepting further connection events
                        selectionKey.interestOps(SelectionKey.OP_ACCEPT);
                    } else if (selectionKey.isReadable()) {
                        // get the SocketChannel behind this readable event
                        SocketChannel client = (SocketChannel) selectionKey.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(1024);
                        StringBuilder content = new StringBuilder();
                        while (client.read(buffer) > 0) {
                            buffer.flip();
                            content.append(charset.decode(buffer));
                            buffer.clear();
                        }
                        System.out.println("client port:" + client.getRemoteAddress().toString()
                                + ", input data:" + content.toString());
                    }
                    iterator.remove();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Create two connections at the same time

Two connections are created without blocking

Receive read and write without blocking

Then use a flow chart to help explain (the system actually uses a file handle, and now it is replaced by Socket, which is convenient for everyone to understand).

Of course, the operating system provides several multiplexing implementations — the commonly used select(), as well as epoll — which we will not explain in depth here; interested readers can consult the relevant documents. IO later developed further toward asynchronous, event-driven, and other patterns, which we also will not elaborate on, since our goal is to explain the development of the Redis threading model.

III. Interpretation of the NIO threading model

We have now covered blocking, non-blocking, and IO multiplexing. Which one does Redis use?

Redis uses IO multiplexing, so we focus on understanding how to apply the multiplexing mode well in our own systems — which inevitably brings us to the Reactor pattern.

First, let's define the relevant terms.

Reactor: similar to the Selector in NIO programming, responsible for dispatching I/O events.

Acceptor: the branch logic that handles a connection after the Reactor receives a connection event.

Handler: the operation class for message read and write processing.

3.1 Single Reactor single thread model

Processing flow

Reactor listens for connection events and Socket events. When a connection event comes, it is handed over to Acceptor, and when there is a Socket event, it is handed over to the corresponding Handler.
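The dispatch loop described above can be sketched with plain objects. This is a toy simulation, not real NIO: `SingleReactorDemo`, its `Event` type, and the logging are all illustrative names.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy single-Reactor single-thread sketch: one loop both accepts connections
// and handles Socket events, dispatching by event type.
public class SingleReactorDemo {
    enum Type { CONNECT, READ }

    static class Event {
        final Type type;
        final String payload;
        Event(Type type, String payload) { this.type = type; this.payload = payload; }
    }

    // the Reactor: a single thread drains the event queue and dispatches
    static String run(Queue<Event> events) {
        StringBuilder log = new StringBuilder();
        Event e;
        while ((e = events.poll()) != null) {
            if (e.type == Type.CONNECT) {
                log.append("accept:").append(e.payload).append(';');  // Acceptor branch
            } else {
                log.append("handle:").append(e.payload).append(';');  // Handler branch
            }
        }
        return log.toString();
    }

    public static void main(String[] args) {
        Queue<Event> events = new ArrayDeque<>();
        events.add(new Event(Type.CONNECT, "client-1"));
        events.add(new Event(Type.READ, "GET k1"));
        System.out.println(run(events)); // accept:client-1;handle:GET k1;
    }
}
```

Because the accept branch and the handle branch run on the same thread, a slow Handler delays every other client — exactly the shortcoming discussed next.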

Advantages

The model is relatively simple, and all the processing happens in one thread.

It is relatively easy to implement, and the modules are well decoupled: the Reactor is responsible for multiplexing and event dispatch, the Acceptor for connection events, and the Handler for Socket read and write events.

Shortcoming

There is only one thread: connection handling and business processing share it, so the multi-core advantage of the CPU cannot be exploited.

The system performs well when traffic is not particularly heavy and business processing is fast. When traffic is heavy and read/write events are time-consuming, it easily becomes the performance bottleneck of the system.

How do we solve this? Since the business-processing logic may cause the bottleneck, we can take it out and hand it to a thread pool, reducing the load on the main thread on the one hand and exploiting the multi-core CPU on the other. Understand this point thoroughly and you will understand the design idea behind Redis's evolution from a single-threaded to a multithreaded model.

3.2 Single Reactor multithreading model

Compared with the single Reactor single thread model, this model simply hands the business-logic processing over to a thread pool.

Processing flow

Reactor listens for connection events and Socket events. When a connection event comes, it is handed over to Acceptor, and when there is a Socket event, it is handed over to the corresponding Handler.

After the Handler completes a read event, it packages the work as a task object and hands it to the thread pool, so the business logic is processed by other threads.
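The hand-off above can be sketched as follows. The names (`WorkerPoolDemo`, `process`) are illustrative, and the daemon worker pool stands in for the business-logic pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Single-Reactor multithread hand-off: the main thread finishes the read, then
// packages the business logic as a task for a worker pool.
public class WorkerPoolDemo {
    // daemon threads so the demo JVM can exit; a real server manages shutdown
    static final ExecutorService workers = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // stand-in for the business logic, executed off the main thread
    static String process(String command) { return "OK:" + command; }

    // called by the Handler after the read completes; the demo waits for the
    // result only so it can return it — a real server would not block here
    static String onRead(String command) {
        try {
            return workers.submit(() -> process(command)).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(onRead("SET k v")); // OK:SET k v
    }
}
```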

Advantages

Lets the main thread focus on handling the common events (connect, read, write), further decoupling the design.

Take advantage of the multicore advantages of CPU.

Shortcoming

This model seems perfect. But think again: if there are many clients and traffic is particularly heavy, handling the common events (read and write) may itself become the bottleneck of the main thread, because every read and write operation involves a system call.

Is there a good way to solve this? From the analysis above, have you noticed a pattern? Whenever some step becomes the system bottleneck, we try to take it out and hand it to another thread. Does that approach apply here too?

3.3 Multi-Reactor multithreading model

Compared with the single Reactor multithreading model, this model takes the Socket read/write handling out of the mainReactor and gives it to subReactor threads.

Processing flow

The mainReactor main thread is responsible for listening for and handling connection events. After the Acceptor completes the connection process, the main thread assigns the connection to a subReactor.

The subReactor is responsible for monitoring and processing the Sockets assigned by the mainReactor; when a Socket event arrives, it is handed to the corresponding Handler.

After the Handler completes a read event, it packages the work as a task object and hands it to the thread pool, so the business logic is processed by other threads.
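The assignment step can be sketched minimally. Round-robin is only one possible policy, and everything here is illustrative; a real implementation would register the accepted channel with the chosen subReactor's own Selector.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Multi-Reactor sketch: the mainReactor accepts connections and assigns each
// one to a subReactor; each subReactor would then run its own event loop.
public class MultiReactorDemo {
    static class SubReactor {
        final int id;
        SubReactor(int id) { this.id = id; }
    }

    static final SubReactor[] subs = { new SubReactor(0), new SubReactor(1), new SubReactor(2) };
    static final AtomicInteger next = new AtomicInteger();

    // round-robin assignment of an accepted connection to a subReactor
    static SubReactor assign() {
        return subs[Math.floorMod(next.getAndIncrement(), subs.length)];
    }

    public static void main(String[] args) {
        // four incoming connections wrap around the three subReactors
        for (int i = 0; i < 4; i++) {
            System.out.println("connection " + i + " -> subReactor " + assign().id);
        }
    }
}
```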

Advantages

Lets the main thread focus on connection events and the child threads focus on read and write events, further decoupling the design.

Take advantage of the multicore advantages of CPU.

Shortcoming

The implementation is more complex; consider it in scenarios where single-machine performance is pushed to the extreme.

IV. The threading model of Redis

4.1 Overview

We have reviewed the development history of the network IO model and the Reactor patterns built on IO multiplexing. So which Reactor pattern does Redis use? Before answering, let's sort out a few concepts.

There are two types of events in the Redis server: file events and time events.

File events: here 'file' can be understood as Socket-related events, such as connect, read, and write.

Time events: can be understood as scheduled-task events, such as the periodic RDB persistence operations.

This article focuses on the Socket-related events.

4.2 Model diagram

First, let's take a look at the threading model diagram of Redis services.

IO multiplexing is responsible for listening for each event (connect, read, write, and so on). When an event occurs, it is put into the event queue, and the event dispatcher distributes it according to its type.

A connection event is dispatched to the connection reply processor; Redis commands such as GET and SET are dispatched to the command request processor.

After the command is processed, a command reply event is generated, which again travels from the event queue through the event dispatcher to the command reply processor, and the response is sent to the client.
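The queue-and-dispatch step described above can be sketched like this. The processor names mirror the description; everything else (`DispatcherDemo`, the string event types) is illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the event dispatcher: events come off the queue and are routed to
// the processor registered for their type.
public class DispatcherDemo {
    static List<String> run() {
        Map<String, Consumer<String>> processors = new HashMap<>();
        List<String> trace = new ArrayList<>();
        processors.put("connect", e -> trace.add("connection-reply:" + e));
        processors.put("command", e -> trace.add("command-request:" + e));
        processors.put("reply",   e -> trace.add("command-reply:" + e));

        // events in the order they come off the event queue
        String[][] queue = { { "connect", "client-1" }, { "command", "SET k v" }, { "reply", "+OK" } };
        for (String[] event : queue) {
            processors.get(event[0]).accept(event[1]); // the dispatch step
        }
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run());
        // [connection-reply:client-1, command-request:SET k v, command-reply:+OK]
    }
}
```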

4.3 An interactive process between client and server

4.3.1 Connection process

Connection process

The main thread of the Redis server listens on a fixed port and binds the connection event to the connection reply processor.

After the client initiates a connection, a connection event is triggered; the IO multiplexer packages the connection event and puts it into the event queue, and the event dispatcher then dispatches it to the connection reply processor.

The connection reply processor creates the client object and the Socket object. Here we focus on the Socket object: an ae_readable event is generated and associated with the command request processor, indicating that this Socket is interested in readable events from now on — that is, the server begins to receive commands from the client.

The whole process so far is handled by the main thread.

4.3.2 Command execution process

SET command execution process

The client initiates a SET command. When the IO multiplexer detects the event (a readable event), it packages the data as an event and puts it into the event queue (the event was bound to the command request processor in the previous process).

The event dispatcher dispatches the event to the command request processor according to its type.

The command request processor reads the data from the Socket, executes the command, then generates an ae_writable event and binds the command reply processor.

When the IO multiplexer detects the writable event, it packages the data as an event and puts it into the event queue; the event dispatcher dispatches it to the command reply processor according to its type.

The command reply processor writes the data back to the client through the Socket.

4.4 Advantages and disadvantages of the model

From the above process analysis we can see that Redis uses a single-threaded Reactor model. We have already analyzed the strengths and weaknesses of that model — so why does Redis adopt it?

The characteristics of Redis itself

Command execution is based on memory operations, so the business-processing logic is fast; even with command processing on a single thread, Redis can sustain high performance.

Advantages

For the advantages of the Reactor single-threaded model, see above.

Shortcoming

The shortcomings of the single-threaded Reactor model also show up in Redis; the only difference is that the business-logic processing (command execution) is not the system bottleneck.

As traffic grows, the cost of the IO operations becomes more and more noticeable (a read copies data from the kernel into the application; a write copies data from the application into the kernel). Beyond a certain threshold, the system bottleneck appears.

How does Redis solve it?

Haha — take the time-consuming part off the main line? Is that what the new version of Redis does? Let's take a look.

4.5 Redis multithreading model

The Redis multithreading model differs somewhat from both the multi-Reactor multithreading model and the single-Reactor multithreading model; it borrows ideas from the two Reactor models at the same time, as follows.

Redis makes the IO operations multithreaded, while its logic processing (the command execution flow) remains single-threaded. This borrows the single-Reactor idea, with some differences in implementation.

Making the IO operations multithreaded is consistent with the idea behind deriving multi-Reactor from single Reactor: pull the IO operations out of the main thread.

General process of command execution

The client sends a request command, which triggers a read-ready event. The server's main thread puts the Socket (to simplify understanding, a connection is represented by a Socket) into a queue; the main thread is no longer responsible for reading.

The IO threads read the client request commands from the Sockets while the main thread busy-polls, waiting for all the IO threads to complete their read tasks. The IO threads are only responsible for reading, not for executing commands.

The main thread then executes all the commands in one go — the execution process is the same as in the single-threaded model — and puts the connections that need replies into another queue; the IO threads are responsible for writing them out (the main thread can also write).

The main thread busy-polls again, waiting for all the IO threads to complete their write tasks.
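The four steps above can be simulated without real sockets. This is a sketch of the described flow only: `IoThreadsDemo`, the in-memory `db`, and the command strings are all illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Redis 6 style split: IO threads do the reads in parallel, the main thread
// busy-polls until they finish, then executes every command single-threaded.
public class IoThreadsDemo {
    static List<String> run(List<String> pending) {
        String[] read = new String[pending.size()];
        AtomicInteger done = new AtomicInteger();
        ExecutorService ioThreads = Executors.newFixedThreadPool(2);

        // 1. hand each "socket" to an IO thread; IO threads only read, never execute
        for (int i = 0; i < pending.size(); i++) {
            final int idx = i;
            ioThreads.execute(() -> { read[idx] = pending.get(idx); done.incrementAndGet(); });
        }
        // 2. the main thread busy-polls until all IO threads finish reading
        while (done.get() < pending.size()) { Thread.yield(); }
        ioThreads.shutdown();

        // 3. the main thread executes all commands serially, as in the single-thread model
        Map<String, String> db = new HashMap<>();
        List<String> replies = new ArrayList<>();
        for (String cmd : read) {
            String[] p = cmd.split(" ");
            if (p[0].equals("SET")) { db.put(p[1], p[2]); replies.add("+OK"); }
            else { replies.add(db.get(p[1])); }
        }
        // 4. writing the replies back would again be queued for the IO threads
        return replies;
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList("SET a 1", "SET b 2", "GET a"))); // [+OK, +OK, 1]
    }
}
```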

At this point, our study of the principle of the Redis threading model is over. I hope it has resolved your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to continue learning more related knowledge, please keep following the site — the editor will keep working to bring you more practical articles!
