2025-01-19 Update From: SLTechnology News & Howtos (shulou.com) > Database
I. Underlying data structures
1. Simple dynamic string (SDS)
In the Redis database, key-value pairs that contain strings are implemented with SDS under the hood.
For example:
127.0.0.1:6379> set msg hello
The key msg is a string object whose underlying implementation is an SDS holding "msg".
The value is also a string object, whose underlying implementation is an SDS holding "hello".
SDS structure definition:
SDS follows the C-string convention of ending with '\0'; the one extra byte reserved for the terminator is not counted in the len attribute, and appending the trailing null is handled automatically by SDS. The benefit of keeping the null terminator is that SDS can reuse functions from the C string library.
Advantages of SDS over C strings:
Constant-time string length:
Because SDS stores the string length in its len attribute, there is no need to traverse the string to compute the length.
Eliminating buffer overflows:
Because a C string does not record its own length, concatenating a string longer than the remaining space of the character array causes a buffer overflow.
Before modifying an SDS, Redis checks whether the remaining space can hold the modified string; if not, it expands the buffer first. The expansion rules are:
-- If the modified SDS length (the value of len) is less than 1MB, the program allocates as much unused space as len, i.e. free equals len. The actual length of the buf array is then len + free + 1, with the extra byte holding the trailing null character.
-- If the modified SDS length is greater than or equal to 1MB, the program allocates an extra 1MB of unused space.
Binary safety:
A C string ends with a null character, so if the stored data itself contains a null byte, that byte is mistaken for the end of the string. This restricts C strings to text data and rules out binary data such as images or video.
SDS treats the data in the buf array as raw binary, with no restrictions: what is written in is exactly what is read out.
2. Linked list
Linked lists are widely used in Redis; for example, the linked list is one of the underlying implementations of the list type.
Linked list and linked list node structure definition:
Linked list:
Node:
A linked list of three nodes:
Features of the Redis linked list:
Double-ended: each node has prev and next pointers.
Acyclic: the head node's prev and the tail node's next both point to NULL.
Head and tail pointers: getting the head or tail node through the list's head and tail pointers is O(1).
Length counter: the list's len attribute records the number of nodes.
Polymorphic: nodes use void* pointers to hold values, and type-specific functions can be set through the list's dup, free, and match fields, so a linked list can hold values of different types.
3. Dictionary
A dictionary stores key-value pairs; keys cannot repeat, similar to HashMap in Java.
Dictionary structure definition:
Dictionary:
The ht[2] array holds two hash tables: one stores the data, and the other is used during rehash. The hash table structure is as follows:
This is very similar to HashMap in Java: an array holding the entries, resolving hash collisions with separate chaining. Its dictEntry structure is:
This is also very similar to HashMap's inner Node class.
Putting it together, a complete dictionary structure diagram:
Rehash
As operations accumulate, the number of key-value pairs in the hash table gradually grows or shrinks. To keep the table's load factor within a reasonable range, the program expands or shrinks the hash table when it holds too many or too few key-value pairs. Expansion reduces hash collisions, preventing long chains from degrading query performance; shrinking saves memory.
The load factor is defined as:
load_factor = ht[0].used / ht[0].size
For a hash table of size 4 holding four key-value pairs:
load_factor = 4 / 4 = 1
The hash table is expanded when either of the following conditions holds:
1) The server is not executing a BGSAVE or BGREWRITEAOF command and the load factor is greater than or equal to 1.
2) The server is executing a BGSAVE or BGREWRITEAOF command and the load factor is greater than or equal to 5.
When the load factor drops below 0.1, the program automatically shrinks the hash table.
Progressive rehash
During a rehash, all key-value pairs in ht[0] must be moved to ht[1]. If ht[0] holds a very large amount of data, rehashing all of it to ht[1] in one go could stall the server for some time. Redis therefore rehashes the data from ht[0] to ht[1] gradually, over many steps.
Progressive rehash steps:
1) Allocate space for ht[1].
2) Set the dictionary's rehashidx to 0, marking the start of the rehash.
3) During the rehash, every add, delete, lookup, or update on the dictionary also rehashes all the entries in bucket rehashidx of ht[0] over to ht[1]; once that bucket is done, rehashidx is incremented by 1. Because the dictionary uses both ht[0] and ht[1] during this period, delete, lookup, and update operations consult both tables, checking ht[0] first and falling back to ht[1], while new additions go directly into ht[1].
4) Once all the data has been rehashed, rehashidx is set to -1, marking the rehash as complete.
4. Skip list
The skip list is an ordered data structure with O(log N) average search time and O(N) in the worst case. Nodes can also be processed in batches through sequential traversal. Redis uses the skip list as one of the underlying implementations of sorted sets.
Skip list structure definition:
Skip list:
Skip list node:
Example diagram:
level records the highest level among all the nodes in the skip list, and length records the number of nodes. For example, the scores of the three nodes above are 1.0, 2.0, and 3.0 respectively.
Levels:
A skip list node's level array can contain multiple elements, each holding a pointer to another node; these pointers speed up access to other nodes. In general, the more levels, the faster the access.
Each time a node is created, the program randomly generates a value between 1 and 32 as the size of the level array, i.e. the node's height.
Forward pointer:
Each level has a forward pointer toward the tail, used to traverse nodes from head to tail.
Span:
A level's span (level[i].span) records the distance between two nodes and is used to compute rank: while searching for a node, the spans of all the levels visited along the way are summed, and the result is the target node's rank in the skip list.
Backward pointer:
The node's backward pointer is used to traverse nodes from tail to head.
Score and member:
The score is a double-precision floating-point number; all nodes in the skip list are sorted in ascending score order, and by member when scores are equal.
A node's member object is a pointer to a string object backed by SDS. Member objects must be unique within a skip list; scores may repeat, and nodes with equal scores are ordered lexicographically by member.
5. Integer set
The integer set is one of the underlying implementations of the set type; it is used when a set contains only integer elements and the number of elements is small.
Structure definition:
The contents array holds the set's elements, sorted in ascending order with no duplicates.
Although contents is declared as int8_t, it never holds int8_t values; the true element type depends on the value of encoding, which can be INTSET_ENC_INT16, INTSET_ENC_INT32, or INTSET_ENC_INT64 (16, 32, and 64 being the number of bits each integer occupies).
Upgrade
When a newly added integer is wider than the type of all existing elements, the integer set is upgraded: every element is promoted to the new element's type.
Integer sets do not support downgrading; once upgraded, the encoding cannot be lowered.
6. Compressed list (ziplist)
The compressed list is one of the underlying implementations of the list and hash types. When a list contains only a few elements, each of which is a small integer or a short string, Redis uses the compressed list as its underlying implementation.
Compressed list structure definition
Compressed list:
zlbytes: the number of bytes of memory the entire compressed list occupies.
zltail: the number of bytes from the compressed list's start address to the tail node; adding zltail to the list's start pointer p yields the tail node's address.
zllen: the number of nodes the compressed list contains. When this value is less than UINT16_MAX (65535), it is the node count; when it equals UINT16_MAX, the list must be traversed to count the nodes.
entry: a list node.
zlend: marks the end of the compressed list.
Compressed list node:
Each node can hold a byte array or an integer value.
previous_entry_length: records the length of the previous node. Depending on that length, this field occupies either 1 or 5 bytes. Subtracting this value from the current node's address yields the previous node's address; combined with the tail-node address computed from zltail, the whole list can be traversed from back to front.
encoding: records the type and length of the data held in the node's content attribute.
content: holds the node's value, which can be a byte array or an integer.
Cascading updates
As mentioned earlier, previous_entry_length occupies 1 or 5 bytes depending on the length of the previous node: 1 byte when the previous node is shorter than 254 bytes, and 5 bytes when it is 254 bytes or longer.
Now consider this situation:
Suppose the compressed list holds several consecutive nodes, each between 250 and 253 bytes long, as shown in the figure:
Now we insert a new node of 254 bytes or more as the head of the compressed list:
Since E1's previous_entry_length field is 1 byte, which cannot store a previous-node length of 254 or more, it must expand to 5 bytes; this pushes E1's own length to 254 bytes or more, so E2 must expand as well, and so on down to the last node.
II. Objects
The previous sections covered all the major data structures Redis uses. Rather than exposing these data structures directly as the key-value database, Redis builds an object system on top of them, consisting of string, list, hash, set, and sorted set objects.
Redis uses objects to represent the keys and values in the database. Whenever we create a key-value pair, at least two objects are created: a key object and a value object. The key is always a string object, while the value can be a string, list, hash, set, or sorted set object.
Every object in Redis is represented by a redisObject structure:
1. Type and encoding
type: records the object's type, which can be any of the following
The TYPE command returns the type of value object corresponding to the database key:
127.0.0.1:6379> set msg hello
OK
127.0.0.1:6379> rpush list hello world
(integer) 2
127.0.0.1:6379> type msg
string
127.0.0.1:6379> type list
list
Type output for different types of values:
encoding: records which data structure the object uses as its underlying implementation, which can be any of the following
The encoding that can be used by each object:
You can use the OBJECT ENCODING command to view the encoding of the value object of a database key:
127.0.0.1:6379> object encoding msg
"embstr"
2. Objects in detail
1. String object
The encoding of a string object can be int, raw, or embstr.
int: the value is an integer that can be represented as a long.
embstr: the value is a string of 32 bytes or fewer, stored using SDS.
raw: the value is a string longer than 32 bytes, stored using SDS.
The difference between embstr and raw:
raw calls the memory allocator twice, once for the redisObject structure and once for the sdshdr structure, while embstr calls the allocator only once, allocating a single contiguous block that contains both the redisObject and the sdshdr.
Encoding conversion:
An int-encoded string is converted once it is modified to no longer be an integer, and an embstr-encoded string is converted to raw whenever any modification command is executed on it.
2. List object
The encoding of a list object can be ziplist or linkedlist.
ziplist: used when every string element the list holds is shorter than 64 bytes and there are fewer than 512 elements.
linkedlist: used when either ziplist constraint is not met.
The 64 and 512 limits can be changed through list-max-ziplist-value and list-max-ziplist-entries in the configuration file.
Structure diagram:
Addendum: the complete form of the object holding the string "three" in the two figures above is:
3. Hash object
The encoding of a hash object can be ziplist or hashtable.
ziplist: used when the strings of every key and value the hash holds are shorter than 64 bytes and there are fewer than 512 key-value pairs.
hashtable: used when either ziplist constraint is not met.
The 64 and 512 limits can be changed through hash-max-ziplist-value and hash-max-ziplist-entries in the configuration file.
127.0.0.1:6379> hmset profile name Tom age 25 career Programmer
Structure diagram:
4. Set object
The encoding of a set object can be intset or hashtable.
intset: used when every element is an integer and the number of elements does not exceed 512.
hashtable: used when either intset constraint is not met.
The 512 limit can be changed through set-max-intset-entries in the configuration file.
127.0.0.1:6379> sadd fruits apple banana cherry
Structure diagram:
5. Sorted set object
The encoding of a sorted set can be ziplist or skiplist.
ziplist: used when every element member is shorter than 64 bytes and there are fewer than 128 elements.
skiplist: used when either ziplist constraint is not met.
The 128 and 64 limits can be changed through zset-max-ziplist-entries and zset-max-ziplist-value in the configuration file.
127.0.0.1:6379> zadd price 8.5 apple 5.0 banana 6.0 cherry
Structure diagram:
Use ziplist:
Use skiplist:
In the zset structure, zsl is a pointer to a skip list and dict is a pointer to a dictionary.
The zsl skip list stores all the set's elements in ascending score order: each skip list node is one element, with the node's object attribute holding the member and the score attribute holding the score. The skip list lets the program run range operations on the set quickly; commands such as ZRANK and ZRANGE are built on it.
The dict dictionary maps members to scores: each key-value pair stores one element, with the key holding the member and the value holding the score. Through this dictionary, the program can look up a given member's score in O(1) time; the ZSCORE command, for example, is built on this.
zsl and dict share each element's member and score through pointers.
III. Persistence
Redis provides two different persistence methods. One is snapshotting, which writes all data that exists at a given moment to disk. The other is the append-only file (AOF), which copies write commands to disk as they are executed.
1. Snapshot persistence (RDB)
Redis can create snapshots to obtain a copy of the in-memory data at a point in time. After creating a snapshot, you can back it up, copy it to another server to create a replica with the same data, or keep it locally for use when restarting the server.
According to the configuration, the snapshot is written to the file named by dbfilename and stored under the path named by dir.
If Redis, the operating system, or the hardware crashes before the next snapshot is created, Redis loses all data written since the most recent snapshot.
Ways a snapshot gets created:
The client sends a BGSAVE command. On platforms that support it, Redis calls fork to create a child process, which writes the snapshot to disk while the parent process keeps handling command requests.
The client sends a SAVE command. A server that receives SAVE does not respond to any other command until the snapshot is complete. SAVE is rarely used, typically only when there is not enough memory to run BGSAVE.
If save rules are configured, e.g. save 60 10000, Redis automatically triggers BGSAVE once 10000 writes have occurred within 60 seconds, counting from the last snapshot. When several save rules are set, BGSAVE fires as soon as any one of them is satisfied.
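A redis.conf fragment matching these rules (the first two save lines are the stock defaults; the third is the example above):

```
# Take a snapshot if at least 1 write occurred in 900 seconds,
# 10 writes in 300 seconds, or 10000 writes in 60 seconds.
save 900 1
save 300 10
save 60 10000
# Where the snapshot goes: <dir>/<dbfilename>
dbfilename dump.rdb
dir ./
```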
When Redis is asked to shut down via the SHUTDOWN command, or receives a standard TERM signal, it runs SAVE and shuts down once SAVE completes.
When one Redis server receives a SYNC command from another Redis server, it runs BGSAVE unless a BGSAVE is already in progress or has only just finished.
2. AOF persistence
AOF persistence writes the executed write commands to the end of the AOF file, so Redis only needs to execute all the write commands contained in the AOF file once to recover the dataset.
AOF is enabled by setting appendonly yes in the configuration file.
Configuring the sync frequency:
appendfsync always: every write command is synced to disk, which can seriously slow Redis down.
appendfsync everysec: sync once per second, flushing the accumulated commands to disk explicitly.
appendfsync no: let the operating system decide when to sync.
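As a redis.conf fragment:

```
# Turn on AOF persistence.
appendonly yes
# Sync policy: always | everysec (the default) | no
appendfsync everysec
```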
Problems with AOF:
As Redis keeps running, the AOF file keeps growing and may even use up all the disk space. Another problem is that if the AOF file becomes too large, replaying every write command at restart becomes very time-consuming.
AOF rewriting:
The user can send a BGREWRITEAOF command to rewrite the AOF file; on receiving it, Redis forks a child process that rewrites the AOF into a smaller file.
The auto-aof-rewrite-percentage and auto-aof-rewrite-min-size options make BGREWRITEAOF automatic. For example, with auto-aof-rewrite-percentage 100 and auto-aof-rewrite-min-size 64mb, and AOF persistence enabled, Redis runs BGREWRITEAOF once the AOF file is larger than 64mb and has grown 100% larger than (i.e. to twice) its size after the last rewrite.
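As a redis.conf fragment:

```
# Rewrite automatically once the AOF is 100% larger than it was
# after the last rewrite, and at least 64mb in size.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```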
IV. Multi-machine databases
1. Master-slave replication
In relational databases, one master server usually propagates updates to multiple slave servers, and the slaves handle all read requests. Redis implements its replication feature in the same way.
Users can ask one server to replicate another server by executing the SLAVEOF command or setting the slaveof option. We call the replicated server the master server, while the one that replicates the master server is called the slave server.
If you run:
127.0.0.1:6379> slaveof 127.0.0.1 6380
then server 127.0.0.1:6379 becomes a slave of server 127.0.0.1:6380, and 127.0.0.1:6380 becomes the master.
1.1 How replication works
Redis replication consists of two operations, synchronization (sync/psync) and command propagation:
Synchronization: brings the slave's database state up to the master's current state. sync is the legacy command; psync replaced it starting with version 2.8.
Command propagation: after synchronization completes, the master sends its own write commands to the slave for execution.
Old-version replication process:
The main difference between the old and new versions is how a dropped and re-established connection is handled. In the old version, sync must be re-sent after reconnecting, and on receiving it the master re-runs BGSAVE to create a complete RDB file. As the figure above shows, the master may have added only three pieces of data during the disconnection, yet it still has to re-run BGSAVE, which is not cost-effective.
New version replication process:
The psync command has two modes, full resynchronization and partial resynchronization:
Full resynchronization: used for initial replication. The steps are essentially the same as sync: the master creates and sends an RDB file, then sends the write commands accumulated in its buffer to the slave.
Partial resynchronization: used for replication after a disconnection. On reconnecting, if conditions allow, the master only needs to send the write commands issued during the disconnection to the slave, without creating a complete RDB file.
1.2 How partial resynchronization works
Partial resynchronization is built from three parts:
The master's replication offset and the slave's replication offset.
The master's replication backlog buffer.
The server run ID.
1.2.1 Replication offsets
Both sides of the replication maintain a replication offset.
Each time the master propagates N bytes of data to the slave, it adds N to its own replication offset.
Each time the slave receives N bytes propagated from the master, it adds N to its own replication offset.
By comparing the master's and slave's replication offsets, you can tell whether the two are in a consistent state.
1.2.2 Replication backlog buffer
The replication backlog buffer is a fixed-length FIFO queue maintained by the master, 1MB by default. When the master propagates commands, it not only sends them to all slaves but also enqueues them into the backlog buffer, recording the replication offset for every byte in the queue.
When a slave reconnects to the master, it sends its own replication offset, and the master decides which synchronization operation to perform based on it:
If the data after that offset still exists in the backlog buffer, the master performs a partial resynchronization.
If the data after that offset is no longer in the buffer, the master performs a full resynchronization.
1.2.3 Server run ID
Each Redis server has a run ID, automatically generated at startup as 40 random hexadecimal characters.
When a slave replicates a master for the first time, the master sends its run ID and the slave saves it. When the slave reconnects after a disconnection, it sends the saved master run ID back.
If the ID the slave sends matches the master's own ID, the master the slave replicated before the disconnection is this master, and the master can attempt a partial resynchronization.
If the ID differs, the master performs a full resynchronization with the slave.
1.3 Master-slave chains
When replication crosses the Internet or different data centers, too many slaves can overwhelm the master's network. Redis masters and slaves are not fundamentally different, so a slave can have slaves of its own, like this:
2. Sentinel mode
Sentinel is Redis's high-availability solution. A Sentinel system made up of one or more Sentinel instances can monitor any number of master servers along with all their slaves; when a master goes offline, the system automatically promotes one of that master's slaves to be the new master.
The main steps when Sentinel starts and runs:
1) Read the monitored master servers from the loaded Sentinel configuration file.
2) Create network connections to each master: for every monitored master, Sentinel creates two connections:
- a command connection, dedicated to sending commands to the master and receiving replies;
- a subscription connection, dedicated to subscribing to the master's __sentinel__:hello channel.
By default, Sentinel sends an INFO command over the command connection every ten seconds and learns the master's current state from the reply, including the master's own run ID, its role, and the slaves under it.
3) Create command and subscription connections to the slaves, based on the slave information obtained from the master.
Sentinel also sends INFO to every slave every ten seconds and obtains from the reply: the IP and port of the slave's master, the state of the master link, and the slave's own run ID, role, priority, and replication offset, updating its stored slave information accordingly.
4) Every two seconds, publish a message in the following format over the command connection to the __sentinel__:hello channel of every monitored master and slave:
The fields starting with s_ describe the Sentinel itself; those starting with m_ describe a master: if the message is sent to a master they describe that master, and if it is sent to a slave they describe that slave's master.
5) Receive messages from those channels. Thanks to the subscriptions made in the steps above, each Sentinel both publishes to and receives from the __sentinel__:hello channels of the masters and slaves it monitors, so from the received messages it can discover the other Sentinels watching the same master.
6) Using that information about other Sentinels, establish command connections to them; eventually all the Sentinels monitoring the same master form an interconnected network.
7) Detect subjective offline state. By default, Sentinel sends a PING once per second to every instance it holds a command connection to (masters, slaves, and other Sentinels) and judges from the reply whether the instance is online.
The down-after-milliseconds option in the Sentinel configuration file sets the window for this judgment: if an instance keeps returning invalid replies for down-after-milliseconds milliseconds, Sentinel marks it subjectively offline.
8) When Sentinel marks a master subjectively offline, it asks the other Sentinels monitoring that master. Once it has collected enough offline judgments from them, Sentinel marks the master objectively offline and begins a failover.
9) When a master is judged objectively offline, the Sentinels monitoring it negotiate to elect a leader Sentinel, and that leader performs the failover.
10) Fail over: pick one of the offline master's slaves and convert it into the new master; repoint all the other slaves at the new master; and demote the offline master to a slave of the new master, so that when it comes back online it replicates the new master.
The new master is selected based on:
3. Cluster
Redis Cluster is Redis's distributed database solution. The cluster shares data through sharding and provides replication and failover.
1. Nodes
Each Redis server is called a node, and a Redis cluster is usually made up of multiple nodes. The command:
CLUSTER MEET ip port
adds the specified node to the cluster the current node belongs to.
2. Slot assignment
A Redis cluster stores the database's key-value pairs by sharding: the cluster's whole database is divided into 16384 slots, each key belongs to one of the 16384 slots, and each node can handle from 0 up to all 16384 slots. When every one of the 16384 slots has a node handling it, the cluster is online; otherwise it is offline.
Sending the command:
CLUSTER ADDSLOTS slot ...
assigns one or more slots to a node.
A node records not only the slots it handles itself; it also sends its own slot information to the other nodes and records the assignment of every slot in the cluster.
3. Executing commands
When a client sends a database command to a node, the receiving node computes which slot the command's key belongs to. If that slot is assigned to the current node, it processes the command directly; otherwise it returns a MOVED error that redirects the client to the correct node.
You can use the command:
CLUSTER KEYSLOT key
to find out which slot a given key belongs to.
4. Resharding
Redis Cluster's resharding operation can move any number of slots already assigned to one node over to another node, and the key-value pairs belonging to those slots move to the target node as well.
5, replication and failover
The node in Redis is divided into master node and slave node, the master node is used for processing slot, and the slave node is used to copy the master node, and when the master node goes offline, it processes the request in place of the master node to become the new master node. The specific steps are as follows:
1) Select one of all slave nodes to become the new master node.
2) the new master node removes all slot assignments to the offline master node and assigns these slots to itself.
3) the new master node broadcasts a PONG message to the cluster so that the other nodes know it has taken over the offline master and is now responsible for the slots the offline master used to handle.
Elect a new master node:
1) the cluster's configuration epoch is a self-incrementing counter with an initial value of 0; whenever a node in the cluster starts a failover operation, the epoch is incremented by 1.
2) within each configuration epoch, every master node in the cluster that is responsible for slots has one vote, and the first slave node to ask a master for its vote receives it.
3) when a slave node finds that the master it replicates has gone offline, it broadcasts a message asking every master node that receives it and has the right to vote (that is, is responsible for slots) to vote for it.
4) if the number of votes received by a slave node is more than half of the total number of voting nodes, the slave node is elected as the new master node.
5) if no slave node collects enough votes, the cluster enters a new configuration epoch and holds another election, until a new master node is elected.
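The voting rules above can be simulated in a few lines of Python. This is a deliberately simplified, deterministic sketch (no network, no timeouts): each master votes at most once per epoch, first come first served, and a candidate wins with more than half of the masters' votes:

```python
# Simplified simulation of the epoch-based election described above.
# Masters vote at most once per configuration epoch; the first slave
# to request a master's vote gets it; a majority wins.

def run_election(masters, candidates):
    """Return (winner, epoch); the epoch rises until someone wins."""
    epoch = 0
    majority = len(masters) // 2 + 1
    while True:
        epoch += 1                              # new failover attempt
        votes = {c: 0 for c in candidates}
        voted = set()                           # masters that voted this epoch
        for candidate in candidates:            # candidates request votes
            for master in masters:
                if master not in voted:
                    voted.add(master)           # one vote per epoch
                    votes[candidate] += 1
        for candidate, count in votes.items():
            if count >= majority:
                return candidate, epoch

winner, epoch = run_election(["m1", "m2", "m3"], ["s1", "s2"])
```

In this deterministic toy, the first requester always wins in epoch 1; in a real cluster, message timing decides which slave asks first, and split votes push the cluster into the next epoch.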
You can use the command:
CLUSTER REPLICATE node_id
Make the node that receives the command a slave to the node specified by node_id.
Fifth, transactions
Redis implements transactions through the MULTI, EXEC, WATCH, and DISCARD commands. A transaction packages multiple command requests and then executes them all at once, in order; during the execution of the transaction, the server will not interrupt it to run command requests from other clients, but will execute every command in the transaction before processing other clients' requests.
1, the implementation of the transaction
A transaction starts with the multi command, and the exec command commits the transaction to the server for execution.
1) the multi command can switch the client executing the command from a non-transactional state to a transactional state.
2) when a client is in the non-transactional state, the commands it sends are executed by the server immediately; in the transactional state, the server acts differently depending on the command sent:
If the client sends one of the four commands EXEC, DISCARD, WATCH, or MULTI, the server executes it immediately.
If any other command is sent, the server places it in the transaction queue and returns a QUEUED reply to the client.
3) when a client in the transactional state sends the EXEC command, the server executes it immediately: it traverses the client's transaction queue, executes every command saved in the queue, and returns the results to the client.
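The state machine described in steps 1)–3) can be sketched as a toy client in Python. This is an illustrative model of the queueing behavior, not a real Redis client; the command set (`SET`/`GET`) is trimmed to the minimum:

```python
# Toy sketch of the MULTI/EXEC state machine: MULTI switches the client
# into the transactional state, later commands are queued with a QUEUED
# reply, and EXEC runs the whole queue in order.

class TxClient:
    def __init__(self, db):
        self.db = db
        self.in_multi = False
        self.queue = []          # transaction queue

    def send(self, cmd, *args):
        if cmd == "MULTI":
            self.in_multi = True
            return "OK"
        if cmd == "EXEC":
            self.in_multi = False
            results = [self._run(name, a) for name, a in self.queue]
            self.queue = []
            return results
        if self.in_multi:        # transactional state: queue the command
            self.queue.append((cmd, args))
            return "QUEUED"
        return self._run(cmd, args)   # non-transactional: run at once

    def _run(self, cmd, args):
        if cmd == "SET":
            self.db[args[0]] = args[1]
            return "OK"
        if cmd == "GET":
            return self.db.get(args[0])
        return "ERR"

c = TxClient({})
c.send("MULTI")
c.send("SET", "msg", "hello")    # replied with QUEUED, not executed yet
```

Only when `c.send("EXEC")` is called does the queued `SET` actually touch the database, and the results come back as a list, one entry per queued command.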
2, implementation of the WATCH command
The WATCH command is an optimistic lock: it monitors any number of database keys before the EXEC command is executed, and when EXEC runs, the server checks whether at least one of the monitored keys has been modified. If so, the server refuses to execute the transaction and returns an empty reply to the client to indicate that the transaction failed.
Each Redis database keeps a watched_keys dictionary whose keys are the database keys monitored by WATCH and whose values are linked lists recording every client monitoring that key. Through the watched_keys dictionary, the server knows exactly which database keys are being monitored and which clients are monitoring them.
After executing any command that modifies the database, the server checks watched_keys to see whether a monitored key has been modified; if so, it turns on the REDIS_DIRTY_CAS flag of every client monitoring that key, marking that client's transaction safety as broken.
When the server receives the EXEC command, it decides whether to execute the transaction based on whether the client's REDIS_DIRTY_CAS flag is turned on.
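The watched_keys mechanism can be sketched as follows. This is an illustrative server-side model (a plain dict of key → watching clients plus a dirty set standing in for the REDIS_DIRTY_CAS flag), not Redis's actual data structures:

```python
# Sketch of the watched_keys mechanism: the server maps each watched
# key to the clients watching it; any write to a watched key turns on
# the watchers' dirty-CAS flag, and EXEC refuses a dirty transaction.

class WatchServer:
    def __init__(self):
        self.db = {}
        self.watched_keys = {}   # key -> set of clients watching it
        self.dirty = set()       # clients whose REDIS_DIRTY_CAS flag is on

    def watch(self, client, key):
        self.watched_keys.setdefault(key, set()).add(client)

    def set(self, key, value):
        self.db[key] = value
        # "touch" the key: every watcher's transaction is now unsafe
        for client in self.watched_keys.get(key, ()):
            self.dirty.add(client)

    def exec(self, client, commands):
        if client in self.dirty:
            return None          # empty reply: transaction refused
        for key, value in commands:
            self.db[key] = value
        return "OK"

server = WatchServer()
server.watch("client1", "msg")
server.set("msg", "changed")     # modification flags client1 as dirty
```

Because `client1` watched `msg` before it was modified, its EXEC is refused with an empty reply, while a client whose flag is off commits normally.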
Reference: "Redis Design and Implementation", "Redis in Action"