What is a Redis cluster
Redis Cluster is a distributed, fault-tolerant Redis implementation; the functionality available in cluster mode is a subset of what an ordinary standalone Redis instance provides.
There are no central or proxy nodes in a Redis cluster, and one of the main design goals of the cluster is linear scalability.
To preserve consistency, a Redis cluster sacrifices some fault tolerance: the system keeps data as consistent as possible while offering only limited resistance to network splits and node failures.
Please note that this tutorial applies to Redis 3.0 and above.
If you plan to deploy a cluster, we recommend reading this document first.
Introduction to Redis Cluster
A Redis cluster is an assembly of multiple Redis nodes that share data among themselves.
A Redis cluster does not support commands that operate on multiple keys living on different nodes, because that would require moving data between nodes, which defeats Redis's performance and can lead to unpredictable errors under high load.
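One exception worth noting: multi-key commands are accepted when all of the keys hash to the same slot, which can be forced by giving the keys a common hash tag (the part between { and }). A small sketch, assuming the example cluster address used later in this tutorial:

redis-cli -c -h 192.168.19.132 -p 7000 MSET "{user:1000}:name" Alice "{user:1000}:email" alice@example.com
redis-cli -c -h 192.168.19.132 -p 7000 MGET "{user:1000}:name" "{user:1000}:email"

Both keys contain the hash tag {user:1000}, so they map to the same slot and the multi-key commands succeed.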
A Redis cluster provides a certain degree of availability through partitioning: in practice it continues to process commands even when a node is down or unreachable. Advantages of a Redis cluster:
Automatically split the data to different nodes.
The cluster can continue to process commands when some of its nodes fail or are unreachable.
Data sharding in Redis Cluster
Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots.
A Redis cluster has 16384 hash slots. To decide which slot a key goes to, the cluster takes the CRC16 checksum of the key modulo 16384. Each node of the cluster is responsible for part of the hash slots. For example, if the current cluster has three nodes, then:
Node A contains hash slots 0 to 5500.
Node B contains hash slots 5501 to 11000.
Node C contains hash slots 11001 to 16383.
This structure makes it easy to add or remove nodes. For example, to add a new node D, some slots are moved from nodes A, B and C to D; to remove node A, the slots in A are moved to nodes B and C, and then node A, which no longer holds any slots, is removed from the cluster. Because moving hash slots from one node to another does not require stopping service, adding or removing nodes, or changing how many hash slots a node holds, never makes the cluster unavailable. The example below shows how to ask a node which slot a key maps to.
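To see the slot computation in action, any node can be asked to evaluate CRC16(key) mod 16384 with the CLUSTER KEYSLOT command. A minimal sketch, using the example node address from the deployment later in this tutorial:

redis-cli -h 192.168.19.132 -p 7000 CLUSTER KEYSLOT mykey

The reply is an integer between 0 and 16383; with the three-node layout above, a key in slot 3000 would live on node A, a key in slot 7000 on node B, and a key in slot 13000 on node C.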
Master-Slave replication Model of Redis Cluster
To keep the cluster available when some nodes fail or cannot communicate with the majority, the cluster uses a master-slave replication model in which each master node has a replica.
In our example, without the replication model, if node B fails the whole cluster becomes unavailable because the slots in the range 5501-11000 are missing.
However, if when the cluster is created (or at some later time) we add a slave node A1, B1 and C1 for each master, the cluster will consist of three master nodes and three slave nodes. Then, after node B fails, the cluster elects B1 as the new master and B1 continues to serve, so the cluster does not become unavailable because of missing slots.
However, if both B and B1 fail, the cluster becomes unavailable.
Redis consistency guarantee
Redis does not guarantee strong consistency of data. This means that in practice, the cluster may lose write operations under certain conditions.
The first reason is that the cluster uses asynchronous replication. A write proceeds as follows:
The client writes a command to master node B.
Master node B returns the command status to the client.
Master node B propagates the write operation to its slave nodes B1, B2 and B3.
The master replicates the command only after the reply has been returned to the client, because if every command request had to wait for replication to complete, the rate at which the master can process commands would drop sharply; we have to make a trade-off between performance and consistency. Note: Redis Cluster may provide a synchronous write option in the future. Another situation in which a Redis cluster may lose commands is when a network partition occurs and a client ends up isolated with a minority of instances that includes at least one master node.
For example, suppose the cluster consists of six nodes A, B, C, A1, B1 and C1, where A, B and C are master nodes and A1, B1 and C1 are their respective slaves, and there is also a client Z1. If a network partition occurs, the cluster may split into two sides: the majority side contains nodes A, C, A1, B1 and C1, while the minority side contains node B and client Z1.
Z1 can still write to master node B. If the partition lasts only a short time, the cluster continues to operate normally; but if the partition lasts long enough for the majority side to elect B1 as the new master, the data Z1 wrote to B will be lost.
Note that during a network split there is a limit to how long client Z1 can keep sending write commands to master node B. This limit is called the node timeout and is an important configuration option for Redis clusters: once the node timeout has elapsed, a master that cannot reach the majority of masters stops accepting writes.
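For reference, the node timeout is controlled by the cluster-node-timeout directive (in milliseconds) in the Redis configuration file; the deployment later in this tutorial uses 15000. A minimal excerpt of the relevant settings, shown only as a sketch:

cluster-enabled yes
# A master cut off from the majority for longer than this stops accepting writes,
# and a failed master can be replaced by its slave after this period.
cluster-node-timeout 15000

The same values appear again in the per-instance configuration files created in the setup steps below.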
Build and use Redis clusters
The first thing needed to build a cluster is a number of Redis instances running in cluster mode. A cluster is not made up of ordinary Redis instances: cluster mode has to be enabled in the configuration, and only then can a Redis instance use the cluster-specific commands and features.
Cluster features currently supported by redis:
1) Automatic node discovery
2) Slave -> master election and cluster fault tolerance
3) Hot resharding: online slot migration
4) Cluster management: the CLUSTER xxx commands
5) Configuration-file-based cluster management (nodes-port.conf)
6) The ASK redirection / MOVED redirection mechanism (illustrated below)
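As an illustration of the redirection mechanism: when a client that is not running in cluster mode asks for a key whose slot lives on another node, the node replies with a MOVED error that points at the owner of the slot; ASK is the analogous one-shot redirection used while a slot is being migrated. A hypothetical reply (slot number and address are for illustration only):

192.168.19.132:7000> GET somekey
(error) MOVED 11298 192.168.19.132:7002

A cluster-aware client (for example redis-cli started with -c) follows the redirection automatically.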
1) redis-cluster architecture diagram
Architectural details:
(1) All redis nodes are interconnected with each other (PING-PONG mechanism), and a binary protocol is used internally to optimize transmission speed and bandwidth.
(2) A node is considered failed only when more than half of the nodes in the cluster detect the failure.
(3) Clients connect directly to the redis nodes without an intermediate proxy layer. A client does not need to connect to all the nodes in the cluster, only to any one available node.
(4) redis-cluster maps all physical nodes to the slots [0-16383], and the cluster maintains the node <-> slot <-> value mapping.
2) redis-cluster election: fault tolerance
(1) The election process involves all the masters in the cluster. If more than half of the master nodes fail to communicate with a given master for longer than cluster-node-timeout, that master is considered dead.
(2) When is the whole cluster unavailable (cluster_state:fail)? When the cluster is unavailable, all operations on it fail and clients receive the error ((error) CLUSTERDOWN The cluster is down).
A: If any master in the cluster goes down and that master has no slave, the cluster enters the fail state. This can also be understood as the cluster entering the fail state whenever the slot mapping [0-16383] is incomplete.
B: If more than half of the masters go down, the cluster enters the fail state regardless of whether they have slaves.
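A quick way to see whether the cluster is in the ok or fail state is to look at the cluster_state field of CLUSTER INFO; a small sketch, using the node addresses from the deployment below:

redis-cli -h 192.168.19.132 -p 7000 CLUSTER INFO | grep cluster_state

The full CLUSTER INFO output appears again in the testing section at the end of this tutorial.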
I. Environment
OS: CentOS 7    IP: 192.168.19.132    Redis: 3.2.9    gem-redis: 3.2.2
II. Setting up the cluster
1. Download redis-3.2.9.tar.gz locally
[root@zookeeper ~]# cd /usr/local/src/
[root@zookeeper src]# wget http://download.redis.io/releases/redis-3.2.9.tar.gz
2. Installation
[root@zookeeper ~]# yum -y install tcl-8.5*
[root@zookeeper src]# tar zxf redis-3.2.9.tar.gz -C /usr/local/
[root@zookeeper src]# ln -s /usr/local/redis-3.2.9 /usr/local/redis
[root@zookeeper src]# cd /usr/local/redis
The first way:
[root@zookeeper redis]# make MALLOC=libc && make install
[root@zookeeper redis]# make test    (optional, takes a long time)
...
\o/ All tests passed without errors!
Cleaning up: may take some time... OK
make[1]: Leaving directory `/usr/local/redis-3.2.9/src'
The second way: after make completes, run the install step. The default installation path is /usr/local/bin; here we put the installation directory under /usr/local/redis and specify it with PREFIX:
[root@zookeeper redis]# make && make PREFIX=/usr/local/redis install
Add the redis executable directory to the environment variables by editing ~/.bash_profile:
[root@zookeeper ~]# vim ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/usr/local/redis/bin
export PATH
[root@zookeeper ~]# source ~/.bash_profile
3. Create a folder
[root@zookeeper redis]# mkdir -p /data/cluster
[root@zookeeper redis]# cd /data/cluster/
[root@zookeeper cluster]# mkdir 7000 7001 7002 7003 7004 7005
4. Copy and modify the configuration file
[root@zookeeper cluster]# mkdir /var/log/redis/
[root@zookeeper cluster]# cp /usr/local/redis/redis.conf 7000/redis-7000.conf
[root@zookeeper cluster]# vim 7000/redis-7000.conf
...
bind 192.168.19.132
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 15000
pidfile /var/run/redis_7000.pid
logfile "/var/log/redis/redis-7000.log"
After the modification, copy this configuration file to all the nodes and change the port number and the cluster-config-file entry in each copy:
[root@zookeeper cluster]# cp 7000/redis-7000.conf 7001/redis-7001.conf
[root@zookeeper cluster]# cp 7000/redis-7000.conf 7002/redis-7002.conf
[root@zookeeper cluster]# cp 7000/redis-7000.conf 7003/redis-7003.conf
[root@zookeeper cluster]# cp 7000/redis-7000.conf 7004/redis-7004.conf
[root@zookeeper cluster]# cp 7000/redis-7000.conf 7005/redis-7005.conf
Edit each copy (refer to redis-7000.conf):
[root@zookeeper cluster]# vim 7001/redis-7001.conf
[root@zookeeper cluster]# vim 7002/redis-7002.conf
[root@zookeeper cluster]# vim 7003/redis-7003.conf
[root@zookeeper cluster]# vim 7004/redis-7004.conf
[root@zookeeper cluster]# vim 7005/redis-7005.conf
5. Start 6 instances
[root@zookeeper cluster]# redis-server 7000/redis-7000.conf
[root@zookeeper cluster]# redis-server 7001/redis-7001.conf
[root@zookeeper cluster]# redis-server 7002/redis-7002.conf
[root@zookeeper cluster]# redis-server 7003/redis-7003.conf
[root@zookeeper cluster]# redis-server 7004/redis-7004.conf
[root@zookeeper cluster]# redis-server 7005/redis-7005.conf
To stop an instance:
[root@zookeeper cluster]# redis-cli -p <port number> shutdown
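Equivalently, since the six directories follow the same naming scheme, the instances can be started in a loop; a small sketch, assuming the layout created in step 3:

for port in 7000 7001 7002 7003 7004 7005; do
    redis-server ${port}/redis-${port}.conf
done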
Log in with the redis-cli -c -h -p command:
-c connects in cluster mode
-h is followed by the host address
-p is followed by the port number
If Redis is bound to 127.0.0.1 you can omit the -h parameter. Without -c, the client will not follow redirections automatically.
For example: if the client connects to port 7000 but the key being set belongs to a slot served by 7001, the server reports an error telling the client to go to 7001. When started with -c, the client automatically switches to 7001 and the write is saved there.
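A sketch of the difference, reusing the standard example key foo (the slot number and target node below are illustrative and depend on the key and on how the slots were assigned):

# Without -c the client only reports the redirection error:
[root@zookeeper cluster]# redis-cli -h 192.168.19.132 -p 7000 set foo bar
(error) MOVED 12182 192.168.19.132:7002
# With -c the client follows the redirection and the write succeeds:
[root@zookeeper cluster]# redis-cli -c -h 192.168.19.132 -p 7000 set foo bar
-> Redirected to slot [12182] located at 192.168.19.132:7002
OK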
6. Check the startup status of the redis process
[root@zookeeper cluster]# ps -ef | grep redis
root     18839     1  0 22:58 ?        00:00:00 redis-server 192.168.19.132:7000 [cluster]
root     18843     1  0 22:58 ?        00:00:00 redis-server 192.168.19.132:7001 [cluster]
root     18847     1  0 22:58 ?        00:00:00 redis-server 192.168.19.132:7002 [cluster]
root     18851     1  0 22:59 ?        00:00:00 redis-server 192.168.19.132:7003 [cluster]
root     18855     1  0 22:59 ?        00:00:00 redis-server 192.168.19.132:7004 [cluster]
root     18859     1  0 22:59 ?        00:00:00 redis-server 192.168.19.132:7005 [cluster]
root     18865  2891  0 22:59 pts/1    00:00:00 grep --color=auto redis
[root@zookeeper cluster]#
7. Deploy the cluster
7.1. Install the Ruby dependency and return to the source directory
[root@zookeeper src]# yum install ruby rubygems -y
[root@zookeeper src]# wget https://rubygems.org/downloads/redis-3.2.2.gem
Install the cluster management tool. The Redis author is apparently a Ruby enthusiast and wrote the Ruby client himself; the cluster management functionality is not embedded in the Redis code, so the author wrote a separate management script called redis-trib. redis-trib relies on Ruby and RubyGems, as well as the redis gem. You can first use the which command to check whether ruby and gem are installed, and use gem list --local to check whether the redis gem is installed locally.
[root@zookeeper src]# gem install -l redis-3.2.2.gem
Successfully installed redis-3.2.2
Parsing documentation for redis-3.2.2
Installing ri documentation for redis-3.2.2
1 gem installed
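As mentioned above, the tool-chain can be verified before installing the gem; a quick sketch:

[root@zookeeper src]# which ruby gem
[root@zookeeper src]# gem list --local | grep redis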
Copy the cluster manager to /usr/local/bin/:
[root@zookeeper src]# cp /usr/local/redis/src/redis-trib.rb /usr/local/bin/redis-trib
You can see that redis-trib.rb has the following features:
1. Create: create a cluster
2. Check: check the cluster
3. Info: view cluster information
4. Fix: repair the cluster
5. Reshard: migrate slots online
6. Rebalance: balance the number of slots across cluster nodes
7. Add-node: add new nodes to the cluster
8. Del-node: delete a node from the cluster
9. Set-timeout: sets the timeout for heartbeat connections between cluster nodes
10. Call: execute commands on all nodes in the cluster
11. Import: import external redis data into the cluster
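Each subcommand is invoked as redis-trib <subcommand> [options] <host:port ...>. Two hypothetical invocations against the cluster built below, shown only as a sketch:

redis-trib info 192.168.19.132:7000                 # summary of masters, slots and replicas
redis-trib call 192.168.19.132:7000 cluster info    # run a command on every node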
8. Create a cluster
[root@zookeeper cluster]# redis-trib create --replicas 1 192.168.19.132:7000 192.168.19.132:7001 192.168.19.132:7002 192.168.19.132:7003 192.168.19.132:7004 192.168.19.132:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.19.132:7000
192.168.19.132:7001
192.168.19.132:7002
Adding replica 192.168.19.132:7003 to 192.168.19.132:7000
Adding replica 192.168.19.132:7004 to 192.168.19.132:7001
Adding replica 192.168.19.132:7005 to 192.168.19.132:7002
M: 3546a9930ce08543731c4d49ae8609d75b0b8193 192.168.19.132:7000
   slots:0-5460 (5461 slots) master
M: 1dd532b0f41b98574b6cd355fa58a2773c9da8fe 192.168.19.132:7001
   slots:5461-10922 (5462 slots) master
M: 2900e315a4a01df8609eafe0f9fd2a1d779ecc69 192.168.19.132:7002
   slots:10923-16383 (5461 slots) master
S: 71c8cea8e3e9c913eb7c09bd3f95c03985938eca 192.168.19.132:7003
   replicates 3546a9930ce08543731c4d49ae8609d75b0b8193
S: 046a02ea253d8912b87c13e98b28f81e6c54c0b1 192.168.19.132:7004
   replicates 1dd532b0f41b98574b6cd355fa58a2773c9da8fe
S: 8a666ed58930673b7dfc6d005c2a937751350f77 192.168.19.132:7005
   replicates 2900e315a4a01df8609eafe0f9fd2a1d779ecc69
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 192.168.19.132:7000)
M: 3da69162cde5884f21cec07f6f812ffbdda0cfc4 192.168.19.132:7000
   slots:0-10922 (10923 slots) master
   3 additional replica(s)
M: d30be1d1232e55f3cc69d8d11e9eb9a870160ac1 192.168.19.132:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 6bd6589a69ce37da5335ffd10b042ce0b02e3247 192.168.19.132:7004
   slots: (0 slots) slave
   replicates d30be1d1232e55f3cc69d8d11e9eb9a870160ac1
S: 12d7db519133b96bac51b79204f69eabdfe75627 192.168.19.132:7002
   slots: (0 slots) slave
   replicates 3da69162cde5884f21cec07f6f812ffbdda0cfc4
S: 8a9d6189b42bef127ab388e221d8225938c3f038 192.168.19.132:7003
   slots: (0 slots) slave
   replicates 3da69162cde5884f21cec07f6f812ffbdda0cfc4
S: 2cfb927fc17988be6fee6b5eb1249e2789a76f82 192.168.19.132:7005
   slots: (0 slots) slave
   replicates 3da69162cde5884f21cec07f6f812ffbdda0cfc4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
When we agree with the plan, we type yes at the prompt (Can I set the above configuration? (type 'yes' to accept): yes), and redis-trib starts performing the node handshake and slot allocation operations.
The final output report shows that all 16384 slots have been allocated and the cluster was created successfully. Note that the node addresses given to redis-trib.rb must be nodes that do not yet contain any slots or data, otherwise it will refuse to create the cluster.
The --replicas parameter specifies how many slave nodes are configured for each master node in the cluster; here it is set to 1. The order of the node list determines the master and slave roles: the master nodes come first, then the slave nodes.
The creation process is as follows:
1. First, create a ClusterNode object for each node; this includes connecting to each node, checking that each node is independent and its db is empty, and executing the load_info method to import the node's information.
2. Check that the number of master nodes passed in is at least 3; fewer than 3 nodes cannot form a cluster.
3. Calculate the number of slots to allocate to each master and assign slaves to the masters. The allocation algorithm is roughly as follows:
First, group the nodes by host, so that master nodes can be spread across as many hosts as possible.
Repeatedly traverse the host lists, popping one node from each host's list and appending it to the interleaved array, until all nodes have been popped.
The master list consists of the first entries of the interleaved array, as many as there are masters; it is saved in the masters array.
Calculate the number of slots each master is responsible for, saved in slots_per_node, by dividing the total number of slots by the number of masters.
Iterate over the masters array, allocating slots_per_node slots to each master; the last master takes whatever remains up to slot 16383.
Next, assign slaves to the masters. The allocation algorithm tries to ensure that a master and its slaves are not on the same host. After the masters have been given the specified number of slaves, any extra nodes are also assigned to masters. The algorithm iterates over the masters array twice.
On the first pass over the masters array, each master is given replicas slaves from the remaining node list. Each slave is chosen as the first remaining node whose host differs from the master's; if there is no such node, the first node in the remaining list is taken.
The second pass handles the case where the number of nodes divided by replicas is not an integer, so that some nodes are left over. It traverses in the same way as the first pass, except that instead of assigning replicas slaves to a master at a time, it assigns only one at a time, until all the remaining nodes have been allocated.
4. Print out the allocation information and prompt the user to enter "yes" to confirm whether to create the cluster according to the printed allocation.
5. After "yes" is entered, the flush_nodes_config operation is performed. It applies the allocation computed above, assigning slots to the masters and instructing the slaves to replicate their masters. For nodes that have not yet shaken hands (cluster meet), the replication step cannot complete yet, but that does not matter: flush_nodes_config returns quickly on the exception and is executed again after the handshake.
6. Assign a config epoch to each node, traversing the nodes so that each node's epoch is one larger than the previous node's.
7. The nodes then begin to shake hands with each other; the handshake is performed by having every other node in the node list shake hands with the first node.
8. Then, once per second, check whether every node has synchronized the cluster information, using ClusterNode's get_config_signature method. The check obtains each node's cluster nodes information, sorts the nodes, and assembles them into a string of the form node_id1:slots|node_id2:slots|... If every node produces the same string, the handshake is considered successful.
9. After that, flush_nodes_config will be executed again, this time mainly to complete the slave replication operation.
10. Finally, execute check_cluster to check the cluster status comprehensively. This includes running the same check as during the handshake again, confirming that no slots are being migrated, and making sure that all slots have been assigned.
11. Complete the creation process and return [OK] All 16384 slots covered.
9. Cluster integrity check
[root@zookeeper ~]# redis-trib check 192.168.19.132:7000
>>> Performing Cluster Check (using node 192.168.19.132:7000)
M: 8a628ee2e98c70a404be020cba3dfc1172a38335 192.168.19.132:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 154e2f4f3fad75a564f9fe2efcde7820284116c6 192.168.19.132:7003
   slots: (0 slots) slave
   replicates 8a628ee2e98c70a404be020cba3dfc1172a38335
S: f2707a3052d3dc91358b73b4786e4c8e20662a79 192.168.19.132:7004
   slots: (0 slots) slave
   replicates 0e4d1ee05b090c45ce979bc9e8ad4c027d5332ef
M: 0e4d1ee05b090c45ce979bc9e8ad4c027d5332ef 192.168.19.132:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 08d3663dc9e0f5f02e2bff07640d67e406211e49 192.168.19.132:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: a44237119e6b2129e457d2f48a584b94b1b815f5 192.168.19.132:7005
   slots: (0 slots) slave
   replicates 08d3663dc9e0f5f02e2bff07640d67e406211e49
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@zookeeper ~]# redis-trib check 192.168.19.132:7004
>>> Performing Cluster Check (using node 192.168.19.132:7004)
S: f2707a3052d3dc91358b73b4786e4c8e20662a79 192.168.19.132:7004
   slots: (0 slots) slave
   replicates 0e4d1ee05b090c45ce979bc9e8ad4c027d5332ef
M: 0e4d1ee05b090c45ce979bc9e8ad4c027d5332ef 192.168.19.132:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 154e2f4f3fad75a564f9fe2efcde7820284116c6 192.168.19.132:7003
   slots: (0 slots) slave
   replicates 8a628ee2e98c70a404be020cba3dfc1172a38335
M: 08d3663dc9e0f5f02e2bff07640d67e406211e49 192.168.19.132:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 8a628ee2e98c70a404be020cba3dfc1172a38335 192.168.19.132:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: a44237119e6b2129e457d2f48a584b94b1b815f5 192.168.19.132:7005
   slots: (0 slots) slave
   replicates 08d3663dc9e0f5f02e2bff07640d67e406211e49
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
10. Testing
[root@zookeeper ~]# redis-cli -h 192.168.19.132 -p 7000
192.168.19.132:7000> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:414
cluster_stats_messages_received:4143
192.168.19.132:7000> CLUSTER NODES
154e2f4f3fad75a564f9fe2efcde7820284116c6 192.168.19.132:7003 slave 8a628ee2e98c70a404be020cba3dfc1172a38335 0 1496720263710 4 connected
f2707a3052d3dc91358b73b4786e4c8e20662a79 192.168.19.132:7004 slave 0e4d1ee05b090c45ce979bc9e8ad4c027d5332ef 0 1496720264715 5 connected
0e4d1ee05b090c45ce979bc9e8ad4c027d5332ef 192.168.19.132:7001 master - 0 1496720262702 2 connected 5461-10922
08d3663dc9e0f5f02e2bff07640d67e406211e49 192.168.19.132:7002 master - 0 1496720265722 3 connected 10923-16383
a44237119e6b2129e457d2f48a584b94b1b815f5 192.168.19.132:7005 slave 08d3663dc9e0f5f02e2bff07640d67e406211e49 0 1496720266730 6 connected
8a628ee2e98c70a404be020cba3dfc1172a38335 192.168.19.132:7000 myself,master - 0 0 1 connected 0-5460
The current cluster state is ok and the cluster is online. The CLUSTER NODES output shows the assignment of slots to nodes; the three slave nodes hold no slots at present. In a complete cluster, every node responsible for slots should have a slave node so that failover happens automatically when the master fails. In cluster mode, Redis nodes take on either the master or the slave role: the nodes that are started first and assigned slots are the masters, and the slaves replicate their master's slot information and the associated data.
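As a final, hedged check of the failover behaviour described above (node IDs and promotion details will vary; 7005 is the replica of 7002 in the check output earlier):

# Simulate a master failure:
[root@zookeeper ~]# redis-cli -h 192.168.19.132 -p 7002 shutdown
# After roughly cluster-node-timeout (15 seconds here) the replica is promoted; 7005 should now appear as a master:
[root@zookeeper ~]# redis-cli -h 192.168.19.132 -p 7000 cluster nodes | grep master
# Restarting the old master brings it back as a slave of the new master:
[root@zookeeper ~]# redis-server /data/cluster/7002/redis-7002.conf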