This article shows how to build a Redis cluster environment. The content is concise and easy to follow, and hopefully the detailed walkthrough will give you something to take away.
1. Build Redis cluster environment
For convenience, all nodes of the cluster run on the same server and are distinguished only by port number; there are 6 nodes in total, 3 masters plus 3 slaves. The simple architecture is: masters on ports 6379, 6380 and 6381, each paired with a slave on ports 6479, 6480 and 6481 respectively.
This article is based on Redis 6.0: the latest source code is downloaded from GitHub and compiled directly, which yields the commonly used tools redis-server and redis-cli. It is worth noting that since Redis 5.0 the cluster management tool redis-trib.rb has been integrated into the redis-cli client (see the official cluster tutorial for details).
This section builds the cluster environment step by step following the standard procedure, rather than with the quick setup offered by redis-trib.rb, which is also a way to get familiar with the basics of cluster management. In the cluster scaling section, resharding is then completed with the help of the redis-trib.rb functionality in redis-cli.
The construction of a cluster can be divided into four steps:
Start nodes: start each node in cluster mode; at this point the nodes are still independent of each other.
Node handshake: connect the independent nodes into a single network.
Slot assignment: assign the 16384 slots to the master nodes, so that database key-value pairs are stored in shards.
Master-slave replication: specify a master node for each slave node.
1.1 Starting the nodes
Each node initially starts as an ordinary master server, except that it runs in cluster mode, which requires modifying the configuration file. Taking the node on port 6379 as an example, the main changes are as follows:
# redis_6379_cluster.conf
port 6379
cluster-enabled yes
cluster-config-file "node-6379.conf"
logfile "redis-server-6379.log"
dbfilename "dump-6379.rdb"
daemonize yes
The cluster-config-file parameter specifies the location of the cluster configuration file. Each node maintains such a file while it runs: whenever the cluster information changes (for example nodes are added or removed), every node in the cluster writes the latest information to its file; when a node restarts, it re-reads the file to recover the cluster information and can easily rejoin the cluster. In other words, when a Redis node starts in cluster mode it first looks for its cluster configuration file: if the file exists, the node starts with the configuration it contains; if not, the node initializes the configuration and saves it to the file. The cluster configuration file is maintained by the Redis node itself and does not need to be edited manually.
After preparing the corresponding configuration file for each of the 6 nodes, start the 6 servers with redis-server redis_xxxx_cluster.conf (where xxxx is the port number, matching the corresponding configuration file). Use ps to check the processes:
$ ps -aux | grep redis
...  800  0.1  0.0  49584  2444 ?  Ssl  20:42  0:00 redis-server 127.0.0.1:6379 [cluster]
...  805  0.1  0.0  49584  2440 ?  Ssl  20:42  0:00 redis-server 127.0.0.1:6380 [cluster]
...  812  0.3  0.0  49584  2436 ?  Ssl  20:42  0:00 redis-server 127.0.0.1:6381 [cluster]
...  817  0.1  0.0  49584  2432 ?  Ssl  20:43  0:00 redis-server 127.0.0.1:6479 [cluster]
...  822  0.0  0.0  49584  2380 ?  Ssl  20:43  0:00 redis-server 127.0.0.1:6480 [cluster]
...  827  0.5  0.0  49584  2380 ?  Ssl  20:43  0:00 redis-server 127.0.0.1:6481 [cluster]
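If you prefer to script this preparation, the following sketch automates it; it is not part of the original procedure, merely an assumed convenience. It writes the six configuration files using the redis_<port>_cluster.conf naming from above and starts the nodes in cluster mode, assuming redis-server is on the PATH.

```python
import subprocess
from pathlib import Path

# Ports used throughout this walkthrough: 6379-6381 will become masters,
# 6479-6481 their replicas.
PORTS = [6379, 6380, 6381, 6479, 6480, 6481]

CONF_TEMPLATE = """\
port {port}
cluster-enabled yes
cluster-config-file "node-{port}.conf"
logfile "redis-server-{port}.log"
dbfilename "dump-{port}.rdb"
daemonize yes
"""

def start_cluster_nodes(workdir: str = ".") -> None:
    """Write one config file per port and start each node in cluster mode."""
    for port in PORTS:
        conf_path = Path(workdir) / f"redis_{port}_cluster.conf"
        conf_path.write_text(CONF_TEMPLATE.format(port=port))
        # "daemonize yes" makes redis-server fork into the background,
        # so this call returns as soon as the daemon has started.
        subprocess.run(["redis-server", str(conf_path)], check=True)

if __name__ == "__main__":
    start_cluster_nodes()
```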
1.2 Node handshake
After each node has been started as in 1.1, the nodes are still independent of each other: each of them sits in a cluster that contains only itself. Taking the server on port 6379 as an example, use CLUSTER NODES to view the nodes contained in its current cluster:
127.0.0.1:6379> CLUSTER NODES
37784b3605ad216fa93e976979c43def42bf763d 127.0.0.1:6379@16379 myself,master - 0 0 0 connected
We need to connect individual nodes to form a cluster of multiple nodes, using the CLUSTER MEET command.
$ redis-cli -p 6379 -c    # the -c option starts redis-cli in cluster mode
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6380
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6381
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6479
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6480
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6481
OK
Check again the nodes contained in the cluster at this time:
127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603632309283 4 connected
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603632308000 1 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603632310292 2 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 master - 0 1603632309000 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603632308000 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 master - 0 1603632321302 0 connected
It can be seen that all six nodes have joined the cluster, each as a master node. The meaning of the fields returned by CLUSTER NODES is as follows:
...
Node ID: a 40-character hexadecimal string. The node ID is created only once, when the node is initialized, and is then saved to the cluster configuration file (the cluster-config-file mentioned earlier); when the node restarts, the ID is read back from that file.
port@cport: the former is the normal port used to serve clients; the latter is the cluster bus port, computed as the normal port + 10000 and used only for communication between nodes.
A detailed explanation of the remaining fields can be found in the official documentation for CLUSTER NODES.
1.3 Slot assignment
A Redis cluster stores the database's key-value pairs in shards: the whole database is divided into 16384 slots, each key belongs to exactly one of these 16384 slots, and each node in the cluster can be responsible for anywhere from 0 up to all 16384 slots.
Slots are the basic unit of data management and migration. Only when all 16384 slots have been assigned to nodes is the cluster online (ok); if even one slot has no node assigned, the cluster is offline (fail).
Note that only master nodes can own slots. If the slot assignment step were performed after master-slave replication and slots were assigned to a slave node, the cluster would not work properly (it would stay offline).
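The mapping from a key to one of the 16384 slots is deterministic: Redis takes the CRC16 of the key (or, when the key contains a non-empty {...} hash tag, of the tag only) modulo 16384. Below is a minimal Python sketch of that calculation, added here only for illustration; the server-side equivalent is the CLUSTER KEYSLOT command used later in this article.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the cluster slot a key maps to, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:      # non-empty tag: hash only its content
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384  # 16384 slots, numbered 0..16383

# Should agree with CLUSTER KEYSLOT on the server, e.g. for the keys used later in this article.
print(key_slot("name"), key_slot("fruits"))
```

Hash tags make related keys (for example user:{42}:profile and user:{42}:cart) land in the same slot, which is what allows multi-key operations on them inside a cluster.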
Slots are assigned with the CLUSTER ADDSLOTS command; shell brace expansion is used below to pass the slot ranges:
redis-cli -p 6379 cluster addslots {0..5000}
redis-cli -p 6380 cluster addslots {5001..10000}
redis-cli -p 6381 cluster addslots {10001..16383}
The nodes in the cluster after slot assignment are as follows:
127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603632880310 4 connected 5001-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603632879000 1 connected 0-5000
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603632879000 2 connected 10001-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 master - 0 1603632878000 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603632880000 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 master - 0 1603632881317 0 connected

127.0.0.1:6379> CLUSTER INFO
cluster_state:ok    # the cluster is now online
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:4763
cluster_stats_messages_pong_sent:4939
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:9707
cluster_stats_messages_ping_received:4939
cluster_stats_messages_pong_received:4768
cluster_stats_messages_received:9707
1.4 Master-slave replication
After the above steps, the cluster nodes all exist as master nodes and still cannot achieve the high availability of Redis. Only after master-slave replication is configured can the high availability feature of the cluster be truly realized.
CLUSTER REPLICATE makes the node that receives the command become a slave of the node identified by node_id and start replicating that master.
redis-cli -p 6479 cluster replicate 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52
redis-cli -p 6480 cluster replicate c47598b25205cc88abe2e5094d5bfd9ea202335f
redis-cli -p 6481 cluster replicate 51081a64ddb3ccf5432c435a8cf20d45ab795dd8

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603633105211 4 connected 5001-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603633105000 1 connected 0-5000
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603633105000 2 connected 10001-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603633107229 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 slave 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 0 1603633106221 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603633104000 4 connected
Incidentally, steps 1.2 through 1.4 above can be performed in one go; this used to be done with the redis-trib.rb tool and, since Redis 5.0, can be done directly with redis-cli. The reference command is as follows:
redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6479 127.0.0.1:6480 127.0.0.1:6481 --cluster-replicas 1
--cluster-replicas 1 indicates that each master in the given node list gets one replica, i.e. the nodes are created as master + slave pairs.
1.5 Executing commands in the cluster
The cluster is now online, and commands can be sent to its nodes through a client. The node that receives a command calculates which slot the key being operated on belongs to and checks whether that slot is assigned to itself:
If the slot the key belongs to is assigned to the current node, the command is executed directly.
Otherwise, the node returns a MOVED error to the client, redirecting it to the correct node, and the client re-sends the command there.
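To make the redirection concrete, here is a rough sketch of what a cluster-unaware client has to do when it hits a MOVED error. It speaks the RESP protocol over a raw socket (standard library only), handles only ASCII arguments, small replies and a single redirection, and is purely illustrative; redis-cli -c and real cluster clients do this automatically and also cache the slot-to-node mapping.

```python
import socket

def send_command(host: str, port: int, *args: str) -> str:
    """Encode one command as a RESP array, send it, and return the raw reply text."""
    payload = f"*{len(args)}\r\n" + "".join(f"${len(a)}\r\n{a}\r\n" for a in args)
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload.encode())
        return sock.recv(4096).decode()      # good enough for the small replies here

def get_following_moved(key: str, host: str = "127.0.0.1", port: int = 6379) -> str:
    reply = send_command(host, port, "GET", key)
    if reply.startswith("-MOVED"):
        # The error reply looks like: -MOVED 5798 127.0.0.1:6380
        _, _slot, address = reply.strip().split()
        new_host, new_port = address.rsplit(":", 1)
        # Re-issue the command against the node that actually owns the slot.
        reply = send_command(new_host, int(new_port), "GET", key)
    return reply

print(get_following_moved("name"))           # prints the raw RESP reply
```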
Below, CLUSTER KEYSLOT shows that the key name hashes to slot 5798 (assigned to node 6380), so operations on this key are redirected to that node. The key fruits behaves similarly.
127.0.0.1:6379> CLUSTER KEYSLOT name
(integer) 5798
127.0.0.1:6379> set name huey
-> Redirected to slot [5798] located at 127.0.0.1:6380
OK
127.0.0.1:6380>

127.0.0.1:6379> get fruits
-> Redirected to slot [14943] located at 127.0.0.1:6381
"apple"
127.0.0.1:6381>
It is worth noting that when we send a command to a slave node through the client, the command is redirected to the corresponding master node.
127.0.0.1:6480> KEYS *
1) "name"
127.0.0.1:6480> get name
-> Redirected to slot [5798] located at 127.0.0.1:6380
"huey"
1.6 Cluster failover
When a master node in the cluster goes offline, the slave nodes replicating it elect one of themselves as the new master and complete the failover. As with an ordinary master-slave setup, when the original master node comes back online it rejoins the cluster as a slave of the new master node.
Below, node 6379 is taken down (with SHUTDOWN) to simulate a failure; its slave node 6479 can then be observed taking over as the new master. The log of node 6479 during the failover:
462:S 26 Oct 14:08:12.750 * FAIL message received from c47598b25205cc88abe2e5094d5bfd9ea202335f about 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52
462:S 26 Oct 14:08:12.751 # Cluster state changed: fail
462:S 26 Oct 14:08:12.829 # Start of election delayed for 595 milliseconds (rank #0, offset 9160).
462:S 26 Oct 14:08:13.434 # Starting a failover election for epoch 6.
462:S 26 Oct 14:08:13.446 # Failover election won: I'm the new master.
462:S 26 Oct 14:08:13.447 # configEpoch set to 6 after successful failover
462:M 26 Oct 14:08:13.447 # Setting secondary replication ID to d357886e00341b57bf17e46b6d9f8cf53b7fad21, valid up to offset: 9161. New replication ID is adbf41b16075ea22b17f145186c53c4499864d5b
462:M 26 Oct 14:08:13.447 * Discarding previously cached master state.
462:M 26 Oct 14:08:13.448 # Cluster state changed: ok
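Besides reading the log, a rough way to watch the promotion from the outside is to poll the ROLE of the candidate replica while the master is shut down (for example with redis-cli -p 6379 SHUTDOWN); once the election completes, the reported role flips from slave to master. This is only an illustrative sketch using raw RESP, not something from the original procedure.

```python
import socket
import time

def node_role(host: str, port: int) -> str:
    """Ask a node for its ROLE; the role name is the third line of the RESP reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"*1\r\n$4\r\nROLE\r\n")
        reply = sock.recv(4096).decode()
    return reply.split("\r\n")[2]            # "slave" before the failover, "master" after

# Poll the replica on port 6479 once per second; stop with Ctrl-C.
while True:
    print(time.strftime("%H:%M:%S"), node_role("127.0.0.1", 6479))
    time.sleep(1)
```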
After node 6379 recovers from the simulated outage, it rejoins the cluster as a slave of node 6479.
127.0.0.1:6379> CLUSTER NODES
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603692968000 2 connected 10001-16383
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603692968504 0 connected 5001-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603692967495 6 connected 0-5000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603692964000 1 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603692967000 4 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603692967000 5 connected
As mentioned earlier, cluster-config-file records the state of the cluster as seen by the node. Opening node 6379's cluster configuration file node-6379.conf shows that the information displayed by CLUSTER NODES is saved there:
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603694920206 2 connected 10001-16383
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603694916000 0 connected 5001-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603694920000 6 connected 0-5000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603694918000 1 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603694919000 4 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603694919200 5 connected
vars currentEpoch 6 lastVoteEpoch 0
2 Cluster scaling in practice
The key to cluster scaling is resharding, i.e. migrating slots between nodes. This section practices slot migration by adding a node to and then removing a node from the cluster.
Slot management is done here with the cluster manager integrated into redis-cli (the successor of redis-trib.rb); its help menu is as follows:
$ redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
                 --cluster-fix-with-unreachable-masters
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
2.1 Cluster scaling: adding nodes
Consider adding two nodes with port numbers 6382 and 6482 to the cluster, where node 6482 will replicate node 6382.
(1) Start the nodes: follow the steps described in 1.1 to start nodes 6382 and 6482.
(2) Node handshake: add nodes 6382 and 6482 to the cluster with the redis-cli --cluster add-node command.
$ redis-cli --cluster add-node 127.0.0.1:6382 127.0.0.1:6379
>>> Adding node 127.0.0.1:6382 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381
   slots:[10001-16383] (6383 slots) master
   1 additional replica(s)
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380
   slots:[5001-10000] (5000 slots) master
   1 additional replica(s)
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479
   slots:[0-5000] (5001 slots) master
   1 additional replica(s)
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481
   slots: (0 slots) slave
   replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480
   slots: (0 slots) slave
   replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[OK] New node added correctly.

$ redis-cli --cluster add-node 127.0.0.1:6482 127.0.0.1:6379
...

(3) Resharding: re-shard the cluster with the redis-cli --cluster reshard command so that slots are balanced across the four master nodes (some slots are migrated from nodes 6479, 6380 and 6381 to node 6382). You need to specify:
* the number of slots to move: each master should end up with an average of 4096 slots, so 4096 slots are moved in total;
* the ID of the target node that receives the slots: the ID of node 6382;
* the IDs of the source nodes the slots are taken from: the IDs of nodes 6479, 6380 and 6381.

$ redis-cli --cluster reshard 127.0.0.1:6479
>>> Performing Cluster Check (using node 127.0.0.1:6479)
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479
   slots:[0-5000] (5001 slots) master
   1 additional replica(s)
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480
   slots: (0 slots) slave
   replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
M: 706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482
   slots: (0 slots) master
M: af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382
   slots: (0 slots) master
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381
   slots:[10001-16383] (6383 slots) master
   1 additional replica(s)
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481
   slots: (0 slots) slave
   replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380
   slots:[5001-10000] (5000 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID?
(4) set the master-slave relationship:
redis-cli -p 6482 cluster replicate af81109fc29f69f9184ce9512c46df476fe693a3

127.0.0.1:6482> CLUSTER NODES
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603694930000 0 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603694931000 2 connected 11597-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603694932000 2 connected
706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482@16482 myself,slave af81109fc29f69f9184ce9512c46df476fe693a3 0 1603694932000 8 connected
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603694932000 6 connected
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603694933678 0 connected 6251-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603694932669 6 connected 1250-5000
af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382@16382 master - 0 1603694933000 9 connected 0-1249 5001-6250 10001-11596
2.2 Cluster scaling: deleting nodes
Now consider removing the two newly added nodes, 6382 and 6482. Since node 6382 owns slots, those slots must first be migrated to other nodes.
(1) Resharding: again with the redis-cli --cluster reshard command, all the slots on node 6382 are moved to node 6479.
$ redis-cli --cluster reshard 127.0.0.1:6382
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382
   slots:[0-1249],[5001-6250],[10001-11596] (4096 slots) master
   1 additional replica(s)
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381
   slots:[11597-16383] (4787 slots) master
   1 additional replica(s)
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480
   slots: (0 slots) slave
   replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479
   slots:[1250-5000] (3751 slots) master
   1 additional replica(s)
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380
   slots:[6251-10000] (3750 slots) master
   1 additional replica(s)
S: 706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482
   slots: (0 slots) slave
   replicates af81109fc29f69f9184ce9512c46df476fe693a3
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481
   slots: (0 slots) slave
   replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 4c23b25bd4bcef7f4b77d8287e330ae72e738883
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: af81109fc29f69f9184ce9512c46df476fe693a3
Source node #2: done

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603773540922 0 connected 6251-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773539000 1 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603773541000 10 connected 0-6250 10001-11596
706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482@16482 slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773541000 10 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603773539000 5 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603773541931 4 connected
af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382@16382 master - 0 1603773539000 9 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603773540000 2 connected 11597-16383
(2) Delete the nodes: use the redis-cli --cluster del-node command to delete the slave node 6482 and then the master node 6382.
$ redis-cli --cluster del-node 127.0.0.1:6482 706f399b248ed3a080cf1d4e43047a79331b714f
>>> Removing node 706f399b248ed3a080cf1d4e43047a79331b714f from cluster 127.0.0.1:6482
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
$ redis-cli --cluster del-node 127.0.0.1:6382 af81109fc29f69f9184ce9512c46df476fe693a3
>>> Removing node af81109fc29f69f9184ce9512c46df476fe693a3 from cluster 127.0.0.1:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603773679121 0 connected 6251-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773677000 1 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603773678000 10 connected 0-6250 10001-11596
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603773680130 5 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603773677099 4 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603773678112 2 connected 11597-16383
3 Summary
Building a Redis cluster environment involves four main steps: starting the nodes, node handshake, slot assignment and master-slave replication. Cluster scaling touches the same aspects. Managing the cluster with the redis-cli --cluster commands is not only simpler but also reduces the risk of operational mistakes.