How to Shrink Master and Slave Nodes in a Redis Cluster


Many people are not familiar with how to shrink master and slave nodes in a Redis Cluster, so this article summarizes the process in detail, with clear steps and practical reference value. I hope you get something out of reading it.

1. Cluster shrinking concepts

When a project's load grows beyond what the cluster can bear, you add nodes to spread the load. When the load is light, you may equally want to shrink the cluster and free nodes for other projects.

Shrinking a cluster is the same operation as expanding it, only in the opposite direction.

During expansion, a single reshard command is enough to migrate the slots. During shrinking, the reshard command must be run once per remaining master node: if three masters remain after removing the node to be taken offline, you run it three times, and the outgoing node's slot count is divided by 3 so each remaining master receives an even share.

Each reshard run asks for three things: the number of slots to move, the ID of the node that receives them, and the ID of the node that gives them up. To decide how many slots to move, look at how many slots the host to be taken offline holds and divide by the number of remaining master nodes, so every master ends up with the same share. For the receiving node, enter the first master's ID on the first run, the second master's ID on the second run, and so on; the source node is always the ID of the node being taken offline.

Slot migration, whether for shrinking or expansion, does not interrupt access to the data.

In a shrink, the source is the master node being taken offline and the target is a master node that stays online (the node the slots are assigned to).

Keep in mind that only master nodes hold slots, so the outgoing master's slots must be redistributed to the other masters. Once its slot count reaches zero, the node can be taken offline.

Comparison before and after shrinking the cluster

Steps for shrinking the cluster:

1. Run the reshard command to disperse the slots of the master node to be taken offline.

2. Run the reshard command once per remaining master node: fill in the number of slots to move, then the receiving node's ID, then the source node's ID (see the sketch after this list).

3. Once the slots are dispersed and the outgoing master holds no data, delete the node from the cluster.
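For reference, redis-trib.rb also accepts non-interactive arguments, so the three reshard runs can be scripted. This is a minimal sketch, assuming the --from/--to/--slots/--yes options of redis-trib.rb 3.2 and using the node IDs that appear in the walkthrough below:

# master being taken offline (6390 on 192.168.81.240)
SRC=6bee155f136f40e28e1f60c8ddec3b158cd8f8e8
# one reshard per remaining master; two take 1365 slots, the last takes 1366
./redis-trib.rb reshard --from $SRC --to 80e256579658eb256c5b710a3f82c439665794ba --slots 1365 --yes 192.168.81.210:6380
./redis-trib.rb reshard --from $SRC --to 10dc7f3f9a753140a8494adbbe5a13d0026451a1 --slots 1365 --yes 192.168.81.210:6380
./redis-trib.rb reshard --from $SRC --to a4381138fdc142f18881b7b6ca8ae5b0d02a3228 --slots 1366 --yes 192.168.81.210:6380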

Cluster information

The cluster currently has 8 nodes, four masters and four slaves. We will shrink it to three masters and three slaves, freeing two nodes for other programs to use.
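One way to confirm the current four-master, four-slave layout (not shown in the original) is to ask any member node for the topology; only master lines carry slot ranges:

# any cluster member reports the full topology
redis-cli -h 192.168.81.210 -p 6380 cluster nodes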

2. Remove the 6390 master node from the cluster

2.1. Calculate the number of slots to assign to each node

You can see that the 6390 node holds 4096 slots. After removing it, three master nodes remain, and 4096 / 3 ≈ 1365, so we can distribute the slots evenly: 1365 slots to two of the nodes and the remaining 1366 to the third.
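As a quick sanity check of the arithmetic:

echo $((4096 / 3))             # => 1365, with remainder 1
echo $((1365 + 1365 + 1366))   # => 4096, so two masters take 1365 slots and one takes 1366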

2.2. Assign 1365 slots to the 6380 node on 192.168.81.210

We move 1365 slots from the 6390 node on 192.168.81.240 to the 6380 node on 192.168.81.210.

At the "What is the receiving node ID" prompt, enter the ID of the 6380 node on 192.168.81.210; this is the node the slots are given to.

At the source node prompt, enter the ID of the 6390 node on 192.168.81.240, the node the 1365 slots are taken from. After entering the ID you are prompted for further source nodes; since only 6390 is giving up slots, type done, meaning this is the only node contributing the 1365 slots.

[root@redis-1 /data/redis_cluster/redis-3.2.9/src]# ./redis-trib.rb reshard 192.168.81.210:6380
How many slots do you want to move (from 1 to 16384)? 1365              # number of slots to move
What is the receiving node ID? 80e256579658eb256c5b710a3f82c439665794ba # node that receives the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6bee155f136f40e28e1f60c8ddec3b158cd8f8e8                 # node the slots are taken from
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes     # enter yes to continue

The following is a screenshot of the process of shrinking nodes.

Data migration process.

The slot was migrated successfully.

2.3. Assign 1365 slots to the 6380 node on 192.168.81.220

[root@redis-1 /data/redis_cluster/redis-3.2.9/src]# ./redis-trib.rb reshard 192.168.81.210:6380
How many slots do you want to move (from 1 to 16384)? 1365              # number of slots to move
What is the receiving node ID? 10dc7f3f9a753140a8494adbbe5a13d0026451a1 # node that receives the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6bee155f136f40e28e1f60c8ddec3b158cd8f8e8                 # node the slots are taken from
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes     # enter yes to continue

A screenshot of the contraction process is shown.

2.4. Assign the remaining 1366 slots to the 6380 node on 192.168.81.230

[root@redis-1 /data/redis_cluster/redis-3.2.9/src]# ./redis-trib.rb reshard 192.168.81.210:6380
How many slots do you want to move (from 1 to 16384)? 1366              # number of slots to move
What is the receiving node ID? a4381138fdc142f18881b7b6ca8ae5b0d02a3228 # node that receives the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6bee155f136f40e28e1f60c8ddec3b158cd8f8e8                 # node the slots are taken from
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes     # enter yes to continue

A screenshot of the contraction process is shown.

After the last migration finishes, the slot count on the 6390 master node drops to 0.

2.5. View current cluster slot allocation

All slots and data have been migrated off the 6390 host that is about to go offline. Now check the slot counts of the cluster's three remaining master nodes.

As you can see very clearly, each master now holds roughly 5461 slots: by the arithmetic above, the two nodes that received 1365 slots hold 5461 each, and the node that received 1366 holds 5462.
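To confirm the counts yourself, one option is redis-trib's check subcommand, which prints each master's slot ranges and total slot count; a sketch:

# verifies cluster health and lists slots per master
./redis-trib.rb check 192.168.81.210:6380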

If you are not happy with the slot layout after resharding (the ranges are no longer contiguous), you can reshard again: first move all slots from the other masters to the 6380 node on 192.168.81.210 so that it holds slots 0-16383, then hand each of the other masters back a 5461-slot range. Every node then ends up with a contiguous range of roughly the same size.
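A sketch of that re-ordering, using the same non-interactive reshard form as above (hypothetical; whether the resulting ranges come out perfectly contiguous depends on how redis-trib picks the slots to move):

TARGET=80e256579658eb256c5b710a3f82c439665794ba   # 6380 on 192.168.81.210
# step 1: drain the other masters into TARGET so it holds all 16384 slots
./redis-trib.rb reshard --from all --to $TARGET --slots 10923 --yes 192.168.81.210:6380
# step 2: hand each of the other two masters back a 5461-slot range
./redis-trib.rb reshard --from $TARGET --to 10dc7f3f9a753140a8494adbbe5a13d0026451a1 --slots 5461 --yes 192.168.81.210:6380
./redis-trib.rb reshard --from $TARGET --to a4381138fdc142f18881b7b6ca8ae5b0d02a3228 --slots 5461 --yes 192.168.81.210:6380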

3. Verify that the data migration does not cause data anomalies

Open several windows: one to run the slot migration, one to continuously create keys, one to watch key-creation progress, and one to read the keys back.

Throughout the continuous test no data anomalies appeared; every operation returned OK.
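A minimal sketch of the key-writing window, assuming a cluster-aware redis-cli loop (the key names are illustrative):

# keep writing keys through the cluster while the reshard runs;
# -c makes redis-cli follow MOVED/ASK redirects during slot migration
for i in $(seq 1 100000); do
    redis-cli -c -h 192.168.81.210 -p 6380 set key_${i} value_${i} >/dev/null && echo "key_${i} ok"
done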

4. Remove the offline master node from the cluster

4.1. Delete the node

Use redis-trib to delete the node. If the node is part of a replication relationship, whether another node replicates it or it replicates another node's data, redis-trib resolves the relationship automatically and then removes the node. After removal, the node's process is also shut down.

Before deleting a node, make sure it holds no slots and no data; otherwise the deletion fails.

Command: ./redis-trib.rb del-node <node IP>:<port> <node ID>

[root@redis-1 /data/redis_cluster/redis-3.2.9/src]# ./redis-trib.rb del-node 192.168.81.240:6390 6bee155f136f40e28e1f60c8ddec3b158cd8f8e8
>>> Removing node 6bee155f136f40e28e1f60c8ddec3b158cd8f8e8 from cluster 192.168.81.240:6390
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis-1 /data/redis_cluster/redis-3.2.9/src]# ./redis-trib.rb del-node 192.168.81.240:6391 f6b9320dfbc929ad5a31cdb149360b0fd8de2e60
>>> Removing node f6b9320dfbc929ad5a31cdb149360b0fd8de2e60 from cluster 192.168.81.240:6391
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

4.2. Adjust master-slave cross replication

After the two redis nodes on 192.168.81.240 are deleted, the 6380 node on 192.168.81.210 is left without a replication relationship, so we make the 6381 node on 192.168.81.230 replicate the 6380 node on 192.168.81.210.

[root@redis-1]# redis-cli -h 192.168.81.230 -p 6381
192.168.81.230:6381> CLUSTER REPLICATE 80e256579658eb256c5b710a3f82c439665794ba
OK

4.3. A node that still holds data cannot be deleted

[root@redis-1 /data/redis_cluster/redis-3.2.9/src]# ./redis-trib.rb del-node 192.168.81.220:6380 10dc7f3f9a753140a8494adbbe5a13d0026451a1
>>> Removing node 10dc7f3f9a753140a8494adbbe5a13d0026451a1 from cluster 192.168.81.220:6380
[ERR] Node 192.168.81.220:6380 is not empty! Reshard data away and try again.

5. Clear the cluster information on the offline hosts

Although redis-trib can remove a node from the cluster, it does not clear the node's own cluster metadata. If that metadata is retained, the node cannot join another cluster after going offline.

You can delete the cluster information using cluster reset on the offline redis node.

192.168.81.240:6390> CLUSTER RESET
OK

That covers "how to shrink master and slave nodes in a Redis Cluster". I hope what was shared here is helpful to you.
