2025-02-24 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report--
This article shares how to configure and manage a Redis cluster. The material is practical, and you should come away with a working setup after following along.
Redis has supported clustering since version 3.0, and after continuous updates and optimization in the intervening releases, the cluster functionality in recent versions is quite mature. This article briefly introduces the process and configuration of building a Redis cluster. The Redis version used here is 5.0.4 and the operating system is NeoKylin (whose kernel is essentially the same as CentOS).
1. Redis cluster principle.
Redis Cluster is an assembly that provides data sharing among multiple Redis nodes. The nodes work together to form a decentralized network: every node in the cluster has equal status, and each node holds its own data and a view of the cluster state. Nodes communicate with each other using a gossip protocol, which keeps node state information synchronized.
Redis Cluster manages data in partitions, with each node holding a subset of the cluster's data. Data is allocated by hash slot, which differs from traditional consistent hashing: the cluster has 16384 hash slots, and each key is run through CRC16 and taken modulo 16384 to decide which slot it belongs to.
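The slot calculation is easy to reproduce. The sketch below is an independent reimplementation (not Redis's own code) of the CRC16/XMODEM variant that Redis Cluster specifies; note that when a key contains a {...} hash tag, Redis hashes only the tagged substring, which this simplified version ignores.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key (without a {...} hash tag) to one of the 16384 slots."""
    return crc16_xmodem(key) % 16384

print(hash_slot(b"foo"))  # 12182, matching CLUSTER KEYSLOT foo
```

The value 12182 for the key foo is the same slot the cluster reports in the redirection test later in this article.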
To keep the cluster available when some nodes fail or most nodes cannot communicate, the cluster uses a master-slave replication model. Reads and writes are routed to the master node that owns the key's hash slot; if a master dies, one of its slave nodes is promoted to take over as master.
2. Environmental preparation
Here we are going to build a 3-master, 3-slave redis cluster on a single PC.
Create a new folder, rediscluster, under the /opt/ directory to hold the cluster node directories.
Then create six new folders (server10, server11, server20, server21, server30 and server31) to prepare six redis nodes, using ports 6379, 6380, 6381, 6382, 6383 and 6384 respectively. Take server10's redis.conf as an example:
port 6379
daemonize yes
pidfile /var/run/redis_6379.pid
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file nodes-6379.conf
The other nodes only need the port and the port-derived file names changed. Start all six nodes once the configuration is complete.
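Since the six config files differ only in the port number, they can be generated instead of edited by hand. A small sketch (the NODES mapping and render_config helper are my own illustration, not part of redis):

```python
# Directory name -> port, as laid out in this article.
NODES = {
    "server10": 6379, "server11": 6380, "server20": 6381,
    "server21": 6382, "server30": 6383, "server31": 6384,
}

TEMPLATE = """port {port}
daemonize yes
pidfile /var/run/redis_{port}.pid
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file nodes-{port}.conf
"""

def render_config(port: int) -> str:
    """Build one node's redis.conf text, varying only the port."""
    return TEMPLATE.format(port=port)

configs = {name: render_config(port) for name, port in NODES.items()}
print(configs["server11"])
```

Each rendered string would then be written to /opt/rediscluster/&lt;name&gt;/redis.conf.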
[root@localhost rediscluster]# ./server10/redis-server ./server10/redis.conf &
[root@localhost rediscluster]# ./server11/redis-server ./server11/redis.conf &
[root@localhost rediscluster]# ./server20/redis-server ./server20/redis.conf &
[root@localhost rediscluster]# ./server21/redis-server ./server21/redis.conf &
[root@localhost rediscluster]# ./server30/redis-server ./server30/redis.conf &
[root@localhost rediscluster]# ./server31/redis-server ./server31/redis.conf &
View startup status:
[root@localhost rediscluster]# ps -ef | grep redis
root 11842 1 0 15:03 ? 00:00:12 ./server10/redis-server 127.0.0.1:6379 [cluster]
root 11950 1 0 15:03 ? 00:00:13 ./server11/redis-server 127.0.0.1:6380 [cluster]
root 12074 1 0 15:04 ? 00:00:13 ./server20/redis-server 127.0.0.1:6381 [cluster]
root 12181 1 0 15:04 ? 00:00:12 ./server21/redis-server 127.0.0.1:6382 [cluster]
root 12297 1 0 15:04 ? 00:00:12 ./server30/redis-server 127.0.0.1:6383 [cluster]
root 12404 1 0 15:04 ? 00:00:12 ./server31/redis-server 127.0.0.1:6384 [cluster]
3. Cluster configuration
Very simple:

redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1

where --cluster-replicas 1 means one slave node is created for each master node.
[root@localhost rediscluster]# ./server10/redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   replicates efa84a74525749b8ea20585074dda81b852e9c29
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
After the creation is completed, the master and slave nodes are assigned as follows:
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
4. Cluster testing
Connecting through the 6379 client and setting a key, the request was redirected to 6381:
[root@localhost rediscluster]# ./server10/redis-cli -h 127.0.0.1 -c -p 6379
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:6381
OK
127.0.0.1:6381> get foo
"bar"
Run the same test connected directly to 6381:
[root@localhost rediscluster]# ./server10/redis-cli -h 127.0.0.1 -c -p 6381
127.0.0.1:6381> get foo
"bar"
The result is the same, indicating that the cluster configuration is normal.
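The redirection seen above is driven by the MOVED error reply: a node that does not own a key's slot answers with the slot number and the address of the owning node, and redis-cli with -c retries the command there. Below is a minimal sketch of parsing that reply (parse_moved is a hypothetical helper for illustration, not an API of any redis client library):

```python
def parse_moved(err: str):
    """Parse a 'MOVED <slot> <host>:<port>' error reply from a cluster node.

    Returns (slot, host, port), or None if the reply is not a MOVED error.
    A cluster-aware client retries the command against the returned address.
    """
    parts = err.lstrip("-").split()
    if not parts or parts[0] != "MOVED":
        return None
    slot = int(parts[1])
    host, port = parts[2].rsplit(":", 1)
    return slot, host, int(port)

print(parse_moved("MOVED 12182 127.0.0.1:6381"))  # (12182, '127.0.0.1', 6381)
```

Real clients also cache the slot-to-node mapping so that subsequent commands for the same slot go straight to the right node.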
5. Cluster node expansion
Under the rediscluster directory, add two new directories, server40 and server41, and configure two new redis nodes on ports 6385 and 6386. 6385 will be the new master node and 6386 its slave. Then start the two nodes:
[root@localhost server41]# ps -ef | grep redis
root 11842 1 0 15:03 ? 00:00:18 ./server10/redis-server 127.0.0.1:6379 [cluster]
root 11950 1 0 15:03 ? 00:00:19 ./server11/redis-server 127.0.0.1:6380 [cluster]
root 12074 1 0 15:04 ? 00:00:18 ./server20/redis-server 127.0.0.1:6381 [cluster]
root 12181 1 0 15:04 ? 00:00:18 ./server21/redis-server 127.0.0.1:6382 [cluster]
root 12297 1 0 15:04 ? 00:00:17 ./server30/redis-server 127.0.0.1:6383 [cluster]
root 12404 1 0 15:04 ? 00:00:18 ./server31/redis-server 127.0.0.1:6384 [cluster]
root 30563 1 0 18:01 ? 00:00:00 ./redis-server 127.0.0.1:6385 [cluster]
root 30582 1 0 18:02 ? 00:00:00 ./redis-server 127.0.0.1:6386 [cluster]
Add the master node:
[root@localhost server41]# ./redis-cli --cluster add-node 127.0.0.1:6385 127.0.0.1:6379
>>> Adding node 127.0.0.1:6385 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6385 to make it join the cluster.
[OK] New node added correctly.
View a list of nodes:
[root@localhost server41]# ./redis-cli -c -p 6379
127.0.0.1:6379> cluster nodes
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064037664 0 connected
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064036000 1 connected 0-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064038666 3 connected 10923-16383
0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382@16382 slave 63e20c75984e493892265ddd2a441c81bcdc575c 0 1555064035000 4 connected
ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384@16384 slave efa84a74525749b8ea20585074dda81b852e9c29 0 1555064037000 6 connected
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064037000 2 connected 5461-10922
fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383@16383 slave d9a79ed6204e558b2fcee78ea05218b4de006acd 0 1555064037000 5 connected
Add a slave node:
[root@localhost server41]# ./redis-cli --cluster add-node 127.0.0.1:6386 127.0.0.1:6379 --cluster-slave --cluster-master-id 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
>>> Adding node 127.0.0.1:6386 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385
   slots: (0 slots) master
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6386 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 127.0.0.1:6385.
[OK] New node added correctly.
After the addition succeeds, assign hash slots to the new node:
[root@localhost server41]# ./redis-cli --cluster reshard 127.0.0.1:6385
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
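When you answer 'all', redis-cli draws the requested slots from every existing master, roughly in proportion to the slots each one holds. A simplified sketch of that split (split_reshard is my own illustration; redis-cli's exact rounding and slot selection differ):

```python
import math

def split_reshard(total: int, source_slots: dict) -> dict:
    """Split a reshard of `total` slots across source masters, each
    contributing in proportion to the slot count it currently holds."""
    grand = sum(source_slots.values())
    quota = {}
    remaining = total
    for node, held in sorted(source_slots.items()):
        take = min(remaining, math.ceil(total * held / grand))
        quota[node] = take
        remaining -= take
    return quota

# The three original masters before the reshard in this article.
print(split_reshard(1000, {"6379": 5461, "6380": 5462, "6381": 5461}))
```

With three nearly equal masters, each contributes about a third of the 1000 slots, which matches the roughly 333-slot ranges the new node ends up with below.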
That completes the addition. You can check the new slot distribution with the cluster nodes command:
127.0.0.1:6379> cluster nodes
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064706000 7 connected 0-332 5461-5794 10923-11255
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064707000 1 connected 333-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064705000 3 connected 11256-16383
7c24e205301b38caa1ff3cd8b270a1ceb7249a2e 127.0.0.1:6386@16386 slave 22e8a8e97d6f7cc7d627e577a986384d4d181a4f 0 1555064707000 7 connected
0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382@16382 slave 63e20c75984e493892265ddd2a441c81bcdc575c 0 1555064707000 4 connected
ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384@16384 slave efa84a74525749b8ea20585074dda81b852e9c29 0 1555064707236 6 connected
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064707000 2 connected 5795-10922
fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383@16383 slave d9a79ed6204e558b2fcee78ea05218b4de006acd 0 1555064707238 5 connected
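Slot distributions like the one above can also be checked programmatically. The sketch below parses CLUSTER NODES lines (the slot fields begin at the ninth whitespace-separated column) and counts the distinct slots assigned; a healthy cluster covers all 16384. The covered_slots helper is my own, and the sample uses the master lines from the output above:

```python
def covered_slots(cluster_nodes_output: str) -> int:
    """Count distinct slots assigned across all masters in CLUSTER NODES output."""
    covered = set()
    for line in cluster_nodes_output.strip().splitlines():
        fields = line.split()
        for rng in fields[8:]:          # slot fields, e.g. "0-332" or "42"
            if rng.startswith("["):     # skip slots mid-migration
                continue
            lo, _, hi = rng.partition("-")
            covered.update(range(int(lo), int(hi or lo) + 1))
    return len(covered)

sample = """\
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064706000 7 connected 0-332 5461-5794 10923-11255
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064707000 1 connected 333-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064705000 3 connected 11256-16383
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064707000 2 connected 5795-10922
"""
print(covered_slots(sample))  # 16384
```

Slave lines carry no slot fields, so they contribute nothing to the count and can be left in the input.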
6. Cluster node reduction
When shrinking the cluster, remove the slave node first:
[root@localhost server41]# ./redis-cli --cluster del-node 127.0.0.1:6386 7c24e205301b38caa1ff3cd8b270a1ceb7249a2e
>>> Removing node 7c24e205301b38caa1ff3cd8b270a1ceb7249a2e from cluster 127.0.0.1:6386
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Then move the master's hash slots back to another node:
[root@localhost server41]# ./redis-cli --cluster reshard 127.0.0.1:6385
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? efa84a74525749b8ea20585074dda81b852e9c29   (the node receiving the slots)
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 22e8a8e97d6f7cc7d627e577a986384d4d181a4f   (the master to be removed)
Source node #2: done
Finally, remove the now-empty master node itself, the same way the slave was removed:

[root@localhost server41]# ./redis-cli --cluster del-node 127.0.0.1:6385 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
>>> Removing node 22e8a8e97d6f7cc7d627e577a986384d4d181a4f from cluster 127.0.0.1:6385
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

That covers redis cluster configuration and management. These are techniques you are likely to see or use in day-to-day work, and hopefully this article has taught you something new.