Election of Redis cluster nodes:
When a master dies, one of its slaves in the cluster is elected to take over the master role,
so that the integrity of the cluster's slot coverage is preserved.
If both a master and its slave die, the slot coverage becomes incomplete and the whole cluster goes down.
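Whether losing a slot range really takes down the whole cluster is controlled in redis.conf. The excerpt below shows the typical cluster-related directives (illustrative values, not taken from the cluster used in this test):
# cluster-related settings in redis.conf (illustrative values)
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-require-full-coverage yes
With cluster-require-full-coverage yes (the default), the cluster stops serving requests as soon as any slot is uncovered; setting it to no lets the surviving masters keep answering for the slots they still hold.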
Cluster node information:
192.168.2.200:6379> cluster nodes
3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379 master - 0 1527145806504 5 connected 10923-16383
2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380 slave 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 0 1527145805495 5 connected
227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380 slave 098e7eb756b6047fde988ab3c0b7189e1724ecf5 0 1527145804485 4 connected
5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379 myself,master - 0 0 3 connected 5461-10922
7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380 slave 5844b4272c39456b0fdf73e384ff8c479547de47 0 1527145802468 3 connected
098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379 master - 0 1527145803476 1 connected 0-5460
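A quicker way to get an overall view of the same cluster is CLUSTER INFO; for a healthy 3-master / 3-slave setup it should report something like the following (abbreviated, exact counters may differ):
192.168.2.200:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3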
Create 3 new keys:
192.168.2.200:6379> set name zhangsan
OK
192.168.2.200:6379> set age 26
-> Redirected to slot [741] located at 192.168.2.100:6379
OK
192.168.2.100:6379> set home beijing
-> Redirected to slot [10814] located at 192.168.2.200:6379
OK
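The redirects happen because each key is hashed to one of the 16384 slots and each master serves only its own slot range. CLUSTER KEYSLOT shows which slot a key maps to; the results should match the slot numbers in the redirect messages above:
192.168.2.200:6379> cluster keyslot age
(integer) 741
192.168.2.200:6379> cluster keyslot home
(integer) 10814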
Simulate 192.168.2.100:6379 going down:
# ps -ef | grep redis
root     19023     1  0 15:05 ?        00:00:01 redis-server 192.168.2.100:6379 [cluster]
root     19030     1  0 15:05 ?        00:00:01 redis-server 192.168.2.100:6380 [cluster]
root     19127  2912  0 15:13 pts/0    00:00:00 grep --color=auto redis
# kill 19023
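Roughly cluster-node-timeout milliseconds after the master stops responding, the remaining masters agree that it has failed and promote its slave. Before running the full check below, a quick way to confirm the promotion is to ask the old slave for its replication role (output abbreviated):
# redis-cli -h 192.168.2.200 -p 6380 info replication
# Replication
role:master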
Check the status of the cluster nodes (you can see that one of the slaves in the cluster has been promoted to master):
# redis-trib.rb check 192.168.2.200:6379
>>> Performing Cluster Check (using node 192.168.2.200:6379)
M: 5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380
   slots: (0 slots) slave
   replicates 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69
M: 227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
S: 7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380
   slots: (0 slots) slave
   replicates 5844b4272c39456b0fdf73e384ff8c479547de47
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
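Note that the failed node no longer appears in the redis-trib.rb output. The cluster itself still remembers it, and CLUSTER NODES on a surviving member should show its entry flagged as failed, roughly like this (abbreviated, timestamps omitted):
192.168.2.200:6379> cluster nodes
098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379 master,fail - ... disconnected
227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380 master - ... connected 0-5460
...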
Check whether the data still exists:
# redis-cli -h 192.168.2.200 -p 6380 -c
192.168.2.200:6380> keys *
1) "age"
Simulate recovery of the failed node (restart the instance on 192.168.2.100 with its original configuration file):
# redis-server redis.conf
Check the status of the cluster nodes again (after the failed node recovers, will it come back as a master or as a slave?):
# redis-trib.rb check 192.168.2.200:6379
>>> Performing Cluster Check (using node 192.168.2.200:6379)
M: 5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380
   slots: (0 slots) slave
   replicates 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69
M: 227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380
   slots: (0 slots) slave
   replicates 5844b4272c39456b0fdf73e384ff8c479547de47
S: 098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379
   slots: (0 slots) slave
   replicates 227f51028bbe827f27b4e40ed7a08fcc7d8df969
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Summary: after the master 192.168.2.100:6379 went down,
its slave 192.168.2.200:6380 took over and was promoted to master.
When the failed 192.168.2.100:6379 came back online, it did not regain its master role;
instead it rejoined the cluster as a slave of the new master 192.168.2.200:6380.
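If you want the recovered node to become a master again, one option (not part of the original test) is to trigger a manual failover from it once it has resynced as a slave; CLUSTER FAILOVER is issued on a slave and swaps the roles without losing slot coverage:
# redis-cli -h 192.168.2.100 -p 6379 cluster failover
OK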