First of all, thank you to "Barry Highway", whose guidance opened up my understanding of Sentinel. Thanks again.
Redis Sentinel (sentinel)
Sentinel is suited to non-clustered Redis environments, such as a Redis master-slave setup.
I will not experiment with a Sentinel cluster here; there are plenty of configuration guides for it online.
The core idea of a Sentinel cluster is to eliminate the Sentinel itself as a single point of failure.
Personally, I think that if you have the budget for a Sentinel cluster, you might as well build a Redis Cluster directly.
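For reference only, a minimal sketch of the Sentinel-cluster idea (three hypothetical sentinel hosts, not part of this experiment): each sentinel carries the same monitor line, with the quorum raised to 2 so that at least two sentinels must agree before a failure counts as real:
// in sentinel.conf on each of the three sentinel hosts (hypothetical)
sentinel monitor mymaster 192.168.2.100 6379 2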
Environment description: (master-slave structure)
Master:192.168.2.100:6379 # single node sentinel runs on this environment
Slave:192.168.2.200:6379
Slave:192.168.2.201:6379
Redis installation path: /usr/local/redis-3.0.6-6379
Key configuration of redis.conf: (basic configuration is not discussed here. Reference for master-slave configuration: https://blog.51cto.com/13690439/2118890)
Master:
# vim redis.conf
requirepass "123456" // password required to connect to this instance
masterauth "123456" // password this instance presents when authenticating to a master (used after a failover demotes it)
Slave:
# vim etc/redis.conf
slaveof 192.168.2.100 6379 // point this instance at the master's IP and port
requirepass "123456" // all three instances use the same password so that, after a failover, replication (and the slave0 line in info replication) still works
masterauth "123456" // password presented when authenticating to the master
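Before bringing Sentinel in, it is worth confirming that replication itself works; a quick check (same redis-cli flags as used later in this article):
# redis-cli -h 192.168.2.200 -p 6379 -a 123456 info replication
// role:slave together with master_link_status:up means the slave is following the master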
Sentinel key configuration: (sentinel.conf)
# vim sentinel.conf
port 26379 // sentinel listening port
sentinel monitor mymaster 192.168.2.100 6379 1
// mymaster is a custom master-name for the monitored IP:port; the trailing 1 is the quorum: how many sentinels must agree before a fault is treated as a real fault
sentinel down-after-milliseconds mymaster 3000
// Sentinel sends PING and waits for PONG; if no reply arrives within this time, the node is marked as failed. In milliseconds.
sentinel failover-timeout mymaster 10000
// failover timeout; if a failover takes longer than this, it is considered failed. In milliseconds.
sentinel auth-pass mymaster 123456 // password Sentinel uses to connect to the master
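Assembled from the lines above, the entire single-sentinel sentinel.conf used here is just:
port 26379
sentinel monitor mymaster 192.168.2.100 6379 1
sentinel down-after-milliseconds mymaster 3000
sentinel failover-timeout mymaster 10000
sentinel auth-pass mymaster 123456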
Start the redis of master:
# redis-server redis.conf
Start the Sentinel: (open a new xshell session)
# redis-sentinel sentinel.conf
Start the redis of slave:
# redis-server redis.conf
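If you would rather not keep separate terminals open, both redis-server and redis-sentinel can run in the background; assuming the stock configuration files, add to redis.conf and sentinel.conf:
daemonize yes // run the process as a daemon instead of in the foreground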
Output of the Sentinel:
# Sentinel runid is e49df1197325687d9a40508c00f466a8c6e596db
// Sentinel ID
# +monitor master mymaster 192.168.2.100 6379 quorum 1
// +monitor: a master monitor has been added, followed by the master's information
* +slave slave 192.168.2.200:6379 192.168.2.200 6379 @ mymaster 192.168.2.100 6379
// +slave: a slave monitor has been added, followed by the slave's information
* +slave slave 192.168.2.201:6379 192.168.2.201 6379 @ mymaster 192.168.2.100 6379
// +slave: a slave monitor has been added, followed by the slave's information
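Besides reading the log, you can query the sentinel directly about what it is monitoring (standard redis-cli sentinel subcommands; output omitted here):
# redis-cli -p 26379 sentinel master mymaster // state of the monitored master
# redis-cli -p 26379 sentinel slaves mymaster // the slaves it has detected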
Log in to redis on master:
# redis-cli -h 192.168.2.100 -p 6379 -a 123456
192.168.2.100:6379> set name zhangsan
OK
192.168.2.100:6379> set age 26
OK
192.168.2.100:6379> set home beijing
OK
192.168.2.100:6379> keys *
1) "name"
2) "home"
3) "age"
Log in to redis on a slave: (log in on both slaves to check; data synchronization is normal)
# redis-cli -h 192.168.2.200 -p 6379 -a 123456
192.168.2.200:6379> keys *
1) "name"
2) "home"
3) "age"
Stop the redis of master: (observe the output of the Sentinel)
# ps -ef | grep redis
root 11467 1 0 17:12 ? 00:00:00 redis-server *:6379
root 11473 4650 0 17:12 pts/1 00:00:00 redis-sentinel *:26379 [sentinel]
root 11513 4339 0 17:16 pts/0 00:00:00 grep --color=auto redis
# kill 11467
Output of the Sentinel:
# +sdown master mymaster 192.168.2.100 6379
// +sdown: the master is subjectively down, followed by the master's information
# +odown master mymaster 192.168.2.100 6379 #quorum 1
// +odown: enough sentinels (the quorum of 1) agree, so the master is objectively down
# +new-epoch 14
# +try-failover master mymaster 192.168.2.100 6379
// the failover is starting
# +vote-for-leader e49df1197325687d9a40508c00f466a8c6e596db 14
// the voting sentinel's ID; since there is only one sentinel, it elects itself as leader
# +elected-leader master mymaster 192.168.2.100 6379
// the leader that will carry out the failover has been elected
# +failover-state-select-slave master mymaster 192.168.2.100 6379
// start selecting a slave to replace the failed master
# +selected-slave slave 192.168.2.200:6379 192.168.2.200 6379 @ mymaster 192.168.2.100 6379
// selection result: 192.168.2.200:6379 is chosen to replace the master
* +failover-state-send-slaveof-noone slave 192.168.2.200:6379 192.168.2.200 6379 @ mymaster 192.168.2.100 6379
// SLAVEOF NO ONE is sent to the chosen slave to confirm the selection
* +failover-state-wait-promotion slave 192.168.2.200:6379 192.168.2.200 6379 @ mymaster 192.168.2.100 6379
// waiting for the chosen node to be promoted to master
# +promoted-slave slave 192.168.2.200:6379 192.168.2.200 6379 @ mymaster 192.168.2.100 6379
// the chosen node has been successfully promoted to master
# +failover-state-reconf-slaves master mymaster 192.168.2.100 6379
// reconfiguring the remaining slaves to follow the new master
* +slave-reconf-sent slave 192.168.2.201:6379 192.168.2.201 6379 @ mymaster 192.168.2.100 6379
// the SLAVEOF command pointing at the new master has been sent to 192.168.2.201:6379
* +slave-reconf-inprog slave 192.168.2.201:6379 192.168.2.201 6379 @ mymaster 192.168.2.100 6379
// 192.168.2.201:6379 is synchronizing with the new master
* +slave-reconf-done slave 192.168.2.201:6379 192.168.2.201 6379 @ mymaster 192.168.2.100 6379
// 192.168.2.201:6379 has finished synchronizing with the new master
# +failover-end master mymaster 192.168.2.100 6379
// the failover is complete
# +switch-master mymaster 192.168.2.100 6379 192.168.2.200 6379
// the master has switched from 192.168.2.100 to 192.168.2.200
* +slave slave 192.168.2.201:6379 192.168.2.201 6379 @ mymaster 192.168.2.200 6379
// the other slave now points at the new master
* +slave slave 192.168.2.100:6379 192.168.2.100 6379 @ mymaster 192.168.2.200 6379
// the failed old master is registered as a slave of the new master
# +sdown slave 192.168.2.100:6379 192.168.2.100 6379 @ mymaster 192.168.2.200 6379
// 192.168.2.100:6379 is down, waiting to be brought back
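At this point you can confirm the switch from the sentinel itself, without logging in to any redis node (expected output given the failover above):
# redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "192.168.2.200"
2) "6379"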
Restart the redis of the old master: (watch the Sentinel's output)
# redis-server redis.conf
Output of the Sentinel:
# -sdown slave 192.168.2.100:6379 192.168.2.100 6379 @ mymaster 192.168.2.200 6379
// -sdown: the failed node is reachable again
* +convert-to-slave slave 192.168.2.100:6379 192.168.2.100 6379 @ mymaster 192.168.2.200 6379
// the failed old master has been converted to a slave; its master is now 192.168.2.200:6379
Now let's look at the status of the master and slave:
View on 192.168.2.100:
# redis-cli -h 192.168.2.100 -p 6379 -a 123456
192.168.2.100:6379> info replication
# Replication
role:slave
master_host:192.168.2.200
master_port:6379
master_link_status:up
192.168.2.100:6379> set ab cd
(error) READONLY You can't write against a read only slave.
192.168.2.100:6379> get name
"zhangsan"
View on 192.168.2.201:
# redis-cli -h 192.168.2.201 -p 6379 -a 123456
192.168.2.201:6379> info replication
# Replication
role:slave
master_host:192.168.2.200
master_port:6379
master_link_status:up
192.168.2.201:6379> set ab cd
(error) READONLY You can't write against a read only slave.
192.168.2.201:6379> get name
"zhangsan"
View on 192.168.2.200:
# redis-cli -h 192.168.2.200 -p 6379 -a 123456
192.168.2.200:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.2.201,port=6379,state=online,offset=24990,lag=1
slave1:ip=192.168.2.100,port=6379,state=online,offset=24990,lag=0
192.168.2.200:6379> set ab cd
OK
192.168.2.200:6379> keys *
1) "name"
2) "home"
3) "age"
4) "ab"
Summary: high availability for the master-slave environment is achieved through Sentinel.
After the master goes down, Sentinel automatically promotes one of the slaves to master, and the data comes through the switch intact.
However, we also found that keys can no longer be written through the original master, yet clients configured with the old address will still try to connect to it.
At this point, some rules need to be set in the database connection pool.
Given my limited ability at present, I cannot explain connection pooling in depth; please forgive me.
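That said, the underlying idea is simple: instead of hardcoding the master's address, a client asks Sentinel who the current master is and connects there; Sentinel-aware client libraries do the equivalent internally. A rough shell sketch of that discovery pattern (the ab/cd key-value pair is just an example):
# MASTER=$(redis-cli -p 26379 --raw sentinel get-master-addr-by-name mymaster)
# HOST=$(echo "$MASTER" | sed -n 1p)
# PORT=$(echo "$MASTER" | sed -n 2p)
# redis-cli -h "$HOST" -p "$PORT" -a 123456 set ab cd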