Construction of Redis Master-Slave and Sentinel Cluster (2)

2025-02-28 Update From: SLTechnology News&Howtos (Database)


I. Preface

The master-slave replication mechanism of Redis is very similar to that of MySQL, and the sentinel mechanism that solves the Redis single point of failure plays much the same role as MHA does for MySQL. When learning Redis, we can therefore transfer what we already know about MySQL, which helps us quickly grasp Redis replication and the construction of a Redis Sentinel cluster.

A sentinel, as the name suggests, keeps watch. Sentinels have three jobs: monitoring, notification, and automatic failover. A sentinel monitors the Redis master; once the master appears to be down, it notifies the slaves to get ready, and once the master is confirmed down within the configured time window, one of the slaves is promoted to become the new master.

This raises a question: is a single sentinel not too powerful? If, during the monitoring period, it is the sentinel itself rather than the master that fails, we get a false positive. We therefore need a mechanism to constrain the sentinels, and that mechanism is the quorum (legal voting) mechanism: the sentinels vote to decide whether a "master is down" report is reliable, or merely the symptom of a failed sentinel. Since a vote is needed, one sentinel is clearly not enough. Two will not work either: suppose one sentinel fails while the master is healthy; that sentinel reports the master as down, the other does not, and we are deadlocked with no majority. For a vote to produce a correct and reliable answer, more than two sentinels are required, so it is generally recommended to deploy at least three.
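The majority rule above can be made concrete with a tiny arithmetic sketch. The `majority` helper below is hypothetical (not part of Redis); it only illustrates why two sentinels cannot tolerate a single failure while three can.

```shell
# Hypothetical helper (not a Redis command): the smallest majority among N sentinels.
majority() { echo $(( $1 / 2 + 1 )); }

majority 2   # prints 2 -> if one of two sentinels dies, a majority is never reachable
majority 3   # prints 2 -> one sentinel can fail and the remaining two still agree
majority 5   # prints 3
```

This is also why an odd number of sentinels is the usual recommendation: going from 3 to 4 raises the majority from 2 to 3 without tolerating any additional failure.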

II. Experimental Environment

Description: the system is CentOS 7.3. Because the experiment runs on virtual machines, the three slave nodes also run the Sentinel service to save resources; of course, you can also run the sentinels as a separate cluster.

node1  192.168.0.51  master
node2  192.168.0.52  slave + sentinel
node3  192.168.0.53  slave + sentinel
node4  192.168.0.54  slave + sentinel

III. Experimental Configuration

1. Initialize the configuration

# node1 /etc/hosts modification (make similar changes on the other nodes so the machines can reach each other by hostname)

127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4 node1
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6 node1
192.168.0.51 node1
192.168.0.52 node2
192.168.0.53 node3
192.168.0.54 node4

# modify hostname (takes effect immediately; run each command on its own node)

hostnamectl set-hostname node1   # on node1
hostnamectl set-hostname node2   # on node2
hostnamectl set-hostname node3   # on node3
hostnamectl set-hostname node4   # on node4

# View hostname

[root@localhost ~]# hostnamectl
   Static hostname: node1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 3d8bf6a9bfb24fae9bedcb9cfc6c5960
           Boot ID: 75905236f9eb4d91ade64f99a690d329
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-514.el7.x86_64
      Architecture: x86-64

# Note: log in to the terminal again and the command prompt will show the modified hostname

# time synchronization (all four nodes should be synchronized)

[root@node1 ~]# ntpdate 172.16.0.1
18 Jul 22:45:00 ntpdate[10418]: step time server 172.16.0.1 offset 0.708020 sec

# in addition, please set the firewall rules by yourself, and make sure that selinux is turned off

2. Install redis and configure master-slave replication

# download the redis rpm package

redis-3.2.3-1.el7.x86_64.rpm

# install redis (all 4 nodes)

yum install -y redis-3.2.3-1.el7.x86_64.rpm

# back up the configuration file (all 4 nodes)

cp /etc/redis.conf{,.bak}

# modify the configuration file (all 4 nodes)

vim /etc/redis.conf
bind 192.168.0.51    # change to each node's own IP

# enable slave function on slave nodes (node2, node3, node4)

vim /etc/redis.conf
################# REPLICATION #################
slaveof 192.168.0.51 6379

# start redis-server (all 4 nodes are started)

redis-server /etc/redis.conf

# check whether the service is started (demonstrated on node2; verify the other nodes yourself)

[root@node2 ~]# ss -tln
State      Recv-Q Send-Q      Local Address:Port     Peer Address:Port
LISTEN     0      128          192.168.0.52:6379     *:*
LISTEN     0      128                     *:22       *:*
LISTEN     0      100             127.0.0.1:25       *:*
LISTEN     0      128                    :::22      :::*
LISTEN     0      100                   ::1:25      :::*

# Log in to redis on the host

[root@node1 ~]# redis-cli -h 192.168.0.51
192.168.0.51:6379> KEYS *
(empty list or set)

# set a pair of key values for synchronous testing

192.168.0.51:6379> SET test 'amazing'
OK
192.168.0.51:6379> GET test
"amazing"

# Log in to the other three slave servers

redis-cli -h 192.168.0.52
redis-cli -h 192.168.0.53
redis-cli -h 192.168.0.54

# testing on three nodes

# node2: view the key-value pair; it has been synchronized
192.168.0.52:6379> KEYS *
1) "test"
192.168.0.52:6379> GET test
"amazing"

# node3: view the key-value pair; it has been synchronized
192.168.0.53:6379> KEYS *
1) "test"

# node4: view the key-value pair; it has been synchronized
192.168.0.54:6379> KEYS *
1) "test"

This confirms that redis master-slave replication is basically working.

# check the master-slave information on node1, and you can also see that the master-slave configuration has been implemented.

192.168.0.51:6379> INFO replication
# Replication
role:master
connected_slaves:3
slave0:ip=192.168.0.52,port=6379,state=online,offset=732,lag=0
slave1:ip=192.168.0.53,port=6379,state=online,offset=732,lag=0
slave2:ip=192.168.0.54,port=6379,state=online,offset=732,lag=0
master_repl_offset:732
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:731
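For scripted health checks, the INFO replication output is easy to parse since each line is a colon-separated `field:value` pair. A minimal sketch, using a captured sample instead of a live server (on a real node you would pipe `redis-cli -h 192.168.0.51 INFO replication` into the same filter):

```shell
# Sample lines as captured above; a live query would be:
#   redis-cli -h 192.168.0.51 INFO replication
sample='# Replication
role:master
connected_slaves:3'

# Extract the role and the slave count (fields are colon-separated).
role=$(printf '%s\n' "$sample" | awk -F: '/^role:/ {print $2}')
slaves=$(printf '%s\n' "$sample" | awk -F: '/^connected_slaves:/ {print $2}')
echo "role=$role slaves=$slaves"
```

One caveat for real use: redis-cli terminates INFO lines with CRLF, so a production script may want to strip `\r` (e.g. with `tr -d '\r'`) before comparing values.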

3. Sentinel cluster implementation

Next, configure sentinel on the three slave nodes for failover.

# all three slave nodes need to be configured as follows

cp /etc/redis-sentinel.conf{,.bak}
vim /etc/redis-sentinel.conf
sentinel monitor mymaster 192.168.0.51 6379 2    # monitor the master; a quorum of 2 sentinels must agree it is down
sentinel down-after-milliseconds mymaster 5000   # consider the master subjectively down after 5 s without a valid reply
sentinel failover-timeout mymaster 18000         # failover timeout: 18 s

# start the service

redis-sentinel /etc/redis-sentinel.conf

# check whether the service starts normally

[root@node2 ~]# ss -tln    # similar on node3 and node4
State      Recv-Q Send-Q      Local Address:Port     Peer Address:Port
LISTEN     0      128                     *:26379    *:*
LISTEN     0      128          192.168.0.52:6379     *:*
LISTEN     0      128                     *:22       *:*
LISTEN     0      100             127.0.0.1:25       *:*
LISTEN     0      128                    :::26379   :::*
LISTEN     0      128                    :::22      :::*
LISTEN     0      100                   ::1:25      :::*

# simulated failure

# kill the node1 redis process
[root@node1 ~]# pkill redis
[root@node1 ~]# ss -tln
State      Recv-Q Send-Q      Local Address:Port     Peer Address:Port
LISTEN     0      128                     *:111      *:*
LISTEN     0      128                     *:22       *:*
LISTEN     0      100             127.0.0.1:25       *:*
LISTEN     0      128                    :::111     :::*
LISTEN     0      128                    :::22      :::*
LISTEN     0      100                   ::1:25      :::*

# Log in to node3 and check the replication info: node3 has become master, so failover succeeded

[root@node3 ~]# redis-cli -h 192.168.0.53
192.168.0.53:6379> INFO Replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.0.54,port=6379,state=online,offset=23900,lag=0
slave1:ip=192.168.0.52,port=6379,state=online,offset=23900,lag=0
master_repl_offset:24177
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:24176
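After a failover, clients and scripts should ask a sentinel for the current master instead of hard-coding an address. The `SENTINEL get-master-addr-by-name` command returns the IP and port on two lines; the sketch below parses a captured reply (on a live node the query would be `redis-cli -h 192.168.0.52 -p 26379 SENTINEL get-master-addr-by-name mymaster`, where the host and port here are assumptions matching this lab setup).

```shell
# Captured reply from a sentinel after the failover above; a live query would be:
#   redis-cli -h 192.168.0.52 -p 26379 SENTINEL get-master-addr-by-name mymaster
reply='192.168.0.53
6379'

# The reply is "IP<newline>PORT"; let word splitting break it into two fields.
set -- $reply
ip=$1; port=$2
echo "current master: $ip:$port"
```

This is how sentinel-aware clients discover the promoted master automatically, which is the whole point of the failover demonstrated above.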

# simulate the failure again

# kill the node3 redis process
[root@node3 ~]# pkill redis
[root@node3 ~]# ss -tln
State      Recv-Q Send-Q      Local Address:Port     Peer Address:Port
LISTEN     0      128                     *:22       *:*
LISTEN     0      100             127.0.0.1:25       *:*
LISTEN     0      128                    :::22      :::*
LISTEN     0      100                   ::1:25      :::*

# Log in to node4 and check the information: node4 is now master and node2 is its slave, leaving one master and one slave

[root@node4 ~]# redis-cli -h 192.168.0.54
192.168.0.54:6379> INFO Replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.52,port=6379,state=online,offset=10508,lag=0
master_repl_offset:10508
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:10507

# restart the redis service on node1 and node3 to return to a normal state

[root@node1 ~]# redis-server /etc/redis.conf
[root@node3 ~]# redis-server /etc/redis.conf

# node3 has become the master node again, and we are back to one master and three slaves

[root@node3 ~]# redis-cli -h 192.168.0.53
192.168.0.53:6379> INFO Replication
# Replication
role:master
connected_slaves:3
slave0:ip=192.168.0.51,port=6379,state=online,offset=8008,lag=0
slave1:ip=192.168.0.52,port=6379,state=online,offset=8146,lag=0
slave2:ip=192.168.0.54,port=6379,state=online,offset=7869,lag=1
master_repl_offset:8146
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:8145

(PS: the third article in this Redis series will cover Redis performance optimization.)


© 2024 shulou.com SLNews company. All rights reserved.
