
Redis yum installation and configuration redis master-slave

2025-01-17 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/02 report --

1. Installation

yum install epel-release.noarch -y

yum install redis -y

2. Configure master and slave

Master:

vim /etc/redis.conf    # modify the configuration file

bind 10.1.1.111    # modify the listening IP

requirepass 233233    # add a password

masterauth 233233

# Why add an authentication password on the master too? If the master dies and later comes back online, it becomes a slave and must authenticate against the new master. Without this password set in advance, it would fail authentication against the other password-protected servers, so it is configured ahead of time.

slave-priority 50    # raise the election priority, so that if the master dies this node has enough priority to be promoted at the next failover

Slave:

vim /etc/redis.conf    # modify the configuration file

bind 10.1.1.112    # modify the listening IP

requirepass 233233

# The slaves should also be given the same password as the master. Otherwise, if the master fails and 112 is promoted, 113 would still authenticate with 111's password, and the 112 and 113 machines could not connect to each other.

slaveof 10.1.1.111 6379    # specify the master's IP and port

masterauth 233233    # specify the master's password

The other slave (10.1.1.113) is configured the same way.
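Put together, the third slave's additions to /etc/redis.conf would look like this (a sketch, assuming 10.1.1.113 follows the exact same pattern as 112):

```
bind 10.1.1.113
requirepass 233233
masterauth 233233
slaveof 10.1.1.111 6379
```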

systemctl start redis    # start redis on all three hosts at the same time

redis-cli -h 10.1.1.111 -a 233233    # log in to the master server 10.1.1.111

10.1.1.111:6379> INFO replication    # view master-slave information
# Replication
role:master
connected_slaves:2
slave0:ip=10.1.1.112,port=6379,state=online,offset=1135,lag=0
slave1:ip=10.1.1.113,port=6379,state=online,offset=1135,lag=1
master_repl_offset:1135
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1134
10.1.1.111:6379>
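When scripting health checks against this output, the INFO replication text is easy to pick apart, since every non-comment line is a key:value pair. A minimal awk sketch (the sample here mirrors the output above; in practice you would pipe in `redis-cli ... INFO replication`, whose lines also carry trailing carriage returns to strip):

```shell
# Sample INFO replication text (in real use, capture redis-cli output instead).
info='role:master
connected_slaves:2
slave0:ip=10.1.1.112,port=6379,state=online,offset=1135,lag=0'

# Extract individual fields by key.
role=$(printf '%s\n' "$info" | awk -F: '$1=="role" {print $2}')
slaves=$(printf '%s\n' "$info" | awk -F: '$1=="connected_slaves" {print $2}')
echo "$role $slaves"   # master 2
```

A monitoring script can alert when connected_slaves drops below the expected count.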

Create a key value on the primary server

10.1.1.111:6379> set ID 001
OK
10.1.1.111:6379> get ID
"001"
10.1.1.111:6379>

Switch to the slave server to check

[root@cs112 ~]# redis-cli -h 10.1.1.112 -a 233233    # log in to slave server 10.1.1.112
10.1.1.112:6379> get ID    # view the ID key value
"001"
10.1.1.112:6379>

Command configuration master-slave (command configuration is automatically synchronized to the configuration file)

[root@web1 ~]# redis-cli -h 10.1.1.233
10.1.1.233:6379> slaveof 10.1.1.111 6379
OK
10.1.1.233:6379> config set masterauth 233233
OK
10.1.1.233:6379>

Log in to the master server to view the master-slave status

10.1.1.111:6379> INFO replication
# Replication
role:master
connected_slaves:3
slave0:ip=10.1.1.112,port=6379,state=online,offset=2602,lag=1
slave1:ip=10.1.1.113,port=6379,state=online,offset=2602,lag=1
slave2:ip=10.1.1.233,port=6379,state=online,offset=2602,lag=1
master_repl_offset:2602
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:2

# As you can see, there are now three slave servers

Redis master-slave replication related configuration

The following are some tunable parameters for redis master-slave replication; adjust them according to the actual environment.

slave-serve-stale-data yes    # whether to keep serving stale data to clients while out of sync with the master
slave-read-only yes           # slave nodes are read-only
repl-diskless-sync no         # whether to stream the RDB directly to multiple slave nodes at the same time (diskless sync)
repl-diskless-sync-delay 5    # delay before a diskless sync starts
repl-ping-slave-period 10     # interval for probing slave node status
repl-timeout 60               # node detection timeout
repl-disable-tcp-nodelay no   # whether to disable TCP_NODELAY
repl-backlog-size 1mb
slave-priority 100            # slave node priority, used when electing a new master in a sentinel scenario after the master node fails; the smaller the number, the higher the priority, and 0 means the node never takes part in the election
min-slaves-to-write 3         # the master node accepts write operations only when it can reach at least this many slave nodes
min-slaves-max-lag 10         # the master node rejects write operations when slave lag exceeds this many seconds

3. High availability

First pick one of the slave servers and raise its priority.

vim /etc/redis.conf    # modify the configuration on 112 to control the election

slave-priority 90    # lowered from the default of 100; the smaller the number, the higher the priority, and 0 does not participate in the election

systemctl restart redis    # restart

Configure the sentinel service on all three servers

vim /etc/redis-sentinel.conf

bind 10.1.1.112    # listening IP; it is recommended not to use 0.0.0.0, which will cause errors

sentinel monitor mymaster 10.1.1.111 6379 2    # set the master server IP
sentinel auth-pass mymaster 233233             # authentication for the master server

# The other two sentinels have the same configuration except for the IP
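Assembled, one node's /etc/redis-sentinel.conf would look roughly like this (a sketch for the 10.1.1.112 node; the port and the two timeout lines are the stock defaults, shown here as assumptions rather than values from the article):

```
bind 10.1.1.112
port 26379
sentinel monitor mymaster 10.1.1.111 6379 2
sentinel auth-pass mymaster 233233
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
```

The trailing 2 on the monitor line is the quorum: with three sentinels, two must agree before a failover starts.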

Once configured, start the sentinel service (systemctl start redis-sentinel)

How to check whether sentinel is running properly:

vim /etc/redis-sentinel.conf    # open sentinel's configuration file

# Check the last few lines. If sentinel is running normally, it will have discovered the other redis hosts and the other sentinel hosts on the LAN and appended their information there.

Test

Stop the redis service of the main server first.

redis-cli -h 10.1.1.112 -p 26379    # log in to the sentinel service

10.1.1.112:26379> SENTINEL masters    # view master server status
1)  1) "name"
    2) "mymaster"
    3) "ip"
    4) "10.1.1.113"
    5) "port"
    6) "6379"
    7) "runid"
    8) "7ee5fe0e808bd06638f0f4c365d95c7694c6770c"
    9) "flags"
   10) "master"

As you can see above, the master role has moved to the 113 host. If you open the other slave servers' configuration files, you will find that the line pointing at the 10.1.1.111 master has been rewritten to 113.

redis-cli -h 10.1.1.112 -a 233233    # log in to the 112 slave server

127.0.0.1:6379> info replication
# Replication
role:slave
master_host:10.1.1.113
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:1253
slave_priority:70
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# As shown above, slave server 112's master is now the 113 server

# In short, a failed master that comes back online takes the slave role, unless another election is triggered and its priority beats the other machines

A command to view sentinel status

[root@cs11 ~]# redis-cli -h 192.168.0.11 -a 233233 -p 26379 info | tail -1

# Log in on port 26379 (the port the sentinel runs on by default), run the info command, and take the last line

Master0:name=mymaster,status=ok,address=192.168.0.14:6379,slaves=2,sentinels=3

# This shows that the master is 192.168.0.14, with 2 slaves and 3 sentinels
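That one-line summary also parses cleanly in a monitoring script, since after the `Master0:` prefix it is just comma-separated key=value fields. A small sketch (the sample line is the one shown above):

```shell
# Sample sentinel summary line (in real use, capture `redis-cli ... info | tail -1`).
line='Master0:name=mymaster,status=ok,address=192.168.0.14:6379,slaves=2,sentinels=3'

# Split fields on commas and pull out values by key.
status=$(printf '%s' "$line" | tr ',' '\n' | awk -F= '$1=="status" {print $2}')
slaves=$(printf '%s' "$line" | tr ',' '\n' | awk -F= '$1=="slaves" {print $2}')
echo "$status $slaves"   # ok 2
```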

Configuration file explained

sentinel monitor mymaster 192.168.0.11 6379 2    # 192.168.0.11 and 6379 are the IP address and port of the master redis; 2 means two sentinels must detect an anomaly before it is judged a real failure; mymaster is the name of the host group and can be chosen freely

sentinel auth-pass mymaster 233233    # needed when redis access requires a password, i.e. when requirepass is configured in redis.conf

sentinel down-after-milliseconds mymaster 30000    # how long, in milliseconds, sentinel must observe a redis instance as abnormal before marking it down; lower it if the business needs sentinel to flag failures sooner

sentinel can-failover mymaster yes    # whether to initiate the failover mechanism after sentinel detects O_DOWN

sentinel parallel-syncs mymaster 1    # during a failover, the maximum number of slaves that may sync against the new master at the same time; the smaller the number, the longer the failover takes, but a larger number means more slave servers are unavailable because of replication. Setting this to 1 guarantees that only one slave at a time is unable to serve command requests

sentinel failover-timeout mymaster 900000    # if sentinel cannot complete the failover (the automatic master/slave switch on failure) within this time, the failover is considered failed

sentinel notification-script mymaster /root/notification.sh    # script executed when sentinel triggers a master/slave switch; it can be used to send notifications when the master goes down

4. Building a cluster

Here we will use one machine to simulate nine hosts, each redis instance on a different port; in a real environment the IPs would differ.

mkdir /data/700{1..9} -p    # create 9 folders to store configuration, logs, etc.

vim redis.conf    # first edit one template configuration file (to be cp'd around later) into the following

port 6379
bind 127.0.0.1
daemonize yes                        # run redis in the background
cluster-enabled yes                  # enable cluster mode
cluster-config-file nodes_6379.conf  # cluster configuration file, generated automatically on first start
cluster-node-timeout 8000            # request timeout; the default is 15 seconds, set as needed
appendonly yes                       # enable AOF persistence; every write request is appended to the appendonly.aof file
logfile "/data/6379/redis.log"       # log path
pidfile /var/run/redis_6379.pid      # pid path

# The port and paths still carry the template's 6379, which will be rewritten with sed below.
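The loop that stamps out the nine per-port configs was garbled in the original text; a minimal sketch of what it presumably did, rewriting every 6379 in the template to each node's own port (run here against a scratch directory so it is self-contained; the article's layout would be /data/7001 ... /data/7009):

```shell
# Sketch (assumption about the lost loop): copy the 6379 template into each
# node directory, rewriting port, cluster-config-file, and log path via sed.
base=$(mktemp -d)
printf 'port 6379\ncluster-config-file nodes_6379.conf\nlogfile "/data/6379/redis.log"\n' > "$base/redis.conf"
for i in $(seq 1 9); do
  mkdir -p "$base/700$i"
  sed "s/6379/700$i/g" "$base/redis.conf" > "$base/700$i/redis.conf"
done
head -1 "$base/7003/redis.conf"   # port 7003
```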

./redis-cli -c -p 7011 cluster replicate d7ff265c7293735c5bbf9c5ef34d2bc54fe1a3ea    # make 7011 a replica of the node with this ID

./redis-cli -c -p 7001 cluster nodes    # view the nodes with this command

# Viewing the nodes, you can see that 7011 has become a slave, followed by the ID of 7010, meaning 7011 is 7010's slave.

Possible errors and solutions

Delete the primary node

Because the primary node holds slots, deleting it outright would lose data, so we must first move its slots to other primary nodes.

./redis-cli --cluster reshard 127.0.0.1:7001    # start the resharding program

How many slots do you want to move (from 1 to 16384)? 2000
# move 2000 of the master's slots
What is the receiving node ID? 4ef4f52ddff10e66eff50f40d765bb390ab935dd
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: dfe430fa679ef9a38aa5e072df4c957b3ed9a92d    # take slots from only this node's ID; here we want to empty 7003's slots
Source node #2: done
Ready to move 2000 slots.
  Source nodes:
    M: dfe430fa679ef9a38aa5e072df4c957b3ed9a92d 127.0.0.1:7003
       slots: (0 slots) master    # slots remaining after the move completes
  Destination node:
    M: 4ef4f52ddff10e66eff50f40d765bb390ab935dd 127.0.0.1:7002
       slots:[0-3115],[5782-8857],[10181-11698],[12181-12953] (8483 slots) master    # slots after the move completes
       4 additional replica(s)
  Resharding plan:
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# Use the cluster nodes command to confirm that the primary node to be deleted has no slots left before performing the delete operation below
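That check can also be scripted: in `cluster nodes` output, a master that still owns slots lists slot ranges after the "connected" field, so an empty tail means it is safe to delete. A sketch against a sample line (the line below is an assumed example of the format, not output from the article's cluster):

```shell
# Sample `cluster nodes` line for an emptied master (assumed example):
# id, address, flags, master-id, ping-sent, pong-recv, config-epoch, link-state[, slot ranges...]
node='dfe430fa679ef9a38aa5e072df4c957b3ed9a92d 127.0.0.1:7003@17003 master - 0 1600000000000 3 connected'

# Count any fields after the 8th (link-state); those would be slot ranges.
slots=$(printf '%s\n' "$node" | awk '{for (i=9; i<=NF; i++) print $i}' | wc -l)
test "$slots" -eq 0 && echo "safe to delete"
```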

./redis-cli --cluster del-node 127.0.0.1:7003 dfe430fa679ef9a38aa5e072df4c957b3ed9a92d

# delete node 7003: pass ip:port plus the node ID

Delete slave node

Delete slave node 7009

./redis-cli --cluster del-node 127.0.0.1:7009 088ace239a35fbbbb8f7fe68618331383c762c6b

# Since node 7009 is only a slave and holds no slots, it can be deleted directly

# Viewing with cluster nodes, you can confirm that the 7003 and 7009 nodes have been deleted
