2025-04-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article mainly introduces how to deploy clusters in Redis. In daily operation, many people have doubts about this topic, so the editor has consulted a variety of materials and put together simple, easy-to-follow methods of operation, hoping they will help answer those doubts. Next, please follow the editor and study!
1. Redis master-slave architecture
1.1 principle of master-slave replication
The slave server connects to the master server and sends the PSYNC command
After receiving the PSYNC command, the master server starts executing the BGSAVE command to generate an RDB file, and uses a buffer to record all write commands executed from then on
After the master server finishes BGSAVE, it sends the snapshot file to all slave servers, and continues to record the write commands executed while the snapshot is being sent.
After receiving the snapshot file, the slave server discards all of its old data and loads the received snapshot
After the snapshot has been sent, the master server begins to send the write commands in the buffer to the slave server.
After the slave server finishes loading the snapshot, it starts accepting command requests and executes the write commands from the master server's buffer; (initialization of the slave server is complete)
From then on, each time the master server executes a write command, it sends the same write command to the slave server, which receives and executes it (steady-state operation after slave initialization)
When the connection between master and slave is broken for some reason, the slave can automatically reconnect to the master. If the master receives concurrent connection requests from multiple slaves, it performs the persistence only once rather than once per connection, and then sends that one snapshot to all of the concurrently connected slaves.
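The sequence above can be sketched as a toy simulation in Python. The `Master`/`Slave` classes and their data structures are illustrative stand-ins for the RDB snapshot and the replication buffer, not Redis internals:

```python
# Toy model of a full sync; names and structures are illustrative, not Redis internals.

class Master:
    def __init__(self):
        self.data = {}            # live dataset
        self.sync_buffer = None   # records writes made while a snapshot is in flight

    def write(self, key, value):
        self.data[key] = value
        if self.sync_buffer is not None:   # buffer writes during BGSAVE/transfer
            self.sync_buffer.append((key, value))

    def start_bgsave(self):
        self.sync_buffer = []              # start buffering subsequent writes
        return dict(self.data)             # snapshot, like BGSAVE's RDB file

class Slave:
    def __init__(self):
        self.data = {"stale": "old"}

    def load_snapshot(self, snapshot):
        self.data = dict(snapshot)         # discard old data, load the snapshot

    def apply_buffer(self, buffered):
        for key, value in buffered:        # replay the master's buffered writes
            self.data[key] = value

master, slave = Master(), Slave()
master.write("a", 1)
snapshot = master.start_bgsave()           # PSYNC triggers BGSAVE
master.write("b", 2)                       # a write arriving mid-transfer is buffered
slave.load_snapshot(snapshot)
slave.apply_buffer(master.sync_buffer)
master.sync_buffer = None                  # initialization done; steady state begins
assert slave.data == master.data           # the slave has caught up
print(slave.data)
```

After this point, the steady state is simply the master forwarding each write command as it executes it.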
1.2. Advantages and disadvantages of master-slave replication
Advantages:
Master-slave replication is supported; the master automatically synchronizes data to the slaves, allowing read-write separation.
To offload read pressure from the master, slave servers can provide read-only service to clients, while writes must still go through the master.
A slave can also accept connection and synchronization requests from other slaves, which effectively offloads synchronization pressure from the master.
The master serves slaves in a non-blocking manner, so clients can still submit queries or modifications during master-slave synchronization.
Slaves also synchronize in a non-blocking manner; if a client submits a query during synchronization, Redis returns the pre-synchronization data.
Disadvantages:
Redis master-slave replication has no automatic fault tolerance or recovery: when the master or a slave goes down, some front-end read/write requests fail, and you must wait for the machine to restart or manually switch the front end to another IP to recover.
If the master goes down before some of its data has been synchronized to the slaves, switching IPs introduces data inconsistency, which reduces the availability of the system.
Online expansion is difficult; scaling becomes very complicated once the cluster reaches its capacity limit.
1.3 Building the Redis master-slave architecture and configuring the slave nodes
Here 6380 is used as the master node, and 6381 and 6382 as slave nodes.
```shell
# 1. Create a directory conf/master-slave-cluster to store the cluster configuration
mkdir -p conf/master-slave-cluster
# Create data directories for each instance
mkdir -p /usr/local/redis/data/6380
mkdir -p /usr/local/redis/data/6381
mkdir -p /usr/local/redis/data/6382
# Copy a redis.conf file and rename it redis-6381.conf

# 2. Modify the relevant configuration to the following values:
# modify the port number
port 6381
# write the pid process number to the pid file
pidfile /var/run/redis_6381.pid
# specify the log file
logfile "6381.log"
# specify the data directory
dir /usr/local/redis-5.0.3/data/6381
# comment out bind
# bind 127.0.0.1
# (bind binds an ip of a local network card; with multiple cards, multiple ips can be
# listed. It controls which local ip clients may connect through. On a private network
# bind is generally unnecessary and can be commented out)

# 3. Configure master-slave replication (6380 is the master and needs none of the
# following; configure only 6381 and 6382 with these attributes)
# replicate from the local 6380 instance; before Redis 5.0 use slaveof instead of replicaof
replicaof xxx.xxx.xxx.xxx 6380
# make the slave node read-only
replica-read-only yes

# 4. Start the slave node
redis-server redis-6381.conf
# 5. Connect to the slave node
redis-cli -p 6381
# 6. Write data on the 6380 instance and test whether the 6381 instance synchronizes
#    the newly modified data promptly
# 7. Configure a 6382 slave node in the same way
```
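As a sanity check on the per-instance configuration above, the three files can also be rendered programmatically. A minimal sketch in Python, assuming the ports, paths, and master address from the article's example (adjust them to your environment):

```python
# Render redis-638x.conf contents for the master-slave setup described above.
# Paths and the master address mirror the article's example and are assumptions.

MASTER_PORT = 6380
SLAVE_PORTS = [6381, 6382]

def render_conf(port: int, master_host: str = "127.0.0.1") -> str:
    lines = [
        f"port {port}",
        f"pidfile /var/run/redis_{port}.pid",
        f'logfile "{port}.log"',
        f"dir /usr/local/redis/data/{port}",
        "# bind 127.0.0.1   (commented out: allow non-local clients)",
    ]
    if port != MASTER_PORT:
        # replicaof for Redis >= 5.0; older versions use "slaveof"
        lines.append(f"replicaof {master_host} {MASTER_PORT}")
        lines.append("replica-read-only yes")
    return "\n".join(lines) + "\n"

for port in [MASTER_PORT, *SLAVE_PORTS]:
    print(f"# --- redis-{port}.conf ---")
    print(render_conf(port))
```

Only the slave configurations carry the `replicaof` and `replica-read-only` directives; the master file needs neither.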
1.4 Viewing the master-slave cluster of redis and verifying the result
Master operation
```shell
# Connect to the master
[root@ip redis]# src/redis-cli -p 6380
127.0.0.1:6380> auth xiu123
OK
127.0.0.1:6380> set name "zhangsan"
OK
127.0.0.1:6380> get name
"zhangsan"
```
Slave operation
```shell
# Connect to the slave
src/redis-cli -p 6381
127.0.0.1:6381> get name
"zhangsan"
# Slave nodes can only read
127.0.0.1:6381> set name lisi
(error) READONLY You can't write against a read only replica.
```
1.5. Partial data replication
When master and slave disconnect and reconnect, the entire dataset is usually re-replicated. Starting from redis 2.8, however, redis uses the PSYNC command, which supports partial replication of the master's data: after the network connection is broken and re-established, slave and master can copy only the missing part of the data (resuming from a breakpoint).
The master creates a cache queue in its memory for the data being replicated, caching recent writes; the master and all of its slaves maintain the replication offset and the master's process id. When the network connection is broken, the slave asks the master to continue the unfinished replication starting from the recorded offset. If the master's process id has changed, or if the slave's offset is so old that it is no longer in the master's cache queue, a full data copy is performed instead.
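The decision the master makes on reconnect can be sketched as follows. This is a simplified model: `backlog_start`/`backlog_end` stand for the master's cache queue window, and the ids and offsets are illustrative (recent Redis versions compare replication ids rather than process ids, but the shape of the decision is the same):

```python
# Simplified PSYNC decision: partial resync only if the master is the same instance
# and the slave's offset is still covered by the master's backlog window.

def psync_decision(master_id: str, backlog_start: int, backlog_end: int,
                   slave_master_id: str, slave_offset: int) -> str:
    if slave_master_id != master_id:
        return "FULLRESYNC"   # master id changed: history cannot be trusted
    if not (backlog_start <= slave_offset <= backlog_end):
        return "FULLRESYNC"   # offset too old, fell out of the cache queue
    return "CONTINUE"         # partial resync from slave_offset

# Slave reconnects while its offset is still inside the backlog window:
print(psync_decision("m1", 1000, 5000, "m1", 4200))   # partial resync
# Master restarted under a new id:
print(psync_decision("m2", 1000, 5000, "m1", 4200))   # full resync
# Offset fell behind the backlog window:
print(psync_decision("m1", 3000, 5000, "m1", 1200))   # full resync
```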
Master-slave replication (partial replication, breakpoint continuation) flowchart:
If there are many slave nodes, then to mitigate a master-slave replication storm (many slave nodes replicating from the master at the same time, putting excessive pressure on it), you can use a tiered architecture in which some slave nodes synchronize from other slave nodes (which in turn synchronize from the master).
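The benefit of that tiered layout is easy to quantify in a small Python sketch (the node counts and branching factor are illustrative):

```python
# Fan-out comparison: flat replication vs a two-level replication tree.

def master_fanout_flat(n_slaves: int) -> int:
    # every slave syncs directly from the master
    return n_slaves

def master_fanout_tree(n_slaves: int, branching: int) -> int:
    # the master feeds only `branching` first-level slaves;
    # the remaining slaves sync from those first-level slaves instead
    return min(n_slaves, branching)

print(master_fanout_flat(6))     # 6 sync streams hit the master
print(master_fanout_tree(6, 2))  # only 2 do; the other 4 load the first-level slaves
```

The trade-off is extra replication lag on the second level, in exchange for a much smaller synchronization load on the master.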
2. Redis Sentinel High availability Architecture
Sentinel is a special redis service that does not itself provide read/write services; it is mainly used to monitor redis instance nodes. The role of Sentinel is to monitor the operation of the Redis system, and its functions include the following two:
(1) monitor whether the master server and slave server are running properly.
(2) when the master server fails, a slave server is automatically promoted to master.
2.1. The way Sentinels work
Under the Sentinel architecture, the client first asks Sentinel for the address of the redis master node, and from then on accesses the master directly. When the master changes, Sentinel detects it immediately and notifies the client of the new master's address.
Sentinel periodically runs the info command against the masters it monitors to obtain the latest master-slave topology, and periodically sends ping heartbeats to all redis nodes. If a sentinel detects that a master is not responding, it tells the other sentinels that it subjectively considers the master down. If the number of sentinels in the cluster agreeing that the master is offline reaches the configured quorum, the master is considered objectively down.
Before taking the master offline, one sentinel in the cluster must be chosen to perform the failover; this step is called leader election. The elected sentinel then picks a suitable node from among the master's slaves as the new master, and tells the other slaves to resynchronize from it.
If not enough Sentinel processes agree that the master is offline, the objective-down state of the master is removed. If the master again returns valid replies to a Sentinel process's PING commands, the subjective-down state is removed as well.
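The sdown/odown distinction boils down to a quorum count, which can be sketched in a few lines of Python (the vote counts are illustrative; the quorum of 2 matches the `sentinel monitor` configuration used later in this article):

```python
# Subjective vs objective down: one sentinel's opinion is sdown; the master is
# odown only when at least `quorum` sentinels agree.

def master_state(votes_down: int, quorum: int) -> str:
    if votes_down == 0:
        return "ok"
    if votes_down < quorum:
        return "sdown"   # only some sentinels think the master is down
    return "odown"       # quorum reached: objectively down, failover may start

# three sentinels, quorum of 2
print(master_state(0, 2))  # ok
print(master_state(1, 2))  # sdown
print(master_state(2, 2))  # odown: a sentinel leader is elected to run the failover
```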
Three scheduled tasks
Sentinel has three scheduled tasks internally
1) every 10 seconds, each sentinel runs the info command against the masters and slaves. This task serves two purposes:
a) discovering slave nodes
b) confirming the master-slave relationship
2) every 2 seconds, each sentinel exchanges information with the others through the master node's publish/subscribe channel. Each master carries a pub/sub channel named __sentinel__:hello; sentinel nodes exchange information (their "view" of the topology and their own details) through this channel to reach a consensus.
3) every second, each sentinel pings the other sentinels and the redis nodes (mutual monitoring). This is the heartbeat detection that failure determination is based on.
2.2. Advantages and disadvantages of Sentinel mode
Advantages:
Sentinel mode is built on master-slave mode, so it has all the advantages of master-slave mode.
The master and slaves switch over automatically, making the system more robust and more available.
Disadvantages:
Online expansion is still difficult; scaling becomes very complicated once the cluster reaches its capacity limit.
2.3 Building the redis sentinel architecture

2.3.1 Configuring the sentinel.conf file

```shell
# 1. Copy a sentinel.conf file
mkdir sentinel
cp sentinel.conf sentinel-26380.conf

# protected mode
protected-mode no
# port number
port 26380
# whether to start as a daemon
daemonize yes
# pid process file
pidfile "/var/run/redis-sentinel-26380.pid"
# log file
logfile "/usr/local/redis/data/6380/sentinel.log"
# sentinel service data directory
dir "/usr/local/redis/data"
# sentinel monitoring: sentinel monitor <master-name> <ip> <port> <quorum>
# (the master ip recorded here will change after a failover)
sentinel monitor mymaster 182.92.189.235 6380 2
# password used when connecting to master and slaves; note that sentinel cannot set
# different passwords for master and slaves, so they should share the same password
sentinel auth-pass mymaster xiu123
#sentinel config-epoch mymaster 9
#sentinel leader-epoch mymaster ...
# slave node information is generated automatically after startup, e.g.:
# sentinel known-slave mymaster 182.92.189.235 6381
# sentinel known-slave mymaster 182.92.189.235 6382
```

2.3.2 Starting the Sentinel service instances

```shell
# launch a sentinel instance
src/redis-sentinel sentinel-26380.conf
# view the sentinel's info
src/redis-cli -p 26380
127.0.0.1:26380> info
# the redis master and slaves can be seen in the sentinel's info output
# similarly, add two more sentinels on ports 26381 and 26382, started the same way;
# remember to change the corresponding numbers in the configuration files above
```
After the sentinel cluster starts, the Sentinel cluster's metadata is written back into every sentinel configuration file (appended to the bottom of the file). Looking at sentinel-26380.conf again at this point shows the appended section.
2.3.3 Redis Sentinel mode failover

```shell
# 1. View the current redis cluster services: one master, two slaves, three sentinels
[root@iZ2ze505h9bgsa1t9twojyZ redis]# ps -ef | grep redis
root  1166 30926  0 22:43 pts/2 00:00:00 grep --color=auto redis
root 28998     1  0 21:12 ?     00:00:06 src/redis-server *:6380
root 29010     1  0 21:12 ?     00:00:06 src/redis-server *:6381
root 29020     1  0 21:12 ?     00:00:06 src/redis-server *:6382
root 31686     1  0 22:05 ?     00:00:05 src/redis-sentinel *:26380 [sentinel]
root 32553     1  0 22:22 ?     00:00:03 src/redis-sentinel *:26381 [sentinel]
root 32562     1  0 22:22 ?     00:00:03 src/redis-sentinel *:26382 [sentinel]
[root@iZ2ze505h9bgsa1t9twojyZ redis]# src/redis-cli -p 6380
127.0.0.1:6380> auth xiu123
OK
127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=182.92.189.235,port=6381,state=online,offset=261525,lag=0
slave1:ip=182.92.189.235,port=6382,state=online,offset=261525,lag=1
... part of the output omitted
127.0.0.1:6380> quit

# 2. Kill the redis master
[root@iZ2ze505h9bgsa1t9twojyZ redis]# kill -9 28998

# 3. Check the sentinel log
[root@iZ2ze505h9bgsa1t9twojyZ redis]# tail -f data/6380/sentinel.log
# the sentinel subjectively considers the master down
31686:X 12 Nov 2021 22:45:40.110 # +sdown master mymaster 182.92.189.235 6380
# the subjective-down threshold is reached, so the master is objectively down
31686:X 12 Nov 2021 22:45:40.181 # +odown master mymaster 182.92.189.235 6380 #quorum 2/2
31686:X 12 Nov 2021 22:45:40.181 # +new-epoch ...
# attempt a failover
31686:X 12 Nov 2021 22:45:40.181 # +try-failover master mymaster 182.92.189.235 6380
# vote for the sentinel leader
31686:X 12 Nov 2021 22:45:40.189 # +vote-for-leader ba9eed52de8664c3fd8d76d9728b42a309c3401b ...
# switch to the new master 6381
31686:X 12 Nov 2021 22:45:41.362 # +switch-master mymaster 182.92.189.235 6380 182.92.189.235 6381

# 4. View the new master node information: master 6381, slave 6382
[root@iZ2ze505h9bgsa1t9twojyZ redis]# src/redis-cli -p 6381
127.0.0.1:6381> auth xiu123
OK
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=182.92.189.235,port=6382,state=online,offset=469749,lag=0
```

3. Redis high availability cluster

3.1 High availability cluster mode
3.2. Redis-Cluster cluster
Redis's sentinel mode can basically achieve high availability and read-write separation, but in this mode every redis server stores the same data, which wastes memory. Therefore cluster mode was added in redis 3.0 to implement distributed storage, i.e. each redis node stores different content.
Redis-Cluster adopts a centerless structure, which has the following characteristics:
All redis nodes are interconnected with each other (PING-PONG mechanism), and binary protocols are used internally to optimize transmission speed and bandwidth.
A node is considered failed only when more than half of the nodes in the cluster detect that it has failed.
The client connects directly to a redis node, with no intermediate proxy layer needed. The client does not need to connect to all the nodes in the cluster, only to any available node in it.
How it works:
Every redis node holds two things: one is the slot, whose value range is 0-16383; the other is cluster, which can be understood as a cluster-management plugin. When a key arrives, redis computes a result from the crc16 algorithm (a hash function) and then takes the remainder of that result modulo 16384, so every key maps to a hash slot numbered between 0 and 16383. This value is used to find the node responsible for that slot, and the request is automatically redirected to that node for the access operation.
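The slot computation is easy to reproduce: Redis uses CRC16 (the CCITT/XMODEM variant) modulo 16384, which Python's `binascii.crc_hqx` implements. The slot numbers below match the `Redirected to slot [...]` output shown later in the failover test:

```python
import binascii

def key_slot(key: bytes) -> int:
    # CLUSTER KEYSLOT: CRC16-CCITT (XMODEM) of the key, modulo 16384
    return binascii.crc_hqx(key, 0) % 16384

print(key_slot(b"name"))   # 5798
print(key_slot(b"age"))    # 741
```

Note that real Redis first looks for a hash tag: if the key contains `{...}`, only the substring inside the braces is hashed, which lets related keys land in the same slot.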
To ensure high availability, the redis-cluster design introduces the master-slave mode, in which a master node corresponds to one or more slave nodes; when the master node goes down, a slave node takes over. When other master nodes ping a master node A, if more than half of the master nodes time out communicating with A, then master node A is considered down. If both master node A and its slave node A1 go down, the cluster can no longer provide service.
Redis cluster is a distributed server group composed of multiple master-slave node groups, with replication, high availability and sharding. A Redis cluster does not need sentinels to perform node removal and failover. Each node is set to cluster mode; there is no central node, and the cluster can be scaled horizontally, nearly linearly, to large node counts (officially no more than 1000 nodes are recommended). Redis cluster offers better performance and availability than the earlier sentinel mode, and the cluster configuration is very simple.
3.3 Building the Redis high availability cluster

3.3.1 Building the redis cluster
A Redis cluster requires at least three master nodes. Here we build three master nodes with one slave for each master, six redis nodes in total. The six instances are deployed on three machines, two instances per machine. The steps to build the cluster are as follows:
machine 1: 6383, 6384
machine 2: 6385, 6386
machine 3: 6387, 6388
(redis-cli decides which instances become masters when the cluster is created; in the run shown below, 6383, 6384 and 6385 ended up as the masters, with 6386, 6387 and 6388 as their slaves)
Node configuration:

```shell
# whether to run as a daemon
daemonize yes
# port number
port 6383
# pid process file
pidfile /var/run/redis_6383.pid
# data directory
dir /usr/local/redis/data/redis-cluster/6383/
# specify the log file
logfile "/usr/local/redis/data/cluster-6383.log"
# enable cluster mode
cluster-enabled yes
# cluster node information file (best to match the port)
cluster-config-file nodes-6383.conf
cluster-node-timeout 10000
# disable protected mode
protected-mode no
```

Creating the cluster:
After the redis cluster nodes are configured, before version 5.x you needed a ruby script to create the cluster; from 5.x on, you can run the create-cluster command directly through redis-cli.
```shell
# 1. Start each redis instance
src/redis-server conf/cluster/638*/redis.conf

# Before creating the cluster, confirm that the redis instances on the three machines
# can reach each other. The simplest way is to turn off all machine firewalls first;
# if you do not, you must open the redis service port and the cluster gossip
# communication port (by default the redis port plus 10000, e.g. 16379 for 6379).
# systemctl stop firewalld     # temporarily stop the firewall
# systemctl disable firewalld  # disable it at boot

# Note: do not copy the create-cluster command below directly; problems with the
# space encoding in it may cause cluster creation to fail.
# This test does not use remote connections; if remote connections are involved,
# replace 127.0.0.1 with the real public ip.
# -a password: the access password
# --cluster create: create a cluster
# --cluster-replicas 1: give each master one slave; 3 of the 6 instances become the
# slaves of the other 3 masters, producing a 3-master 3-slave cluster
src/redis-cli -a password --cluster create --cluster-replicas 1 \
  127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 \
  127.0.0.1:6386 127.0.0.1:6387 127.0.0.1:6388
```
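For a fresh 3-master cluster, `--cluster create` divides the 16384 slots into contiguous ranges, one per master. A small Python sketch of that even split (illustrative only; redis-cli's exact boundaries and master ordering may differ slightly):

```python
# Split the 16384 hash slots as evenly as possible among n masters.

TOTAL_SLOTS = 16384

def slot_ranges(n_masters: int):
    base, extra = divmod(TOTAL_SLOTS, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < extra else 0)  # spread the remainder over the first masters
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(slot_ranges(3))   # [(0, 5461), (5462, 10922), (10923, 16383)]
```

Every slot is owned by exactly one master; the three ranges together cover all 16384 slots.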
Problems:
```shell
# 1. This error appears because data has already been inserted into one of the services
# in the cluster and a persistence file has been generated. To re-create the cluster
# you need to clear all data first:
[ERR] Node 127.0.0.1:6383 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
# In testing, flushall alone may not be enough; just find the data directory configured
# for the node and delete it (rm).

# 2. If logging in to a cluster node reports "CLUSTERDOWN Hash slot not served", the
# cluster-creation step was not performed after starting the cluster instances.
```
Reading and writing a key requires hashing the key. If we log in with a client that is not in cluster mode, operating on a key stored on another node fails; if we access with a client in cluster mode, it redirects us to the node responsible for the key.
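What the cluster-mode client (`-c`) does on such a redirect can be sketched in Python: the server answers with a `MOVED` error naming the slot and the correct node, and the client re-issues the command there (the error string below mirrors the `Redirected to slot [5798]` output shown later):

```python
# Minimal parsing of a MOVED redirect, as a cluster-aware client does internally.

def parse_moved(err: str):
    # e.g. "MOVED 5798 127.0.0.1:6384" -> (5798, "127.0.0.1", 6384)
    _, slot, addr = err.split()
    host, port = addr.rsplit(":", 1)
    return int(slot), host, int(port)

print(parse_moved("MOVED 5798 127.0.0.1:6384"))   # (5798, '127.0.0.1', 6384)
```

A real client would then reconnect to that host and port and retry the command, optionally caching the slot-to-node mapping.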
Logging in to the cluster
```shell
# connect to any client: ./redis-cli -c -h <ip> -p <port>
# -a: access server password, -c: cluster mode, -h: ip address, -p: port number
src/redis-cli -a password -c -h 127.0.0.1 -p 6383
```
```shell
# verify: cluster info (view cluster information), cluster nodes (view the node list)
# verify data operations
# to shut down the cluster, close the nodes one by one (repeat for each port 6383-6388):
src/redis-cli -a password -c -h 127.0.0.1 -p 638* shutdown
```

3.3.2 Cluster failover
The cluster above has three masters and three slaves: 6386, 6387 and 6388 are the slaves of master nodes 6383, 6384 and 6385 respectively. If a master node goes down, its slave node is automatically elected master and continues to provide service; this fault-tolerance mechanism ensures high availability. Note that even with slave nodes present, the master and slave nodes do not do read-write separation: both reads and writes use the master nodes.
```shell
# simulate a redis failover
# after logging in we find: key "name" lives on 6384, "age" on 6383, "width" on 6385
[root@iZ2ze505h9bgsa1t9twojyZ redis]# src/redis-cli -a xiu123 -c -h 127.0.0.1 -p 6383
127.0.0.1:6383> get name
-> Redirected to slot [5798] located at 127.0.0.1:6384
(nil)
127.0.0.1:6384> get age
-> Redirected to slot [741] located at 127.0.0.1:6383
"12"
127.0.0.1:6383> get width
-> Redirected to slot [15983] located at 127.0.0.1:6385
...
127.0.0.1:6385> quit

# kill the 6385 master node
[root@iZ2ze505h9bgsa1t9twojyZ redis]# kill -9 1418

# log in to the cluster again: "age" and "name" are still served by their original
# nodes, while "width" has been transferred from 6385 to 6388; checking the node
# information shows that 6388 has become a master
[root@iZ2ze505h9bgsa1t9twojyZ redis]# src/redis-cli -a xiu123 -c -h 127.0.0.1 -p 6383
127.0.0.1:6383> get age
"12"
# here the cluster is briefly unavailable (presumably) because the master election
# is still in progress
127.0.0.1:6383> get name
(error) CLUSTERDOWN The cluster is down
127.0.0.1:6383> get name
-> Redirected to slot [5798] located at 127.0.0.1:6384
"xieqx"
127.0.0.1:6384> get width
-> Redirected to slot [15983] located at 127.0.0.1:6388
"110"
127.0.0.1:6388> info replication
# Replication
role:master
# if 6388 is also killed, the whole cluster service becomes unavailable
```
3.3.3 Cluster dynamic scaling

```shell
# beforehand, copy the 6383 node configuration to create 6389 and 6390 nodes and
# start the instances

# ----- cluster expansion -----
# 1. Add a master node
# add-node: the newly added master's ip:port, followed by the ip:port of any existing
# cluster node
src/redis-cli --cluster add-node 127.0.0.1:6389 127.0.0.1:6383 -a password

# 2. Add a slave node for the newly added master node
# --cluster-slave: add the node as a slave
# --cluster-master-id: the node ID of the master this slave belongs to
src/redis-cli --cluster add-node 127.0.0.1:6390 127.0.0.1:6383 --cluster-slave \
  --cluster-master-id 353662f6868b187ad15bad9b7271b8f0848adf10 -a password

# 3. Re-shard slots to the new master
# --cluster-from: node IDs of the nodes the slots currently live on, comma-separated
# --cluster-to: node ID of the node receiving the slots (apparently only one per run)
# --cluster-slots: the number of slots to move
src/redis-cli --cluster reshard 127.0.0.1:6389 \
  --cluster-from 47318cef1195f4281b7815bf66a41e31d68b6d16,0dbea2fff1554a3bbca70d28b81911e60c5bee6d,2fd29d61e867cb85e2e368ee62aebef33e7aaeb3 \
  --cluster-to 353662f6868b187ad15bad9b7271b8f0848adf10 \
  --cluster-slots 1024 -a password
```
```shell
# ----- cluster downsizing -----
# offline nodes: 127.0.0.1:6389 (master) / 127.0.0.1:6390 (slave)

# (1) first delete the slave corresponding to the master
# del-node is followed by the slave node's ip:port and node ID
# (use the actual node ID of 6390 from `cluster nodes`; it is not recoverable here)
src/redis-cli --cluster del-node 127.0.0.1:6390 <node-id-of-6390> -a password

# (2) clear the master's slots, reallocating the offline node's slots to the other
# three nodes. The reshard subcommand was introduced above; note that since the
# cluster now has four master nodes and each reshard can only write one destination
# node, the command below needs to be executed three times (with --cluster-to
# pointing at a different destination node each time).
# --cluster-yes: do not echo the slots to be migrated, migrate directly
src/redis-cli --cluster reshard 127.0.0.1:6389 \
  --cluster-from 353662f6868b187ad15bad9b7271b8f0848adf10 \
  --cluster-to 0dbea2fff1554a3bbca70d28b81911e60c5bee6d \
  --cluster-slots 1024 --cluster-yes -a password

# (3) take the master node offline (delete it)
src/redis-cli --cluster del-node 127.0.0.1:6389 353662f6868b187ad15bad9b7271b8f0848adf10 -a password
```
At this point, the study of "how to deploy clusters in redis" is over. I hope it has helped resolve your doubts; combining theory with practice is the best way to learn, so go and try it! If you want to continue learning more related knowledge, please keep following the website; the editor will keep working hard to bring you more practical articles!