
Redis Cluster deployment and its principle

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

I. Details of the Redis cluster architecture:

1. All Redis nodes are interconnected (PING-PONG mechanism) and communicate internally over a binary protocol, which prioritizes transmission speed and bandwidth.

2. A node is not considered failed (fail) until more than half of the master nodes in the cluster detect the failure.

3. Clients connect directly to Redis nodes; no intermediate proxy layer is needed. A client does not have to connect to every node in the cluster, only to any one available node.

4. Redis Cluster maps all physical nodes onto the slot range [0-16383], and the cluster is responsible for maintaining the node <-> slot <-> key mapping.
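The key-to-slot mapping is simply HASH_SLOT = CRC16(key) mod 16384. A minimal Python sketch (using the CRC16-CCITT/XModem variant that Redis uses; `crc16` and `key_slot` are illustrative helper names, not Redis code):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots."""
    # Honor "hash tags": if the key contains a non-empty {...} section,
    # only that section is hashed, so related keys land in the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Hash tags are why keys like `{user1}.following` and `{user1}.followers` can always be placed on the same node.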

II. Redis-cluster election:

The election process involves every master in the cluster. If more than half of the master nodes fail to communicate with a given master within the timeout (cluster-node-timeout), that master is considered dead. The following two situations make the entire cluster unavailable (cluster_state:fail); while the cluster is unavailable, all operations against it fail with the error (error) CLUSTERDOWN The cluster is down:

If any master in the cluster dies and that master has no slave, the cluster enters the fail state. Equivalently, the cluster enters the fail state whenever the slot mapping [0-16383] is incomplete.

If more than half of the masters in the cluster are down, the cluster enters the fail state, whether or not they have slaves.
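The two unavailability conditions above can be sketched as a simple check (a hypothetical helper for illustration, not Redis's actual code):

```python
TOTAL_SLOTS = 16384

def cluster_state(covered_slots: int, masters_alive: int, masters_total: int) -> str:
    """Return 'fail' if either cluster-down condition holds, else 'ok'."""
    # Condition 1: the slot mapping [0-16383] is incomplete
    # (e.g. a master died with no slave to take over its slots).
    if covered_slots < TOTAL_SLOTS:
        return "fail"
    # Condition 2: half or more of the masters are down, so no
    # majority of masters remains to agree on cluster state.
    if masters_alive <= masters_total // 2:
        return "fail"
    return "ok"
```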

By default, each cluster node uses two TCP ports: 6379 and 16379. Port 6379 serves client connections, while 16379 is used for the cluster bus, the node-to-node communication channel that uses a binary protocol. Nodes use the cluster bus for fault detection, configuration updates, failover authorization, and so on.

III. Redis clustering principles:

1. Redis cluster architecture:

Redis Cluster uses virtual slot partitioning: an algorithm maps every key to one of 16384 integer slots, numbered 0 to 16383.

Redis Cluster is a centerless structure.

Each node holds the data and the state of the entire cluster

2. Cluster role:

Master: slots are allocated among the masters

Slave: each slave synchronizes data from the master it is assigned to

3. TCP ports used by cluster nodes

Port 6379 is used for client connection

Port 16379 is used for cluster bus
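The pairing of the two ports follows a fixed rule: the cluster bus port is the client port plus 10000, which is how 6379 pairs with 16379. A one-line sketch:

```python
def cluster_bus_port(client_port: int) -> int:
    # Each node's cluster bus listens on the client port plus 10000,
    # so a node serving clients on 6379 gossips on 16379.
    return client_port + 10000
```

This also means any firewall rules must open both the client port and the port 10000 above it for every node.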

Clustering is supported starting with Redis 3.0. It uses hash slots to consolidate multiple Redis instances into a single cluster, that is, to spread data across multiple servers in the cluster.


A Redis cluster (Redis Cluster) has a centerless structure: each node holds its data and the state of the entire cluster. Each node also stores information about the other nodes and knows which slots they are responsible for, and nodes exchange heartbeat messages regularly, so abnormal nodes in the cluster are detected promptly.

When a client sends a command involving a database key to any node in the cluster, the receiving node calculates which slot the key belongs to and checks whether that slot is assigned to itself. If the key's slot is assigned to the current node, the node executes the command directly. If not, the node returns a MOVED error to the client, redirecting it to the correct node, and the client re-sends the command there.
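A cluster-aware client handles this by parsing the redirect out of the error reply, whose format is "MOVED <slot> <host>:<port>". A minimal sketch (`parse_moved` is a hypothetical helper name):

```python
import re

# Matches Redis Cluster's MOVED error reply, e.g.
# "MOVED 12767 192.168.1.30:6379"
MOVED_RE = re.compile(r"^MOVED (\d+) ([\d.]+):(\d+)$")

def parse_moved(reply: str):
    """Return (slot, host, port) for a MOVED error reply, else None."""
    m = MOVED_RE.match(reply)
    if m is None:
        return None
    return int(m.group(1)), m.group(2), int(m.group(3))
```

After parsing, the client reconnects to the returned host:port and replays the original command; this is exactly what redis-cli does when started with the -c flag, as seen in the test session later.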


The cluster roles are master and slave. The 16384 slots in total are allocated among the masters, and each slave synchronizes data from its assigned master as a backup. When a master can no longer provide service, one of its slaves is promoted to master, keeping the cluster's slot coverage complete. If a master and all of its slaves fail, slot coverage becomes incomplete and the cluster fails; at that point operations staff must intervene.


After the cluster is built, each node periodically sends PING messages to the other nodes. If a node that receives a PING does not return a PONG within the specified time, the sender marks it as suspected offline (PFAIL). Nodes exchange the status of every node in the cluster through these messages. If more than half of the master nodes in the cluster report a master x as suspected offline, master x is marked FAIL, a FAIL message about it is broadcast to the cluster, and every node that receives the FAIL message immediately marks master x as offline.
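The PFAIL-to-FAIL promotion is a majority vote among masters, which can be sketched as (a hypothetical helper, not Redis's implementation):

```python
def should_mark_fail(pfail_reporters: set, masters: set) -> bool:
    """Promote a node from PFAIL (suspected offline) to FAIL once more
    than half of the cluster's masters have reported it as PFAIL."""
    votes = len(pfail_reporters & masters)  # only reports from masters count
    return votes > len(masters) // 2
```

With the three masters in this deployment, two PFAIL reports are enough to mark a node FAIL.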


When we need to reduce or increase the number of servers in the cluster, slots already assigned to one node (the source node) must be reassigned to another node (the target node), and the key-value pairs contained in those slots must be moved from the source node to the target node.


Resharding of a Redis cluster is performed by Redis's cluster management tool redis-trib. It does not reshard automatically; you must work out yourself how many slots to migrate and from which nodes. During resharding the cluster does not need to go offline, and both the source and target nodes can continue to process command requests.
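Since redis-trib makes you specify the migration yourself, one common approach is to take slots from the existing masters proportionally. A rough sketch of such a plan (`reshard_plan` is a hypothetical helper for illustration, not part of redis-trib):

```python
def reshard_plan(slot_counts: dict, num_slots: int) -> dict:
    """Split num_slots to migrate proportionally among source masters.

    slot_counts maps node name -> number of slots it currently owns.
    Returns node name -> number of slots to take from that node.
    """
    total = sum(slot_counts.values())
    plan = {}
    moved = 0
    # Largest owners first; the last node absorbs rounding remainders.
    items = sorted(slot_counts.items(), key=lambda kv: -kv[1])
    for i, (node, count) in enumerate(items):
        if i == len(items) - 1:
            take = num_slots - moved
        else:
            take = round(num_slots * count / total)
        take = min(take, count)  # never take more than a node owns
        plan[node] = take
        moved += take
    return plan
```

For example, pulling 4096 slots (a quarter of 16384) off the three masters of this deployment to make room for a fourth master would take roughly 1365-1366 slots from each.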

Preparatory work:

1. Six servers, three masters and three slaves, all running CentOS 7, with IP addresses 192.168.1.10 through 192.168.1.60. The number of servers participating in the cluster should be even, so that each master automatically gets a corresponding slave. With an odd number the cluster cannot achieve full redundancy, because one master will inevitably have no slave; once that master goes down, the whole cluster loses part of its data.

2. Required source code package: https://pan.baidu.com/s/12L6jNOBrXeLH4I445_uUtQ extraction code: smn6

3. All Redis servers must be free of any data, preferably freshly installed, because if any data exists, an error will be reported later when the cluster is created.

4. Configure the firewall to allow the traffic through. For simplicity, it is disabled outright here.

Start deployment:

Configuration on 192.168.1.10:

[root@localhost media]# ls
redis-3.2.0.gem  redis-3.2.9.tar.gz
[root@localhost media]# cp * /usr/src/            # copy the packages
[root@localhost media]# cd /usr/src/
[root@localhost src]# ls                          # confirm they are all there
debug  kernels  redis-3.2.0.gem  redis-3.2.9.tar.gz
[root@localhost src]# tar zxf redis-3.2.9.tar.gz
[root@localhost src]# cd redis-3.2.9/
[root@localhost redis-3.2.9]# make && make install   # compile and install
[root@localhost redis-3.2.9]# cd utils/              # enter the subdirectory
[root@localhost utils]# ./install_server.sh       # you can press Enter all the way through
# make install only installs the binaries into the system; there are no startup
# scripts or configuration files, so install_server.sh is used to set up the
# configuration files the redis service needs.
[root@localhost utils]# cd /etc/init.d/           # optimize how redis is started and stopped
[root@localhost init.d]# mv redis_6379 redis
[root@localhost init.d]# chkconfig --add redis    # add redis as a system service
[root@localhost init.d]# systemctl restart redis  # restart the service to test that it works
[root@localhost /]# vim /etc/redis/6379.conf      # modify the configuration file as follows
...
bind 192.168.1.10                     # set the listening IP address
daemonize yes
logfile /var/log/redis_6379.log       # specify the log file
cluster-enabled yes                   # enable cluster mode
cluster-config-file nodes-6379.conf   # cluster configuration file
cluster-node-timeout 15000            # node timeout, in milliseconds
cluster-require-full-coverage no      # change yes to no
port 6379                             # listening port
# save and exit

Don't rush to start the service after modifying the main configuration file: Redis must be installed on every server in the same way as above, and then each server's configuration file must be modified. Only the listening IP address differs from server to server; all other settings are identical.

Configuration of 192.168.1.20:

# After installing redis:
[root@localhost utils]# scp root@192.168.1.10:/etc/redis/6379.conf /etc/redis/
# copy the first server's configuration file and reuse it directly,
# changing only the listening IP
[root@localhost utils]# vim /etc/redis/6379.conf
...
bind 192.168.1.20

Then configure the remaining servers in turn.

Go back to 192.168.1.10 to configure:

Use a script to create a cluster:

[root@localhost /]# yum -y install ruby rubygems
# the cluster is created by a ruby script, so the ruby runtime and client
# must be installed first; this can be done on any one of the servers
[root@localhost src]# gem install redis --version 3.2.0
# this command needs the file redis-3.2.0.gem, so change into the
# directory containing that file before running it
Successfully installed redis-3.2.0
Parsing documentation for redis-3.2.0
Installing ri documentation for redis-3.2.0
1 gem installed
[root@localhost src]# ./redis-trib.rb create --replicas 1 \
> 192.168.1.10:6379 \
> 192.168.1.20:6379 \
> 192.168.1.30:6379 \
> 192.168.1.40:6379 \
> 192.168.1.50:6379 \
> 192.168.1.60:6379
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.10:6379
192.168.1.20:6379
192.168.1.30:6379
Adding replica 192.168.1.40:6379 to 192.168.1.10:6379
Adding replica 192.168.1.50:6379 to 192.168.1.20:6379
Adding replica 192.168.1.60:6379 to 192.168.1.30:6379
M: 4234dad1a041a91401d6e635c800581172e850dc 192.168.1.10:6379
   slots:0-5460 (5461 slots) master
M: b386e4089c6f45a59d371549cda306669dd6938f 192.168.1.20:6379
   slots:5461-10922 (5462 slots) master
M: c1d6d14364e8c5db2ae1ea3ee07360a8b17127d8 192.168.1.30:6379
   slots:10923-16383 (5461 slots) master
S: e772a17543efb1e11cd05d792c11319b0fbfee5f 192.168.1.40:6379
   replicates 4234dad1a041a91401d6e635c800581172e850dc
S: f0a387bf5f366de0e25c575588349bd424a0ff90 192.168.1.50:6379
   replicates b386e4089c6f45a59d371549cda306669dd6938f
S: abc2fef19988e6626243feff831bced36b83b642 192.168.1.60:6379
   replicates c1d6d14364e8c5db2ae1ea3ee07360a8b17127d8
Can I set the above configuration? (type 'yes' to accept): yes   # remember to type yes here
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.1.10:6379)
M: 4234dad1a041a91401d6e635c800581172e850dc 192.168.1.10:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c1d6d14364e8c5db2ae1ea3ee07360a8b17127d8 192.168.1.30:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: b386e4089c6f45a59d371549cda306669dd6938f 192.168.1.20:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: f0a387bf5f366de0e25c575588349bd424a0ff90 192.168.1.50:6379
   slots: (0 slots) slave
   replicates b386e4089c6f45a59d371549cda306669dd6938f
S: e772a17543efb1e11cd05d792c11319b0fbfee5f 192.168.1.40:6379
   slots: (0 slots) slave
   replicates 4234dad1a041a91401d6e635c800581172e850dc
S: abc2fef19988e6626243feff831bced36b83b642 192.168.1.60:6379
   slots: (0 slots) slave
   replicates c1d6d14364e8c5db2ae1ea3ee07360a8b17127d8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Test the cluster:

[root@localhost /]# redis-cli -h 192.168.1.20 -p 6379 -c

192.168.1.20:6379> set zhangsan 123123      # create a key

-> Redirected to slot [12767] located at 192.168.1.30:6379   # the key landed on another server

OK

192.168.1.30:6379> get zhangsan             # and it can be retrieved there

"123123"

[root@localhost src]# ./redis-trib.rb check 192.168.1.10:6379   # view cluster status
>>> Performing Cluster Check (using node 192.168.1.10:6379)
M: 4234dad1a041a91401d6e635c800581172e850dc 192.168.1.10:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c1d6d14364e8c5db2ae1ea3ee07360a8b17127d8 192.168.1.30:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: b386e4089c6f45a59d371549cda306669dd6938f 192.168.1.20:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: f0a387bf5f366de0e25c575588349bd424a0ff90 192.168.1.50:6379
   slots: (0 slots) slave
   replicates b386e4089c6f45a59d371549cda306669dd6938f
S: e772a17543efb1e11cd05d792c11319b0fbfee5f 192.168.1.40:6379
   slots: (0 slots) slave
   replicates 4234dad1a041a91401d6e635c800581172e850dc
S: abc2fef19988e6626243feff831bced36b83b642 192.168.1.60:6379
   slots: (0 slots) slave
   replicates c1d6d14364e8c5db2ae1ea3ee07360a8b17127d8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The difference between redis-3.x.x and redis-5.x.x in creating a cluster:

redis-5.x.x creates clusters with redis-cli itself (the --cluster subcommands), whereas redis-3.x.x uses the separate redis-trib.rb script with the following syntax:

[root@localhost ~]# redis-trib.rb create --replicas 1 192.168.1.1:6379 .. 192.168.1.6:6379
# redis-3.x.x: create a cluster
[root@localhost src]# redis-trib.rb check 192.168.1.1:6379   # check cluster status
# redis-trib.rb cannot be used directly; first copy the script somewhere on PATH:
[root@localhost src]# cd /usr/src/redis-5.0.5/src/
[root@localhost src]# cp redis-trib.rb /usr/local/bin/
# now it can be run directly; otherwise you must execute it with "./" from its own directory

