
Detailed Introduction to the Three Redis Modes: Master-Slave Replication, Sentinel, and Cluster


This article gives a detailed introduction to the three Redis modes: master-slave replication, Sentinel, and Cluster. The content is straightforward and easy to follow, so let's work through it step by step.

Overview

As an efficient caching middleware, Redis is used constantly in day-to-day development. Today we will talk about the four Redis deployment modes: stand-alone, master-slave replication, Sentinel, and Cluster.

For programmers at most companies, the stand-alone version is basically enough. The figure given on the Redis official website is roughly 100,000 QPS, which is more than enough for an ordinary company. If it is not, the master-slave mode can separate reads from writes and greatly improve performance.

However, as ambitious programmers we cannot limit ourselves to simple CRUD against a stand-alone or master-slave setup. At the very least we should understand the principles of Sentinel and Cluster mode, so that we can hold our own with the interviewer.

I have written many Redis articles before, such as "Redis basic data types and their underlying implementation, transactions, persistence, distributed locks, publish/subscribe", which together form a fairly comprehensive tutorial. With this article the series is basically complete, and I will organize it into a PDF to share with you.

Let's start with a sorted-out outline of Redis. It may be incomplete in places; if anything is missing, add it in the comments and I will supplement it later.

Stand-alone mode

The stand-alone version of Redis is relatively simple, and almost 90% of programmers have used it. The third-party client library recommended on the official website is Jedis. In a SpringBoot project, you can use it directly by adding the following dependency:

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>${jedis.version}</version>
</dependency>
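As a quick illustration, here is a minimal Jedis usage sketch against a local stand-alone instance, assuming a reasonably recent Jedis version; the host, port, and password are placeholder assumptions, not values prescribed by this article:

import redis.clients.jedis.Jedis;

public class JedisQuickStart {
    public static void main(String[] args) {
        // connect to a local stand-alone Redis (host and port are assumptions)
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // jedis.auth("123456");                // only needed if requirepass is configured
            jedis.set("name", "ldc");               // simple write
            System.out.println(jedis.get("name"));  // simple read
        }
    }
}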

Advantages

Stand-alone Redis also has many advantages: it is simple to implement, maintain, and deploy, and the maintenance cost is very low, with no extra overhead.

Shortcomings

However, because it is a single instance, there are many problems, the most obvious being the single point of failure: once that one Redis goes down, all requests hit the database directly.

The concurrency a single Redis instance can withstand is also limited, and it has to handle both read and write requests at the same time, so once traffic grows it cannot keep up. The storage capacity of a stand-alone Redis is limited as well, and when the data volume is large, restarting Redis becomes very slow, so its limitations are considerable.

Practical construction

There are many comprehensive tutorials online for setting up the stand-alone version; it is mostly a matter of following the steps, and if you build it locally, using yum is quick and convenient, a few commands and you are done. A recommended tutorial: https://www.cnblogs.com/zuidongfeng/p/8032505.html.

The tutorial above is very detailed. Setting up the environment is normally an operations job, but it is worth it for programmers to try building it themselves, and this kind of thing is mostly build-once: maybe the next time you change computers or reinstall a virtual machine you will build it again.

Here are the commonly used configuration items of redis.conf, with comments:

daemonize yes                    # run as a daemon in the background; usually set to yes
pidfile /var/run/redis.pid       # when Redis runs as a daemon, it writes its pid to /var/run/redis.pid by default
port 6379                        # default port is 6379
bind 127.0.0.1                   # listen address; 0.0.0.0 means reachable from any host, 127.0.0.1 means local access only
timeout 900                      # close the connection after the client has been idle this many seconds; 0 disables the feature
logfile stdout                   # logging target; the default is standard output
logfile "./redis7001.log"        # or a log file name
databases 16                     # number of databases; the default database is 0
save <seconds> <changes>         # write the dataset to disk after <changes> updates within <seconds>

The Redis default configuration file provides three save conditions:

save 900 1                       # at least 1 change within 900 seconds (15 minutes)
save 300 10                      # at least 10 changes within 300 seconds (5 minutes)
save 60 10000                    # at least 10000 changes within 60 seconds

rdbcompression yes               # whether to compress data when writing to the local database
dbfilename dump.rdb              # local database file name
dir ./                           # local database storage directory
# slaveof <masterip> <masterport>  # master-slave replication: the ip and port of the master database

# if non-zero, set the SO_KEEPALIVE option to send ACKs to clients with idle connections
tcp-keepalive 60

# by default, if RDB snapshots are enabled (at least one save directive) and the latest background save failed,
# Redis stops accepting write operations. This lets the user know that data is not being persisted to disk
# correctly; otherwise nobody might notice and a disaster could follow.
stop-writes-on-bgsave-error yes

# whether to compress string objects with LZF when dumping to the .rdb database
rdbcompression yes

# RDB version 5 appends a CRC64 checksum at the end of the file, which makes the format more robust
rdbchecksum yes

# file name of the persistent database
dbfilename dump-master.rdb

# working directory
dir /usr/local/redis-4.0.8/redis_master/

# password used by the slave to connect to the master
masterauth testmaster123

# when a slave loses its connection to the master, or replication is still in progress, the slave can behave in two ways:
# 1) if slave-serve-stale-data is set to "yes" (the default), the slave keeps responding to client requests,
#    possibly with normal data, stale data, or empty data that has not yet been replicated.
# 2) if slave-serve-stale-data is set to "no", the slave replies "SYNC with master in progress"
#    to every request except the INFO and SLAVEOF commands.
slave-serve-stale-data yes

# whether the slave is read-only
slave-read-only yes

# if set to "yes", Redis uses fewer TCP packets and less bandwidth to send data to slaves, but this delays the
# data reaching the slave; with the default Linux kernel configuration the delay can be up to 40 milliseconds.
# if set to "no", the latency of data transfer to the slave is reduced, but more bandwidth is used.
repl-disable-tcp-nodelay no

# slave priority; a slave with a lower priority number is promoted to master first
slave-priority 100

# password authentication
requirepass testmaster123

# maximum memory a Redis instance may use; once the limit is reached, Redis evicts keys
# according to the selected policy (see maxmemory-policy)
maxmemory 3gb

# maxmemory policy: how Redis chooses which keys to delete when the memory limit is reached
# volatile-lru    -> evict keys that have an expiration set, using the LRU algorithm
# allkeys-lru     -> evict any key, using the LRU algorithm
# volatile-random -> evict random keys among those that have an expiration set
# allkeys-random  -> evict random keys, any key
# volatile-ttl    -> evict the keys with the nearest expiration time (smallest TTL), only among keys with an expiration set
# noeviction      -> evict nothing; return an error on write operations
maxmemory-policy volatile-lru

# enable AOF
appendonly no

# AOF file name
appendfilename "appendonly.aof"

# the fsync() system call tells the operating system to actually write data to disk instead of waiting
# for more data in the output buffer. Some operating systems really flush the data to disk right away;
# others just try to do so as soon as possible.
# Redis supports three different modes:
# no:       do not fsync, just let the operating system flush when it wants to; fastest.
# always:   fsync after every write to the AOF; slow, but safest.
# everysec: fsync once per second; a compromise.
appendfsync everysec

# if the AOF fsync policy is set to "always" or "everysec" and a background saving process (a background
# save or an AOF rewrite) is generating a lot of disk I/O, some Linux configurations can make Redis block
# for a long time on the fsync() call.
# note that there is currently no perfect fix for this; even an fsync() from a different thread can block
# our synchronous write(2) calls.
# to mitigate the problem, the following option prevents the main process from calling fsync() while a
# BGSAVE or BGREWRITEAOF is in progress. This means that while a child process is saving, Redis is
# effectively in an "unsynchronized" state, so in the worst case up to 30 seconds of log data may be lost
# (with the default Linux settings).
# if you have latency problems set this to "yes"; otherwise keep "no", which is the safest choice for
# durable data.
no-appendfsync-on-rewrite yes

# automatic AOF rewrite
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# the AOF file may be truncated at the end (typically after the operating system crashes, especially with
# an ext4 file system mounted without the data=ordered option). This only happens when the OS dies; if
# Redis itself crashes the file is not truncated. When it does happen, Redis may have trouble loading the
# file into memory on restart.
# in that case, you can either have Redis report an error and refuse to start, notifying the user and
# writing a log, or have it load as much intact data as possible.
# if aof-load-truncated is yes, Redis loads what it can and logs a notice (the default).
# if it is no, the user must repair the AOF file manually with redis-check-aof before restarting.
# note: if the AOF is found to be corrupted in the middle, the server still exits with an error; this
# option only applies when Redis tries to read more data from the file but not enough bytes are found.
aof-load-truncated yes

# maximum execution time of a Lua script, in milliseconds
lua-time-limit 5000

# the Redis slow log records queries that exceed a specified execution time
slowlog-log-slower-than 10000

# there is no limit to the length of the slow log; it only consumes memory, which can be reclaimed with SLOWLOG RESET
slowlog-max-len 128

# limits on the client output buffers, which can be used to force-disconnect clients that, for some
# reason, are not reading data from the server fast enough
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# while a child process rewrites the AOF file, fsync the file every time 32 MB of data is generated
aof-rewrite-incremental-fsync yes

When the concurrency on a stand-alone Redis becomes large, or when higher performance and reliability are needed, the stand-alone version is basically no longer suitable, and that is where the master-slave mode comes in.

Principle of master-slave mode

The principle of master-slave is fairly simple: the master database (master) can both read and write (read/write), while the slave database (slave) can only read (read only).

In practice, the master-slave mode is generally used to achieve read-write separation, with the master handling only writes (write only) to reduce the pressure on it. The following figure illustrates the principle of the master-slave mode:

The principle of master-slave mode is that simple, so what is its execution process (working mechanism)? Here is another picture:

When the master-slave mode is turned on, its specific working mechanism is as follows:

When a slave starts, it sends a SYNC command to the master. After receiving the command, the master saves a snapshot with bgsave (RDB persistence) and buffers the write commands executed in the meantime. The master then sends the saved snapshot to the slave, and keeps buffering write commands while it is being transferred. When the slave receives the snapshot, it loads it into its own database. Finally, the master sends the buffered commands to the slave, and the slave executes them as they arrive, so that the master and slave data end up consistent.

Advantages

The reason master-slave is used is that it solves, to a certain extent, the problems caused by heavy concurrency on a single instance, such as request latency or the Redis service going down entirely.

The slave databases share the read pressure of the master; if the master is write-only, read-write separation is achieved and the master carries no read pressure at all.

It also solves the single point of failure of the stand-alone version: if the master goes down, a slave can take over at any time. In summary, the master-slave mode improves the availability and performance of the system to a certain extent, and it is the foundation on which Sentinel and Cluster are built.

Master-slave synchronization is asynchronous, and during synchronization Redis can still respond to queries and updates submitted by clients.
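As a minimal client-side sketch of read-write separation, assuming a master on 127.0.0.1:6379 and a slave on 6380 with password 123456 (the same layout as the practical setup later in this article; the values are illustrative):

import redis.clients.jedis.Jedis;

public class ReadWriteSplit {
    public static void main(String[] args) {
        // write to the master, read from the slave (addresses and password are assumptions)
        try (Jedis master = new Jedis("127.0.0.1", 6379);
             Jedis slave = new Jedis("127.0.0.1", 6380)) {
            master.auth("123456");
            slave.auth("123456");
            master.set("name", "ldc");              // writes always go to the master
            System.out.println(slave.get("name"));  // reads can be served by the slave
        }
    }
}

Because replication is asynchronous, a read issued immediately after the write may briefly see the old value, which is exactly the consistency drawback discussed below.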

Shortcomings

The master-slave mode is not perfect either; it has its own shortcomings, such as data consistency: after a write completes on the master, the data still has to be copied to the slaves, and if a read request arrives before the copy finishes, the data read is not the latest.

If the network fails during master-slave synchronization, replication fails and there is again a data consistency problem.

The master-slave mode has no automatic failover or recovery: once the master goes down, promoting a slave to master requires manual intervention, so the maintenance cost rises; in addition, the write capacity and storage capacity of the master are still limited to a single node.

Practical construction

Let's build the master-slave mode in practice. It is relatively simple; I have a CentOS 7 virtual machine here, and I will run multiple Redis instances on it to build the master-slave setup.

To run multiple Redis instances, first create a folder to store their configuration files:

mkdir redis

Then copy the redis.conf configuration file three times:

cp /root/redis-4.0.6/redis.conf /root/redis/redis-6379.conf
cp /root/redis-4.0.6/redis.conf /root/redis/redis-6380.conf
cp /root/redis-4.0.6/redis.conf /root/redis/redis-6381.conf

This gives three configuration files, one master and two slaves, with port 6379 as the master database (master) and 6380 and 6381 as the slave databases (slave).

First configure the master database: vi redis-6379.conf:

bind 0.0.0.0                      # comment this out or set it to 0.0.0.0 so that any IP can connect
protected-mode no                 # turn off protected mode and rely on password access
port 6379                         # port; 6380 and 6381 for the other two instances
timeout 30                        # close the connection after the client has been idle this many seconds; 0 disables
daemonize yes                     # run in the background
pidfile /var/run/redis_6379.pid   # pid file name; redis_6380.pid and redis_6381.pid for the others
logfile /root/redis/log/6379.log  # log file; 6380.log and 6381.log for the others
save 900 1                        # run bgsave (RDB persistence) if at least 1 write occurred within 900 seconds
save 300 10
save 60 10000
rdbcompression yes                # whether to compress the RDB file; setting it to no is recommended, trading (disk) space for (CPU) time
dbfilename dump.rdb               # RDB file name
dir /root/redis/datas             # RDB file path; AOF files are saved here too
appendonly yes                    # use AOF incremental persistence
appendfsync everysec              # possible values: always, everysec, no; everysec is recommended
requirepass 123456                # set the password

Then modify the configuration files of the slave databases, adding the following to each slave's configuration:

slaveof 127.0.0.1 6379    # ip and port of the master
masterauth 123456         # password used to access the master
slave-serve-stale-data no

Next, start the three Redis instances. First cd into the src directory of Redis, then execute:

./redis-server /root/redis/redis-6379.conf
./redis-server /root/redis/redis-6380.conf
./redis-server /root/redis/redis-6381.conf

Use the command ps -aux | grep redis to view the started Redis processes:

As shown in the figure above, the startup succeeded, so the testing phase begins.

Test

I use SecureCRT as the Redis client. Start three SecureCRT sessions, connect to the three Redis instances respectively, and specify the port and password on startup:

./redis-cli -p 6379 -a 123456

After startup, run set name 'ldc' on the master (6379), then run get name on a slave to check:

Data synchronization is successful. There are a few pitfalls. One is that if bind is not set in redis.conf, non-local IPs will be filtered out; it is generally configured as 0.0.0.0.

The other is not configuring the password requirepass 123456, which makes the connection fail with I/O exceptions all the time. This is the pit I hit; after configuring the password it worked.

In addition, if you look at the Redis startup log, you may find two warnings. They do not affect the master-slave synchronization we built, but they look annoying; some people will hit them and some will not.

However, I have a bit of obsessive-compulsive disorder about this, and the solutions are easy to find online, so I will not go into them here and leave them for you to solve. I am just pointing out that the problem exists; some people do not even look at the log, assume everything is fine when they see the startup succeed, and never read it, which is not a good habit.

Sentinel mode principle

Sentinel mode is an upgraded version of the master-slave mode, because the master-slave mode cannot recover automatically after a failure and requires human intervention, which is very painful.

On top of master-slave, Sentinel mode monitors the running state and health of the master and slaves; like a sentry, it raises the alarm whenever an anomaly occurs and handles the abnormal situation.

So, generally speaking, Sentinel mode provides the following capabilities:

"Monitoring": monitor whether the master and slave are working properly, and the Sentinels will also monitor each other for "automatic failure recovery": when the master fails, a slave will be automatically elected as the master.

The monitoring target of Sentinel mode is specified with the sentinel monitor configuration directive, for example:

# mymaster defines a name for the master database, followed by the ip and port of the master. The trailing 1 means that at least one Sentinel process must agree before the master is judged to have failed; if this condition is not met, automatic failover will not be performed.

sentinel monitor mymaster 127.0.0.1 6379 1

Node communication

Of course there are other configuration items, which will be covered when we set up the environment. When a Sentinel starts, it establishes a connection to the master and subscribes to the master's __sentinel__:hello channel.

This channel is used to obtain information about the other Sentinels monitoring the same master. The Sentinel also establishes a second connection on which it periodically sends the INFO command to the master to obtain the master's information.

"when the Sentinel establishes a connection with master, it periodically sends INFO commands to master and slave (once every 10 seconds). If master is marked as subjectively offline, the frequency becomes once per second. "

It also periodically publishes its own information to the __sentinel__:hello channel so that other Sentinels can subscribe to it, including "the Sentinel's ip and port, run id, configuration version, the master's name, the master's ip and port, and the master's configuration version", and so on.

And, "periodically send PING commands to master, slave, and other sentinels (once per second) to check whether the object is alive". If the other party receives the PING command, it will reply to the PONG command if there is no failure.

Therefore, by establishing these two connections and periodically sending the INFO and PING commands, Sentinel implements communication between Sentinels, and between Sentinel and master.

A few concepts need to be understood here, such as the INFO, PING, and PONG commands, followed by the MEET and FAIL messages, as well as subjective offline and, of course, objective offline. Let's go over these concepts:

INFO: obtains the latest information about the master and slaves and enables the discovery of new nodes.

PING: the most frequently used message; it carries the state of the sender's own node and the state data of other nodes it knows about.

PONG: when a node receives a MEET or PING, it replies with a PONG, sending its own state to the other side.

MEET: when a new node joins, it sends this message to the old nodes, announcing that it is a newcomer.

FAIL: when a node goes offline, this message is broadcast to the cluster.

Online and offline

The Sentinel keeps in regular contact with the master. If a PING sent by the Sentinel does not receive a reply within the configured time (the sentinel down-after-milliseconds <master-name> <milliseconds> setting), the Sentinel that sent the PING considers the master "subjectively down" (Subjectively Down).

Because the problem might be the network between that Sentinel and the master rather than the master itself, the Sentinel also asks the other Sentinels whether they too consider the master offline. If the number of Sentinels that consider the node offline reaches the configured quorum (the quorum value mentioned earlier), the node is considered "objectively down" (Objectively Down).

If not enough Sentinels agree that the master is offline, the master's objective-offline flag is removed; if the master replies to the Sentinel's PING command again, the offline flag is removed as well.

Election algorithm

When the master is judged objectively down, how does failure recovery proceed? The Sentinels first elect a leader Sentinel to carry out the recovery, and the algorithm used for this election is the Raft algorithm:

The Sentinel that discovered the master to be offline (call it Sentinel A) sends requests to the other Sentinels asking them to vote for it as the leader. If a target Sentinel has not already voted for another Sentinel, it votes for Sentinel A. If Sentinel A receives more than half of the votes (majority rule), it becomes the leader. If several Sentinels campaign at the same time and the votes are split, they each wait a random amount of time and launch another campaign, starting a new round of voting, until a leader is elected.

After the leader Sentinel is chosen, it carries out the failure recovery and selects one slave to become the new master, according to the following rules:

Of all the slaves, the one with the highest slave-priority is selected. If the priorities are equal, the one with the largest replication offset is selected, because the offset records how much data has been replicated, and a larger offset means more complete data. If these are also equal, the one with the smallest run ID is selected.

Through these layers of filtering, failure recovery is finally completed: the chosen slave is promoted to master, the other slaves replicate from the new master, and if the old master comes back online, it runs as a slave.

Advantages

Sentinel mode is an upgraded version of the master-slave mode, so it improves availability, performance, and stability at the system level. When the master goes down, it can recover automatically without human intervention.

Between Sentinels, and between the Sentinels and the master, there is continuous monitoring and heartbeat detection, so problems in the system are detected in time, which makes up for the weakness of the plain master-slave mode.

Shortcomings

The one-master, multiple-slave structure of Sentinel mode still hits a write bottleneck, and storage is still limited to a single machine. If the master goes down and failure recovery takes a long time, write traffic is affected.

Adding Sentinels also increases the complexity of the system, since the Sentinel deployment has to be maintained as well.

Practical construction

Finally, let's build Sentinel mode. The configuration is relatively simple. On top of the master-slave setup above, create a folder for the Sentinel configuration files and copy the sample sentinel.conf three times:

cp /root/redis-4.0.6/sentinel.conf /root/redis/sentinel/sentinel1.conf
cp /root/redis-4.0.6/sentinel.conf /root/redis/sentinel/sentinel2.conf
cp /root/redis-4.0.6/sentinel.conf /root/redis/sentinel/sentinel3.conf

Add the following configuration to these three files:

daemonize yes                                   # run in the background
sentinel monitor mymaster 127.0.0.1 6379 1      # name the master mymaster and configure its ip and port
sentinel auth-pass mymaster 123456              # password of the master
port 26379                                      # the other two Sentinels use ports 36379 and 46379
sentinel down-after-milliseconds mymaster 3000  # if the master does not reply to PING within 3s, consider it subjectively down
sentinel parallel-syncs mymaster 2              # during a failover, at most 2 slaves synchronize with the new master at the same time
sentinel failover-timeout mymaster 100000       # if the failover does not complete within this time (in milliseconds), it is considered failed

After configuration, start three Sentinels:

./redis-server sentinel1.conf --sentinel
./redis-server sentinel2.conf --sentinel
./redis-server sentinel3.conf --sentinel

Then check with ps -aux | grep redis: you can see that the three Redis instances and the three Sentinels have all started normally. Log in to 6379 and view the master information with INFO replication:

The current master is 6379. Now let's test the Sentinel's automatic failure recovery: kill the 6379 process directly, then log in again and check the master information:

You can see that 6380 is now the master, and 6380 is readable and writable rather than read-only, which shows that our Sentinel is working and the setup succeeded. If you are interested, build it yourself; you may well step on a pile of pits of your own.
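On the client side, applications usually do not hard-code the master address but ask the Sentinels for the current master. Here is a minimal sketch with Jedis's JedisSentinelPool, assuming the master name, sentinel ports, and password configured above (adjust them to your own setup):

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelClient {
    public static void main(String[] args) {
        // sentinel addresses and master name taken from the sentinel.conf example above (assumptions)
        Set<String> sentinels = new HashSet<>();
        sentinels.add("127.0.0.1:26379");
        sentinels.add("127.0.0.1:36379");
        sentinels.add("127.0.0.1:46379");

        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels, "123456");
             Jedis jedis = pool.getResource()) {
            jedis.set("name", "ldc");               // the pool always routes to the current master
            System.out.println(jedis.get("name"));
        }
    }
}

If the master is killed as in the test above, the pool learns the new master address from the Sentinels, and new connections go to it automatically.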

Cluster mode

Finally, Cluster is the real cluster mode. Sentinel solves the problem that the master cannot recover automatically, but it is still hard to scale out, and the storage, read, and write capacity of a single machine is limited. Moreover, up to this point every Redis node holds the full data set, so all the nodes are redundant copies, which consumes a great deal of memory.

Cluster mode implements distributed storage of Redis data by sharding it, so that each Redis node stores different content, and it solves the problems of scaling nodes in (offline) and out (online) while the cluster stays available.

Cluster mode truly achieves high availability and high performance, but at the same time it makes the system even more complex, so let's look at how the cluster works in detail.

Principle of data partitioning

The schematic diagram of the cluster is easy to understand. The Redis cluster uses a virtual slot partitioning algorithm that divides the cluster into 16384 slots (0-16383).

For example, the three masters shown in the figure below might split the 0-16383 range into three parts: (0-5000), (5001-11000), and (11001-16383).

When a client makes a request, the key is first run through a CRC16 checksum and taken modulo 16384 to compute the slot the key belongs to (slot = CRC16(key) mod 16384), and then the request goes to the node that owns that slot to read or store the data, which is how data access and updates are implemented.

The reason for slot-based storage is to shard the data, preventing a single Redis node from holding too much data and hurting performance.
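As a small illustration of the slot calculation, here is a sketch using the CRC16 helper that ships with Jedis; the import path assumes a recent Jedis version (older versions keep the class under redis.clients.util), so treat it as an assumption:

import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        String key = "name";
        // slot = CRC16(key) mod 16384; JedisClusterCRC16.getSlot performs both steps
        int slot = JedisClusterCRC16.getSlot(key);
        System.out.println("key '" + key + "' maps to slot " + slot);
    }
}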

Node communication

The data is stored in shards across the nodes, so how do the nodes communicate with each other? Largely with the same kinds of messages as in the Sentinel mode described earlier.

First, a newly started node sends a MEET message to the existing members through the Gossip protocol, announcing that it is a new member.

After receiving the MEET message, an old member without a failure replies with a PONG message, welcoming the new node into the cluster. After this initial MEET, the nodes exchange periodic PING messages to keep communicating with each other.

During this communication each node opens a TCP channel to the nodes it talks to, and a scheduled task keeps sending PING messages to other nodes in order to exchange metadata and health status between nodes, so that problems are discovered in time.

Data request

The slot information mentioned above is kept at the bottom of Redis in an array, unsigned char myslots[CLUSTER_SLOTS/8], which each node maintains to record its slot information.

Because it is a bit array, it stores only 0 and 1 values, as shown in the figure below:

In this way the array only indicates whether the node stores the data for the corresponding slot: 1 means yes, 0 means no, so lookups are very fast, similar in spirit to the bit storage of a Bloom filter.

For example, cluster node 1 is responsible for storing the data of slots 0-5000, but at the moment only slots 0, 1, and 2 hold data, so the bits corresponding to 0, 1, and 2 are set to 1.
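To make the bit layout concrete, here is a small sketch of how such a slot bitmap can be maintained and queried; the class and method names are illustrative, not Redis source code:

public class SlotBitmap {
    private static final int CLUSTER_SLOTS = 16384;
    // one bit per slot: 16384 / 8 = 2048 bytes, mirroring unsigned char myslots[CLUSTER_SLOTS/8]
    private final byte[] myslots = new byte[CLUSTER_SLOTS / 8];

    void setSlot(int slot) {
        myslots[slot >> 3] |= (1 << (slot & 7));   // mark the slot as held by this node
    }

    boolean hasSlot(int slot) {
        return (myslots[slot >> 3] & (1 << (slot & 7))) != 0;
    }

    public static void main(String[] args) {
        SlotBitmap node1 = new SlotBitmap();
        node1.setSlot(0);
        node1.setSlot(1);
        node1.setSlot(2);
        System.out.println(node1.hasSlot(2));     // true
        System.out.println(node1.hasSlot(5000));  // false
    }
}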

In addition, each Redis node maintains a clusterNode array of size 16384, which stores the ip, port, and other information of the node responsible for each slot. In this way every node holds metadata about the other nodes, so the right node can be found quickly.

When a new node joins or a node is removed, the nodes communicate through PING messages to update the metadata in the clusterNode array in time, so that requests can be routed to the right node.

There are two other situations where a request finds that the data has been migrated, for example when a new node is added and data from an old node is moved to the new one.

If the request arrives at the old node after the data migration has already completed, then because every node holds clusterNode information (ip and port), the old node sends the client a MOVED redirect, telling it that the data has moved to the new node and that it should access the new node's ip and port to get the data.

And what if the data is still being migrated? Then the old node sends the client an ASK redirect, returning the ip and port of the target node the data is migrating to, so the client can still fetch the data there.
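From the application's point of view these redirects are normally handled by the client library. Here is a minimal sketch using Jedis's JedisCluster, which computes the slot locally and follows MOVED/ASK redirects transparently; the seed node addresses are assumptions based on the six-node example later in this article (if your nodes require a password, Jedis provides constructor overloads that accept one):

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterClient {
    public static void main(String[] args) {
        // one or two seed nodes are enough; the client discovers the rest of the cluster
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("127.0.0.1", 6379));
        nodes.add(new HostAndPort("127.0.0.1", 6380));

        try (JedisCluster cluster = new JedisCluster(nodes)) {
            cluster.set("name", "ldc");              // routed to the node that owns the slot of "name"
            System.out.println(cluster.get("name")); // MOVED/ASK redirects are handled internally
        }
    }
}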

Expansion and contraction

Expansion and contraction mean bringing nodes online and taking them offline; a node may also fail and be recovered automatically, which likewise results in node contraction.

When nodes are added or removed, the slot ranges of the nodes are recalculated, and the affected data is moved to the corresponding nodes according to the virtual slot algorithm.

As mentioned above, a newly joining node first sends a MEET message; see the earlier section for details, as it follows basically the same pattern.

And after a failure, the election of the leader node, the re-election of the master, and how a slave is promoted to master follow the election process described in the Sentinel mode section above.

Advantages

Cluster mode is a decentralized architecture: data is sharded and distributed to the corresponding slots, each node stores different content, and requests are routed to the node responsible for the slot, which makes queries efficient.

Cluster mode also adds horizontal and vertical scalability, allowing nodes to be added and removed. It is the upgraded version of Sentinel mode, so it retains the advantages of Sentinel.

Shortcomings

The biggest problem with caching is data consistency. When weighing consistency against performance and business needs, most systems settle for eventual consistency rather than strong consistency.

Cluster mode also sharply increases the number of nodes: a cluster needs at least 6 machines, because the election requires a majority, and this adds architectural complexity.

Slaves act only as cold backups and do not relieve the read pressure on the masters.

Practical construction

The deployment of cluster mode is relatively simple; you only need to add the following configuration to redis.conf:

port 6379                            # the six nodes in this example use ports 6379, 6380, 6381, 6382, 6383, and 6384
daemonize yes                        # run in the background
pidfile /var/run/redis_6379.pid      # one per node: 6379, 6380, 6381, 6382, 6383, 6384
cluster-enabled yes                  # enable cluster mode
masterauth 12345                     # if a password is set, the master password must be specified
cluster-config-file nodes_6379.conf  # cluster configuration file, also one per node: 6379, 6380, 6381, 6382, 6383, 6384
cluster-node-timeout 10000           # node timeout, in milliseconds

Start all six instances, and then create the cluster from them with the following command:

./redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1

Thank you for reading. That is the content of "Detailed Introduction to the Three Redis Modes: Master-Slave Replication, Sentinel, and Cluster". After studying this article you should have a deeper understanding of the three modes; the specifics are best verified in practice.
