2025-04-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Today, let's talk about how Redis master-slave replication works, a topic many people find unclear. The summary below should help you get something useful out of this article.
The role of master-slave replication
Provide multiple copies of the data, enabling high availability
Enable read-write separation: the master node handles writes, the slave nodes handle reads, and the master continuously propagates its writes to the slaves to keep the data consistent
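Read-write separation is usually done on the client side. The following is a minimal sketch of the idea (the class, names, and command list are hypothetical, not part of any Redis client API): write commands are routed to the master, reads are spread round-robin over the slaves.

```python
import itertools

class ReadWriteRouter:
    """Toy router: writes go to the master, reads round-robin the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # endless round-robin over slaves

    def route(self, command):
        # Write commands must hit the master; reads can go to any slave.
        if command.upper() in {"SET", "DEL", "EXPIRE", "LPUSH"}:
            return self.master
        return next(self._slaves)

router = ReadWriteRouter("master:6379", ["slave1:6379", "slave2:6379"])
print(router.route("SET"))   # master:6379
print(router.route("GET"))   # slave1:6379
print(router.route("GET"))   # slave2:6379
```

Note that because replication is asynchronous, a read routed to a slave may briefly return stale data; this is the replication lag discussed later in the article.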
Ways to set up master-slave replication
The slaveof command.
Advantages: no restart required. Disadvantages: harder to manage centrally.
On the slave, run: slaveof <masterip> <masterport>. After the command runs, the slave's own data is flushed; cancelling with slaveof no one only stops replication and does not flush the data.
Modifying the configuration file.
Advantages: unified, centralized configuration. Disadvantages: requires a restart.
In redis.conf, set slaveof <masterip> <masterport> and slave-read-only yes, so that the slave node only accepts read operations.
Full replication
For an initial replication, or in other situations where partial replication cannot be performed, the master sends all of its data to the slave. This is a very heavy operation: when the dataset is large, it imposes significant overhead on the master node and on the network.
Full replication process:
The slave issues a sync command internally. Initially this is psync ? -1: the ? means the slave does not know the master's run_id, and -1 means it has no offset, so it is asking the master for a full sync.
The master replies with its run_id and offset; since the slave has no matching offset, a full copy is performed.
The slave saves the master's run_id and offset.
After receiving the full-sync request, the master executes bgsave (asynchronously), generating an RDB snapshot file in the background, and uses a buffer (the replication buffer) to record all write commands executed from that moment on.
The master sends the RDB file to the slave.
The master then sends the buffered write commands.
The slave flushes its old data: it clears its own dataset before loading the master's data.
The slave loads the RDB file, bringing its dataset to the state the master had when bgsave ran, and then applies the buffered write commands to catch up.
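The steps above can be sketched as a toy simulation (the class and attribute names are invented for illustration; a real RDB file is a binary snapshot, not a Python dict): the master snapshots its data, buffers writes that arrive during the snapshot, and the slave flushes, loads, then replays the buffer.

```python
class ToyMaster:
    def __init__(self):
        self.data = {}
        self.repl_buffer = []     # write commands executed during the snapshot
        self.snapshotting = False

    def write(self, key, value):
        self.data[key] = value
        if self.snapshotting:
            self.repl_buffer.append((key, value))

    def full_sync(self):
        # "bgsave": take a point-in-time snapshot and start buffering new writes
        self.snapshotting = True
        self.repl_buffer = []
        return dict(self.data)    # stands in for the RDB file

class ToySlave:
    def __init__(self):
        self.data = {"stale": "old"}

    def load(self, snapshot, buffered_writes):
        self.data.clear()                    # flush old data first
        self.data.update(snapshot)           # load the "RDB" snapshot
        for key, value in buffered_writes:   # replay writes made during bgsave
            self.data[key] = value

master = ToyMaster()
master.write("a", 1)
snapshot = master.full_sync()
master.write("b", 2)                         # arrives while the "RDB" is in flight

slave = ToySlave()
slave.load(snapshot, master.repl_buffer)
print(slave.data)                            # {'a': 1, 'b': 2}
```

The key point the sketch shows: without the replication buffer, the write of "b" made during the snapshot would be lost on the slave.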
Full replication overhead
The master node must run bgsave
Transferring the RDB file consumes network I/O
The slave node must flush its old data
The slave node must load the RDB file
Full replication may trigger an AOF rewrite on the slave node
Partial replication
Partial replication was introduced in Redis 2.8. It handles the data loss caused by transient network interruptions during master-slave replication. When the slave reconnects to the master, if conditions permit, the master resends only the data the slave missed. Because the resent data is far smaller than the full dataset, the heavy cost of full replication is avoided. Note that if the outage lasts too long, the master cannot retain all the write commands executed during the interruption; partial replication then becomes impossible and full replication is used instead.
Partial replication process:
Network jitter occurs and the connection is lost
The master keeps writing to the repl_backlog_buffer (replication backlog buffer)
The slave keeps trying to reconnect to the master
Once reconnected, the slave sends its saved run_id and offset to the master by executing the psync command
If the master finds the offset is still within the range of the backlog buffer, it returns the continue reply
and sends only the data after that offset; partial replication therefore hinges on the offset
Server run ID (run_id): every Redis node (master or slave) automatically generates a random ID at startup (different on every start), consisting of 40 hexadecimal characters; the run_id uniquely identifies a Redis node. You can view a node's run_id with the info server command.
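The master's decision between partial and full resync can be sketched as follows (the names are hypothetical; the real backlog is a fixed-size circular byte buffer, not a Python list of commands): a partial resync is granted only when the slave's run_id matches the master's and its offset has not yet been evicted from the backlog.

```python
class ToyBacklog:
    def __init__(self, size):
        self.size = size
        self.start_offset = 0    # oldest offset still held in the backlog
        self.buffer = []         # (offset, command) pairs
        self.master_offset = 0

    def feed(self, command):
        self.buffer.append((self.master_offset, command))
        self.master_offset += 1
        # evict the oldest entries once the backlog is "full"
        while len(self.buffer) > self.size:
            self.buffer.pop(0)
            self.start_offset += 1

def psync(master_run_id, backlog, slave_run_id, slave_offset):
    # Partial resync only if the slave replicated from this master before
    # and everything it missed is still in the backlog.
    if slave_run_id == master_run_id and slave_offset >= backlog.start_offset:
        missed = [cmd for off, cmd in backlog.buffer if off >= slave_offset]
        return ("+CONTINUE", missed)
    return ("+FULLRESYNC", None)

backlog = ToyBacklog(size=3)
for cmd in ["set a 1", "set b 2", "set c 3", "set d 4"]:
    backlog.feed(cmd)            # offset 0 is evicted once the 4th write lands

print(psync("rid1", backlog, "rid1", 2))  # ('+CONTINUE', ['set c 3', 'set d 4'])
print(psync("rid1", backlog, "rid1", 0))  # offset evicted -> ('+FULLRESYNC', None)
print(psync("rid1", backlog, "rid2", 2))  # run_id changed -> ('+FULLRESYNC', None)
```

The third case is why a master restart forces full resyncs: the run_id changes, so no slave's saved run_id matches anymore.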
Common problems in development, operation and maintenance
Read-write separation
Replication introduces a delay before data appears on the slave (for example, if the slave node blocks)
A slave node may fail
Inconsistent master-slave configuration
For example, inconsistent maxmemory settings may result in data loss.
For example, inconsistent data-structure optimization parameters lead to different memory usage on master and slave.
Avoid full replication
The first full copy is unavoidable, so keep each shard's maxmemory small and schedule the first sync for off-peak hours (at night).
If the replication backlog buffer is too small, partial replication fails; increase the repl-backlog-size configuration.
For example, if the average network outage lasts 60 seconds and the master generates on average 100 KB of write commands (in protocol format) per second, the replication backlog buffer needs about 6 MB on average; to be safe, it can be set to 12 MB, so that partial replication can be used after most disconnections.
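The sizing rule of thumb above is just arithmetic (using decimal units, 1 MB = 1000 KB, as the article does):

```python
# Sizing the replication backlog (repl-backlog-size) from outage time and write rate.
avg_outage_seconds = 60
write_rate_kb_per_s = 100   # bytes of write commands the master emits per second, in KB

required_mb = avg_outage_seconds * write_rate_kb_per_s / 1000   # 6000 KB -> 6 MB
safe_mb = 2 * required_mb                                       # double it for margin

print(required_mb, "MB needed on average")   # 6.0
print(safe_mb, "MB as a safe setting")       # 12.0
```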
Replication storm: if a master node with many slaves restarts, it generates a single RDB file but must send it to all of the slave nodes at once, putting heavy pressure on CPU, memory, and bandwidth.
After reading the above, do you have a better understanding of Redis master-slave replication? If you want to learn more, please follow the industry information channel. Thank you for your support.