Redis master-slave replication: data synchronization stuck in an endless loop
How the problem was discovered:
Recently, a slave of the Q&A Codis cluster (port 6504) has been intermittently triggering master-slave delay and "slave cannot connect" alarms. At first I suspected the Redis backup job and did not dig into the root cause; later the alarms also fired during the daytime, so I investigated in depth. The two alerts looked like this:
Host: 10.20.1.4
Port: 6504
Idc: KDDI
Role: redis_s
Item: r_replication
Current: 32767
Last: 1 minutes.
Info:
Send at [2015-11-27 09:17:49]
-
Host: 10.20.1.4
Port: 6504
Idc: KDDI
Role: redis_s
Item: r_connection
Current: 0
Last: 1 minutes.
Info: failed
Send at [2015-11-27 08:13:46]
Observed symptoms:
1. Logging in to the server hosting the slave, I found that temp-rewriteaof-xxx.aof files were being generated periodically on the slave, as shown in the figure below.
2. The slave's log contains a large number of "Connection with master lost" entries, meaning the connection to the master keeps being dropped, as shown in the figure below.
3. Logging in to the master and checking its log reveals two important messages: "Connection with slave 10.20.1.4:6504 lost" and "scheduled to be closed ASAP for overcoming of output buffer limits".
4. Running the info command on the master shows "slave0:ip=10.20.1.4,port=6504,state=send_bulk,offset=0,lag=0": the master is still in the bulk (RDB) transfer stage and the replication offset has never advanced.
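For reference, a minimal way to run the same checks from a shell (the master address and log path below are placeholders; only the slave address 10.20.1.4:6504 is given in the alerts):
redis-cli -h <master-host> -p <master-port> info replication   # state/offset of each slaveN line
redis-cli -h 10.20.1.4 -p 6504 info replication                # master_link_status and master_sync_in_progress on the slave
grep -i "connection with master" /path/to/redis-6504.log       # how often the link is being dropped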
Problem analysis:
1. Since the slave periodically generates these temp files and its log shows the connection to the master being dropped periodically, the problem initially looked like something at the replication level. The logs of the other slaves showed none of this, so a network problem was ruled out first.
2. The master's log also shows the connection with this slave being lost, which pins the problem inside the Redis instance on port 6504; the message "overcoming of output buffer limits" indicates that the replication output buffer hit its configured limit.
3. The above information basically confirms the cause. Recall how Redis master-slave replication works: after the slave executes slaveof ip port, the master runs bgsave to generate an RDB snapshot and then transfers that file to the slave over the network. While the snapshot is being generated and transferred, the master buffers all new writes for that slave in a replication output buffer. After the slave receives the RDB file, it still needs time to load it; the longer the transfer and load take, and the heavier the write load on the master, the larger this buffer grows (it cannot grow without bound, of course). Its size is governed by the master's client-output-buffer-limit setting, here slave 268435456 67108864 60, meaning the slave connection is forcibly closed if the buffer exceeds 256 MB, or stays above 64 MB for 60 consecutive seconds. Closing the connection aborts the sync, the slave reconnects and triggers another full resync, and the whole cycle repeats endlessly.
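As a sanity check, both the configured limit and the slave connection's actual buffer usage can be read on the master with standard redis-cli commands (host and port are placeholders):
redis-cli -h <master-host> -p <master-port> config get client-output-buffer-limit
# typically returns a string like: normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60
redis-cli -h <master-host> -p <master-port> client list
# the slave connection is the line with flags=S; its omem= field shows the output buffer size in bytes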
Solution:
1. Adjust the master's client-output-buffer-limit for slaves: CONFIG SET client-output-buffer-limit "slave 1073741824 268435456", raising the hard limit to 1 GB (1073741824 bytes) and the soft limit to 256 MB (268435456 bytes), so the slave connection is no longer force-closed mid-sync. The problem was solved.
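Note that client-output-buffer-limit takes three values per client class (hard limit, soft limit, soft-limit window in seconds). The window used in this case is not stated above, so the 60 in the sketch below is only the stock default; the sketch also shows how to make the change survive a restart via redis.conf rather than only via CONFIG SET:
# redis.conf on the master; the 60-second soft window is an assumed value, not taken from this article
client-output-buffer-limit slave 1gb 256mb 60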
In the master's log you can then see the following:
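The original screenshot is not reproduced here; as a rough guide, a full sync that now completes normally produces master log lines along these lines (representative Redis messages, not copied from this incident):
* Starting BGSAVE for SYNC
* Background saving terminated with success
* Synchronization with slave succeeded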
And in the slave's log you can see the following:
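Likewise, the slave's log for a successful full resync typically contains lines like these (again representative messages, not the article's screenshot):
* MASTER <-> SLAVE sync: receiving <N> bytes from master
* MASTER <-> SLAVE sync: Loading DB in memory
* MASTER <-> SLAVE sync: Finished with success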