2025-04-03 Update From: SLTechnology News&Howtos
Running slaveof on the slave to replicate from the master did not succeed.
The redis process log on the master showed the following:
427:S 03 May 20:32:07.320 * Slave 10.9.95.71:6379 asks for synchronization
427:S 03 May 20:32:07.320 * Unable to partial resync with slave slaveip:6379 for lack of backlog (Slave request was: $).
427:S 03 May 20:32:07.320 * Delay next BGSAVE for SYNC
427:S 03 May 20:32:13.350 * Starting BGSAVE for SYNC with target: slaves sockets
427:S 03 May 20:32:14.333 * Background RDB transfer started by pid 478
427:S 03 May 20:35:04.136 # Connection with slave slaveip:6379 lost.
427:S 03 May 20:35:04.912 # Background transfer error
Manually checking the rdb file with stat confirmed that the bgsave itself succeeded.
Since the error is reported as a transfer error, the rdb file apparently failed while being sent from the master to the slave.
Searching suggested that the client-output-buffer-limit setting was too small.
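The stat check above is just a file-level sanity check on the dump; the path below is an assumption (it comes from the dir and dbfilename settings in redis.conf), and a stand-in file is created here so the commands are runnable anywhere:

```shell
# Hypothetical RDB path; on a real master, use dir + dbfilename from redis.conf.
RDB="${RDB:-/tmp/dump.rdb}"
printf 'REDIS0009' > "$RDB"   # stand-in for a real RDB file (real header starts with REDIS)
stat "$RDB"                   # a nonzero size and a fresh mtime confirm BGSAVE wrote it
```

On the real server, the mtime reported by stat should match the "Background RDB transfer started" timestamp in the log.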
client-output-buffer-limit
The documentation notes that this parameter limits the size of client output buffers.
Redis clients fall into three classes: normal, slave, and pubsub.
normal: regular clients, including MONITOR clients
slave: slave (replica) clients
pubsub: pub/sub clients
The syntax is as follows:
client-output-buffer-limit [class] [hard limit] [soft limit] [soft seconds]
[class] is the client class.
[hard limit] is the hard limit: a client whose output buffer exceeds [hard limit] is disconnected immediately.
[soft limit] and [soft seconds] together form the soft limit: a client whose output buffer stays above [soft limit] for [soft seconds] seconds is disconnected.
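The hard/soft distinction can be sketched as follows. This is not Redis source code, just a minimal simulation of the rule described above, using the slave-class values applied later in this article:

```python
HARD_LIMIT = 32 * 1024**3    # bytes: disconnect immediately when exceeded
SOFT_LIMIT = 256 * 1024**2   # bytes: disconnect only if sustained...
SOFT_SECONDS = 600           # ...for this many seconds

def should_disconnect(buf_bytes, soft_exceeded_since, now):
    """Return (disconnect, new_soft_exceeded_since)."""
    if buf_bytes > HARD_LIMIT:
        return True, soft_exceeded_since          # hard limit: cut off at once
    if buf_bytes > SOFT_LIMIT:
        if soft_exceeded_since is None:
            soft_exceeded_since = now             # start the soft-limit clock
        elif now - soft_exceeded_since >= SOFT_SECONDS:
            return True, soft_exceeded_since      # soft limit held too long
        return False, soft_exceeded_since
    return False, None                            # back under the soft limit: reset

# 300 MB sustained for 10+ minutes trips the soft limit; 33 GB trips the hard limit at once.
print(should_disconnect(300 * 1024**2, 0, 601)[0])   # True
print(should_disconnect(33 * 1024**3, None, 0)[0])   # True
print(should_disconnect(300 * 1024**2, None, 0)[0])  # False (clock just started)
```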
By default, normal clients are not limited (the limit is set to 0), because they only receive data in reply to their own requests (pull rather than push), so only an asynchronous client could accumulate replies faster than it reads them.
Slave and pubsub clients do have default limits, because they receive data by push.
Here, the hard limit for slave clients is raised to 32 GB:
config set client-output-buffer-limit "normal 0 0 0 slave 34359738368 268435456 600 pubsub 33554432 8388608 60"
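The limit values in this command are plain byte counts, just powers of 1024; a quick sketch to verify the slave-class numbers:

```python
# Derive the byte values used in the config set command above.
hard_limit = 32 * 1024**3    # 32 GB hard limit for slave clients
soft_limit = 256 * 1024**2   # 256 MB soft limit (over 600 seconds)
print(hard_limit)            # 34359738368
print(soft_limit)            # 268435456
```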
After running slaveof again, info replication and the master's redis process log showed that replication was established successfully:
427:S 03 May 21:41:16.984 * Slave $slaveip:6379 asks for synchronization
427:S 03 May 21:41:16.984 * Full resync requested by slave $slaveip:6379
427:S 03 May 21:41:16.984 * Delay next BGSAVE for SYNC
427:S 03 May 21:41:22.712 * Starting BGSAVE for SYNC with target: slaves sockets
427:S 03 May 21:41:23.679 * Background RDB transfer started by pid 568
568:C 03 May 21:51:14.107 * RDB: xxxxx MB of memory used by copy-on-write
427:S 03 May 21:51:15.012 * Background RDB transfer terminated with success
427:S 03 May 21:51:15.012 # Slave $slaveip:6379 correctly received the streamed RDB file.
427:S 03 May 21:51:15.012 * Streamed RDB transfer with slave $slaveip:6379 succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming
After the replication is established, change the client-output-buffer-limit to the default value.
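For reference, the stock redis.conf defaults for this parameter are the following (a 256mb hard limit and a 64mb/60s soft limit for slave clients):

```
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
```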