2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains how to migrate data smoothly between two Redis clusters. It is a situation many people run into in real operations, so let's walk through how to handle it. I hope you read it carefully and get something out of it!
Problem

For various reasons we need to migrate servers in the production environment, and that includes a Redis cluster serving live traffic. When the data source changes, how do we move the existing data to the new instances smoothly, so that the migration is seamless for clients?
Scheme 1: based on Redis's own RDB/AOF backup mechanism

Execute SAVE or BGSAVE to trigger persistence of an RDB file
Copy the Redis backup file (dump.rdb) to the target machine
Restart the target instance so that it loads the RDB file on startup
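The three steps above can be sketched as a small runbook. This is a sketch, not the article's exact procedure: the hostnames source-host/target-host and the data directory /var/lib/redis are assumptions; adapt them to your deployment.

```shell
# 1. Trigger an RDB snapshot on the source instance. BGSAVE forks a child
#    process, so it does not block other clients the way SAVE does; check
#    LASTSAVE to confirm the snapshot has finished before copying.
redis-cli -h source-host -p 6379 BGSAVE
redis-cli -h source-host -p 6379 LASTSAVE

# 2. Copy the resulting dump.rdb to the target machine's data directory
#    (/var/lib/redis is an assumed path).
scp source-host:/var/lib/redis/dump.rdb target-host:/var/lib/redis/dump.rdb

# 3. Restart the target instance; Redis loads dump.rdb from its data
#    directory on startup.
ssh target-host 'systemctl restart redis'
```

Note that writes arriving on the source after the snapshot are not captured by this approach; for a live cluster the redis-shake scheme below also handles incremental changes.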
About the difference between SAVE and BGSAVE:

Command   IO mode        Complexity   Drawback
SAVE      synchronous    O(n)         blocks the client
BGSAVE    asynchronous   O(n)         needs fork(), consumes extra memory
Scheme 2: JSON backup based on redis-dump

redis-dump exports and restores Redis data as JSON: https://github.com/delano/redis-dump
# Export all data
redis-dump -u 127.0.0.1:6379 > lengleng.json
# Export a specific database
redis-dump -u 127.0.0.1:6379 -d 15 > lengleng.json
# If Redis requires a password
redis-dump -u :password@127.0.0.1:6379 > lengleng.json
# Import
< lengleng.json redis-load
# Import when Redis requires a password
< lengleng.json redis-load -u :password@127.0.0.1:6379

Scheme 3: redis-cluster migration based on redis-shake

redis-shake is a tool for Redis data synchronization, open-sourced by the Alibaba Cloud Redis & MongoDB team: https://github.com/alibaba/RedisShake

Create two clusters with Docker
docker run --name redis-cluster1 -e CLUSTER_ANNOUNCE_IP=192.168.0.31 -p 7000-7005:7000-7005 -p 17000-17005:17000-17005 pig4cloud/redis-cluster:4.0
docker run --name redis-cluster2 -e CLUSTER_ANNOUNCE_IP=192.168.0.31 -p 8000-8005:7000-7005 -p 18000-18005:17000-17005 pig4cloud/redis-cluster:4.0
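Once both clusters are up, a useful sanity check during migration is that a given key maps to the same hash slot in the source and target cluster (the slot-to-node layout may differ, but the slot itself is deterministic). Below is a minimal Python sketch of Redis Cluster's slot computation — CRC16/XMODEM modulo 16384, with {hash tag} handling; it is an illustration, not part of the article's toolchain.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the CRC Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key: only the {hash tag} is hashed if a non-empty one exists."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# "123456789" is the standard CRC16/XMODEM check input (0x31C3 = 12739)
print(key_slot("123456789"))  # 12739
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

Comparing `key_slot(k)` for a sample of keys against `CLUSTER KEYSLOT k` on both clusters is a quick way to confirm keys landed where expected.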
Configure redis-shake.conf
source.type: cluster
source.address: master@192.168.0.31:7000  # configure one node; the rest are auto-discovered
target.type: cluster
target.address: master@192.168.0.31:8000  # configure one node; the rest are auto-discovered
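The conf file is a simple line-oriented key/value format. The helper below, parse_shake_conf, is a hypothetical function for illustration only — it assumes the colon-separated "key: value" form shown in this article (redis-shake releases also ship conf files using "key = value"; check your version) — and makes it easy to sanity-check the source/target addresses before starting a sync:

```python
conf_text = """
source.type: cluster
source.address: master@192.168.0.31:7000  # one node; auto-discovery
target.type: cluster
target.address: master@192.168.0.31:8000  # one node; auto-discovery
"""

def parse_shake_conf(text: str) -> dict:
    """Parse 'key: value' lines; '#' starts a comment; blank lines are skipped."""
    conf = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        key, sep, value = line.partition(":")  # split on the FIRST colon only,
        if sep:                                # so host:port values stay intact
            conf[key.strip()] = value.strip()
    return conf

conf = parse_shake_conf(conf_text)
print(conf["source.address"])  # master@192.168.0.31:7000
print(conf["target.type"])     # cluster
```

redis-shake is then launched in sync mode against the conf file; in the 2.x releases this is roughly `./redis-shake.linux -type=sync -conf=redis-shake.conf` (flag names vary by version, so check the project README).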
Run the synchronization: redis-shake first performs a full sync, then switches to incremental synchronization.
RESTful monitoring metrics

You can inspect redis-shake's internal health through its RESTful monitoring metrics; the default port is 9320: http://127.0.0.1:9320/metric
That is the end of "how to smoothly migrate data between two Redis clusters". Thank you for reading!