Background:
When we configure MongoDB master/slave replication, or a MongoDB replica set, data is normally synchronized in real time. If a wrong data operation is performed on the primary node, the mistake therefore propagates to the whole cluster. To protect against this, we can pick one mongodb instance in the cluster and run it as a delayed replication node: when a misoperation happens on the primary, that one instance is not yet affected, and it can then be used to recover the data.
This is exactly what a MongoDB delayed replication node does. When the primary performs a data operation, the delayed node does not synchronize it immediately, but only after a configured period of time.
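To make the recovery idea concrete, here is a minimal sketch of how the unaffected delayed member could be used after an accidental operation on the primary. It assumes the delayed member is 192.168.52.135:27017 and the data lives in cmhtest.cmh, as in the example below; reading the secondary directly is just one option, and for larger data sets mongodump/mongorestore is more practical:

$ mongo 192.168.52.135:27017        # connect to the delayed member before the bad operation reaches it
cmh0:SECONDARY> rs.slaveOk()        // allow reads on a secondary (legacy shell helper)
cmh0:SECONDARY> use cmhtest
cmh0:SECONDARY> db.cmh.find()       // the documents damaged on the primary are still intact here
// from here the data can be exported with mongodump and, once the misoperation
// is confirmed, replayed onto the primary with mongorestore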
Configuration:
Taking my lab environment as an example, here is my MongoDB replica set:
cmh0:PRIMARY> rs.status()
{
    "set" : "cmh0",
    "date" : ISODate("2016-08-22T02:43:16.240Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.52.128:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 82,
            "optime" : Timestamp(1470581983, 1),
            "optimeDate" : ISODate("2016-08-07T14:59:43Z"),
            "electionTime" : Timestamp(1471833721, 1),
            "electionDate" : ISODate("2016-08-22T02:42:01Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.52.135:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 71,
            "optime" : Timestamp(1470581983, 1),
            "optimeDate" : ISODate("2016-08-07T14:59:43Z"),
            "lastHeartbeat" : ISODate("2016-08-22T02:43:15.138Z"),
            "lastHeartbeatRecv" : ISODate("2016-08-22T02:43:14.978Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "could not find member to sync from",
            "configVersion" : 1
        },
        {
            "_id" : 3,
            "name" : "192.168.52.135:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 75,
            "optime" : Timestamp(1470581983, 1),
            "optimeDate" : ISODate("2016-08-07T14:59:43Z"),
            "lastHeartbeat" : ISODate("2016-08-22T02:43:15.138Z"),
            "lastHeartbeatRecv" : ISODate("2016-08-22T02:43:15.138Z"),
            "pingMs" : 0,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
At this point no delayed replication node has been configured yet, so data is synchronized in real time:
cmh0:PRIMARY> use cmhtest
switched to db cmhtest
cmh0:PRIMARY> db.cmh.insert({"name":"ChenMinghui"})
WriteResult({ "nInserted" : 1 })
cmh0:PRIMARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 195secs (0.05hrs)
oplog first event time:  Mon Aug 22 2016 10:51:22 GMT+0800 (CST)
oplog last event time:   Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
now:                     Mon Aug 22 2016 10:55:00 GMT+0800 (CST)
cmh0:PRIMARY> rs.printSlaveReplicationInfo()
source: 192.168.52.135:27017
    syncedTo: Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
source: 192.168.52.135:27019
    syncedTo: Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
You can see that both secondary nodes synchronize data in real time.
Configure 192.168.52.135:27017 as a delayed replication node:
cmh0:PRIMARY> cfg = rs.conf()
{
    "_id" : "cmh0",
    "version" : 1,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.52.128:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {},
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.52.135:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {},
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 3,
            "host" : "192.168.52.135:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {},
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {},
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
cmh0:PRIMARY> cfg.members[1].priority = 0
0
cmh0:PRIMARY> cfg.members[1].slaveDelay = 30
30
cmh0:PRIMARY> rs.reconfig(cfg)
{ "ok" : 1 }
cmh0:PRIMARY> rs.conf()
{
    "_id" : "cmh0",
    "version" : 2,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.52.128:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {},
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.52.135:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 0,
            "tags" : {},
            "slaveDelay" : 30,
            "votes" : 1
        },
        {
            "_id" : 3,
            "host" : "192.168.52.135:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {},
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {},
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
You can see that "slaveDelay" : 30 now appears on the 192.168.52.135:27017 member, indicating that this node's synchronization is delayed by 30 seconds.
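As an optional refinement that is not part of the steps above, the MongoDB documentation also recommends marking a delayed member as hidden so that clients never read its deliberately stale data (its priority is already 0 here, which hidden members require). The change follows the same reconfig pattern:

cmh0:PRIMARY> cfg = rs.conf()
cmh0:PRIMARY> cfg.members[1].hidden = true      // keep the delayed member invisible to client reads
cmh0:PRIMARY> rs.reconfig(cfg)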
You can test this yourself; replication on the delayed member will lag by roughly 30 seconds. Note that the system clocks of all mongodb nodes must be kept consistent, otherwise delayed replication behaves abnormally and data may still not be synchronized after the configured delay has passed.
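A quick way to observe the delay (a sketch assuming the 30-second setting above; the test document is made up for illustration): write on the primary and watch rs.printSlaveReplicationInfo():

cmh0:PRIMARY> use cmhtest
cmh0:PRIMARY> db.cmh.insert({"name": "delay-test"})      // hypothetical test document
cmh0:PRIMARY> rs.printSlaveReplicationInfo()
// expected: 192.168.52.135:27019 shows 0 secs behind the primary almost immediately,
// while 192.168.52.135:27017 keeps showing roughly 30 secs behind until the delay elapses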