2025-01-18 Update From: SLTechnology News & Howtos > Database
Shulou (Shulou.com) 06/01 Report --
MongoDB Replica Sets

MongoDB replication is the process of synchronizing data to multiple servers. A replica set provides redundant copies of the data, which improves availability and keeps the data safe. It also lets you recover from hardware failures and service interruptions.
What does a replica set provide?

- data security and high availability
- disaster recovery
- maintenance without downtime (backups, re-indexing, compaction)
- distributed reads
- transparency to the application layer

How a MongoDB replica set works

A replica set needs at least two nodes: one primary, which handles client requests, and one or more secondaries, which replicate the primary's data. Common layouts are one primary with one secondary, or one primary with several secondaries. The primary records every operation it performs in its oplog; the secondaries periodically poll the primary for new oplog entries and apply them to their own copies of the data, which keeps the secondaries consistent with the primary. The replica set structure diagram is as follows:
(the diagram above is taken from http://www.runoob.com/mongodb/mongodb-replication.html)
In the diagram above, the client reads from and writes to the primary; the primary then exchanges data with the secondaries to keep all copies consistent.
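The oplog mechanism described above can be sketched as a toy model in Python (illustrative only, not the real protocol; the `Primary`/`Secondary` classes and integer timestamps are simplifications):

```python
# Toy model of oplog-based replication: the primary records each write in an
# oplog; secondaries poll for every entry after the last one they applied.

class Primary:
    def __init__(self):
        self.data = {}
        self.oplog = []          # ordered list of (ts, key, value) operations

    def write(self, key, value):
        self.data[key] = value
        self.oplog.append((len(self.oplog), key, value))

    def ops_after(self, ts):
        # a secondary asks: "give me everything after timestamp ts"
        return [op for op in self.oplog if op[0] > ts]

class Secondary:
    def __init__(self):
        self.data = {}
        self.last_ts = -1        # timestamp of the last applied operation

    def poll(self, primary):
        for ts, key, value in primary.ops_after(self.last_ts):
            self.data[key] = value   # re-apply the primary's operation
            self.last_ts = ts

primary = Primary()
secondary = Secondary()
primary.write("user", "alice")
primary.write("role", "admin")
secondary.poll(primary)              # a periodic poll brings the copy up to date
assert secondary.data == primary.data
```

Because the secondary only asks for operations after its own last timestamp, repeated polling is cheap and a lagging secondary simply catches up on its next poll.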
Characteristics of replica sets: in a cluster of N nodes, any node can become the primary; all writes go through the primary; failover and recovery are automatic.

How do you deploy a MongoDB replica set?
For this master-slave experiment, create four MongoDB instances on the same server.

For how to install MongoDB, see the earlier blog post on installing MongoDB 4.0 (the latest version at the time) on the Linux platform.
Start deploying

1. Create 4 MongoDB instances

```shell
# create the data directory for each instance
mkdir -p /data/mongodb/mongodb{1,2,3,4}
# create the instance configuration directory
mkdir -p /data/conf/
# create the instance log directory
mkdir -p /data/logs/
# create the log files
touch /data/logs/mongodb{1,2,3,4}.log
# give the log files 777 permissions
chmod 777 /data/logs/*.log
```

2. Edit the mongodb1.conf configuration file, enabling replication and setting the replSetName parameter

```shell
vim /data/conf/mongodb1.conf
```

```yaml
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb1.log                    # mongodb1 log file path

# where and how to store data
storage:
  dbPath: /data/mongodb/mongodb1/                  # mongodb1 data file path
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true                                       # fork and run in background
  pidFilePath: /data/mongodb/mongodb1/mongod.pid   # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017                                      # mongodb1 port number
  bindIp: 0.0.0.0                                  # listen on all interfaces

#security:
#operationProfiling:

replication:                                       # remove the leading "#" to enable replication
  replSetName: test-rc                             # the replica set name is test-rc

#sharding:
## Enterprise-Only Options
#auditLog:
#snmp:
```

3. Copy mongodb1.conf to generate the other three instance configuration files

```shell
cp -p /data/conf/mongodb1.conf /data/conf/mongodb2.conf
cp -p /data/conf/mongodb1.conf /data/conf/mongodb3.conf
cp -p /data/conf/mongodb1.conf /data/conf/mongodb4.conf
```
4. Modify the mongodb2.conf, mongodb3.conf, and mongodb4.conf configuration files

Each instance's file differs from mongodb1.conf only in its log path, dbPath, pidFilePath, and port. For example, mongodb2.conf:

```yaml
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb2.log                    # mongodb2 log file path
storage:
  dbPath: /data/mongodb/mongodb2/                  # mongodb2 data file path
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/mongodb2/mongod.pid   # each instance needs its own pidfile
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27018                                      # mongodb2 port number
  bindIp: 0.0.0.0
replication:
  replSetName: test-rc                             # all instances share the replica set name test-rc
```

mongodb3.conf and mongodb4.conf follow the same pattern, using mongodb3 with port 27019 and mongodb4 with port 27020 respectively.

5. Launch the MongoDB instances

```shell
for i in 1 2 3 4
do
  mongod -f /data/conf/mongodb$i.conf
done
```

Check the mongod process information:

```shell
[root@localhost conf]# netstat -tunlp | grep mongod
```
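Since the four configuration files above differ only in the instance number and port, they can also be generated mechanically. A minimal sketch (the template mirrors the paths and ports used above; writing the files to disk is left out):

```python
# Generate the per-instance configs used above: instance i logs to
# /data/logs/mongodbi.log, stores data under /data/mongodb/mongodbi/,
# and listens on port 27016 + i. Rendered to strings here; in practice
# you would write each one to /data/conf/mongodbi.conf.

TEMPLATE = """systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb{i}.log
storage:
  dbPath: /data/mongodb/mongodb{i}/
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/mongodb{i}/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: {port}
  bindIp: 0.0.0.0
replication:
  replSetName: test-rc
"""

def render_conf(i: int) -> str:
    return TEMPLATE.format(i=i, port=27016 + i)

configs = {i: render_conf(i) for i in range(1, 5)}
assert "port: 27018" in configs[2]
assert "dbPath: /data/mongodb/mongodb4/" in configs[4]
```

This avoids the copy-and-edit step entirely and makes it hard to forget to change one of the per-instance fields.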
6. Configure a three-node replica set

6.1 Log in to the default MongoDB instance (default port 27017):

```shell
mongo
```

6.2 View the replica set status:

```
> rs.status()
```
6.3 Define the cfg initialization parameters (start with three members; another node will be added later):

```
> cfg={"_id":"test-rc","members":[{"_id":0,"host":"192.168.100.100:27017"},{"_id":1,"host":"192.168.100.100:27018"},{"_id":2,"host":"192.168.100.100:27019"}]}
```
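The cfg document above is plain JSON, so it is easy to sanity-check before passing it to rs.initiate(): member _id values must be unique and each host must be a host:port pair. A small sketch of such a check (`validate_cfg` is a hypothetical helper, not part of any MongoDB driver):

```python
# Sanity-check a replica set config document before rs.initiate(cfg).
# validate_cfg is a hypothetical helper, not a MongoDB API.

def validate_cfg(cfg: dict) -> None:
    ids = [m["_id"] for m in cfg["members"]]
    if len(ids) != len(set(ids)):
        raise ValueError("duplicate member _id")
    for m in cfg["members"]:
        host, _, port = m["host"].partition(":")
        if not host or not port.isdigit():
            raise ValueError(f"bad host: {m['host']!r}")

cfg = {
    "_id": "test-rc",
    "members": [
        {"_id": 0, "host": "192.168.100.100:27017"},
        {"_id": 1, "host": "192.168.100.100:27018"},
        {"_id": 2, "host": "192.168.100.100:27019"},
    ],
}
validate_cfg(cfg)   # passes silently for the config used above
```

Catching a duplicate _id or a malformed host here is much cheaper than debugging a failed rs.initiate() on the server.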
6.4 Initiate the replica set (make sure the secondaries hold no data when initializing the configuration):

```
> rs.initiate(cfg)
```
6.5 View the replica set status:

```
test-rc:PRIMARY> rs.status()
{
    "set" : "test-rc",
    "date" : ISODate("2018-07-14T04:46:58.710Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1531543618, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1531543618, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1531543618, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1531543618, 1), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1531543608, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.100:27017",
            "health" : 1,                 // health status
            "state" : 1,                  // 1: primary  2: secondary
            "stateStr" : "PRIMARY",       // primary node
            "uptime" : 2886,
            "optime" : { "ts" : Timestamp(1531543618, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-14T04:46:58Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1531543426, 1),
            "electionDate" : ISODate("2018-07-14T04:43:46Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.100.100:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",     // secondary node
            "uptime" : 202,
            "optime" : { "ts" : Timestamp(1531543608, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1531543608, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-14T04:46:48Z"),
            "optimeDurableDate" : ISODate("2018-07-14T04:46:48Z"),
            "lastHeartbeat" : ISODate("2018-07-14T04:46:56.765Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-14T04:46:57.395Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.100:27017",
            "syncSourceHost" : "192.168.100.100:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.100.100:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",     // secondary node
            "uptime" : 202,
            "optime" : { "ts" : Timestamp(1531543608, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1531543608, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-14T04:46:48Z"),
            "optimeDurableDate" : ISODate("2018-07-14T04:46:48Z"),
            "lastHeartbeat" : ISODate("2018-07-14T04:46:56.769Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-14T04:46:57.441Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.100:27017",
            "syncSourceHost" : "192.168.100.100:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1531543618, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1531543618, 1),
        "signature" : {
            "hash" : BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
```
Special reminder: as shown above, the primary is the 192.168.100.100:27017 node.

Note: performing any writes on a secondary is strictly prohibited.
7. Add a node

```
test-rc:PRIMARY> rs.add("192.168.100.100:27020")
```

View the replica set status:

```
test-rc:PRIMARY> rs.status()
```

8. Remove a node

```
test-rc:PRIMARY> rs.remove("192.168.100.100:27018")
```

View the replica set status:

```
test-rc:PRIMARY> rs.status()
```

The 192.168.100.100:27018 node no longer appears in the output.
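Both rs.add() and rs.remove() are reconfigurations: each produces a new member list and bumps the config version (the configVersion field seen in rs.status()). A toy model of that bookkeeping (illustrative only, not a driver API):

```python
# Toy replica set config: adding/removing members bumps configVersion,
# mirroring what rs.add()/rs.remove() do to the config document.

class RsConfig:
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.version = 1

    def add(self, host):
        self.hosts.append(host)
        self.version += 1        # every reconfiguration increments configVersion

    def remove(self, host):
        self.hosts.remove(host)
        self.version += 1

cfg = RsConfig(["192.168.100.100:27017",
                "192.168.100.100:27018",
                "192.168.100.100:27019"])
cfg.add("192.168.100.100:27020")      # step 7
cfg.remove("192.168.100.100:27018")   # step 8
assert "192.168.100.100:27018" not in cfg.hosts
assert cfg.version == 3               # initial version 1 + two reconfigurations
```

This matches the "configVersion" : 3 that appears in the rs.status() output later in the walkthrough.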
9. Failover

9.1 Exit MongoDB:

```
test-rc:PRIMARY> exit
```

9.2 View the mongod process information:

```shell
netstat -tunlp | grep mongod
```

You can see the process information for all 4 instances.

9.3 Kill the primary (the process listening on port 27017) and check whether the set switches over automatically:

```shell
kill -9 48211
```
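What happens after the kill -9 can be pictured as follows: the surviving nodes notice the missed heartbeats and, provided they still form a strict majority of the voting members, elect a new primary. A toy simulation of that majority rule (illustrative only; real MongoDB elections also use Raft-like terms and member priorities):

```python
# Toy failover: nodes are identified by port; the set can elect a new
# primary from the survivors only while a majority is still alive.

def majority(voting_members: int) -> int:
    """Smallest vote count that is a strict majority."""
    return voting_members // 2 + 1

def elect_primary(alive_ports, total_members):
    if len(alive_ports) < majority(total_members):
        return None                  # no majority: the set has no primary
    return min(alive_ports)          # toy tie-break; MongoDB uses priorities/terms

members = [27017, 27019, 27020]      # the 3-node set after steps 7 and 8
alive = [p for p in members if p != 27017]   # kill -9 the primary on 27017
assert elect_primary(alive, len(members)) is not None   # 2 of 3 is a majority

# A 3-node set survives one failure but not two:
assert elect_primary([27020], len(members)) is None
```

This is also why odd-sized sets are recommended: a 4-node set still needs 3 reachable voters, so it tolerates no more failures than a 3-node set.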
9.4 Log in to the instance on port 27019:

```shell
mongo --port 27019
```

9.5 View the status information of each node (output truncated to the member entries):

```
test-rc:PRIMARY> rs.status()
...
        {
            "_id" : 2,
            "name" : "192.168.100.100:27019",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1547,
            "optime" : { "ts" : Timestamp(1531544567, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2018-07-14T05:02:47Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1531544345, 1),
            "electionDate" : ISODate("2018-07-14T04:59:05Z"),
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 3,
            "name" : "192.168.100.100:27020",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 700,
            "optime" : { "ts" : Timestamp(1531544567, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1531544567, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2018-07-14T05:02:47Z"),
            "optimeDurableDate" : ISODate("2018-07-14T05:02:47Z"),
            "lastHeartbeat" : ISODate("2018-07-14T05:02:56.150Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-14T05:02:56.289Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.100:27019",
            "syncSourceHost" : "192.168.100.100:27019",
            "syncSourceId" : 2,
            "infoMessage" : "",
            "configVersion" : 3
        }
...
```
Special reminder: the primary is now the 192.168.100.100:27019 node, showing that the primary was switched over automatically.
10. Manually switch the primary

10.1 Freeze a node for 30 seconds so it does not take part in elections:

```
test-rc:PRIMARY> rs.freeze(30)
{
    "operationTime" : Timestamp(1531544867, 1),
    "ok" : 0,
    "errmsg" : "cannot freeze node when primary or running for election. state: Primary",
    "code" : 95,
    "codeName" : "NotSecondary",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1531544867, 1),
        "signature" : {
            "hash" : BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
```

Note: rs.freeze() only works on a secondary, so running it on the current primary fails with the NotSecondary error shown above.

10.2 Hand over the primary role: the node steps down and stays a secondary for no less than 60 seconds, waiting up to 30 seconds for a secondary's log to catch up with the primary:

```
test-rc:PRIMARY> rs.stepDown(60, 30)
2018-07-14T01:08:07.326-0400 E QUERY    [js] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27019'  :
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DB.prototype.adminCommand@src/mongo/shell/db.js:186:16
rs.stepDown@src/mongo/shell/utils.js:1398:12
@(shell):1:1
2018-07-14T01:08:07.328-0400 I NETWORK  [js] trying reconnect to 127.0.0.1:27019 failed
2018-07-14T01:08:07.329-0400 I NETWORK  [js] reconnect 127.0.0.1:27019 ok
```

The network error is expected: stepping down closes all client connections, and the shell reconnects on its own.

10.3 View the replica set status:

```
test-rc:SECONDARY> rs.status()
{
    "set" : "test-rc",
    "date" : ISODate("2018-07-14T05:10:31.161Z"),
    "myState" : 2,
    "term" : NumberLong(3),
    "syncingTo" : "192.168.100.100:27020",
    "syncSourceHost" : "192.168.100.100:27020",
    "syncSourceId" : 3,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
        "appliedOpTime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
        "durableOpTime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1531545018, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.100:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 70,
            "optime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
            "optimeDate" : ISODate("2018-07-14T05:10:28Z"),
            "syncingTo" : "192.168.100.100:27020",
            "syncSourceHost" : "192.168.100.100:27020",
            "syncSourceId" : 3,
            "infoMessage" : "",
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2,
            "name" : "192.168.100.100:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 68,
            "optime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
            "optimeDurable" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
            "optimeDate" : ISODate("2018-07-14T05:10:28Z"),
            "optimeDurableDate" : ISODate("2018-07-14T05:10:28Z"),
            "lastHeartbeat" : ISODate("2018-07-14T05:10:30.079Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-14T05:10:31.094Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.100:27020",
            "syncSourceHost" : "192.168.100.100:27020",
            "syncSourceId" : 3,
            "infoMessage" : "",
            "configVersion" : 3
        },
        {
            "_id" : 3,
            "name" : "192.168.100.100:27020",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 68,
            "optime" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
            "optimeDurable" : { "ts" : Timestamp(1531545028, 1), "t" : NumberLong(3) },
            "optimeDate" : ISODate("2018-07-14T05:10:28Z"),
            "optimeDurableDate" : ISODate("2018-07-14T05:10:28Z"),
            "lastHeartbeat" : ISODate("2018-07-14T05:10:30.079Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-14T05:10:29.561Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1531544897, 1),
            "electionDate" : ISODate("2018-07-14T05:08:17Z"),
            "configVersion" : 3
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1531545028, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1531545028, 1),
        "signature" : {
            "hash" : BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
```
Special reminder: the primary is now on the 192.168.100.100:27020 node.
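The rs.stepDown(stepDownSecs, catchUpSecs) call used above only succeeds if some electable secondary catches up with the primary's latest write within the catch-up window; otherwise the node stays primary. A toy model of that rule (illustrative; `catchup_possible` abstracts away the real time-based wait):

```python
# Toy model of rs.stepDown(stepDownSecs, catchUpSecs): the primary may
# step down only if a secondary reaches its latest optime, either
# immediately or within the catch-up window.

def can_step_down(primary_optime, secondary_optimes, catchup_possible=True):
    # catchup_possible stands in for "a secondary catches up within catchUpSecs"
    if any(t >= primary_optime for t in secondary_optimes):
        return True                   # a secondary is already caught up
    return catchup_possible           # otherwise it depends on the catch-up wait

assert can_step_down(100, [100, 98]) is True                          # caught up
assert can_step_down(100, [90, 95], catchup_possible=False) is False  # nobody catches up
```

This is why the walkthrough waits for the secondaries' logs to synchronize: handing over the primary role to a stale secondary would lose the most recent writes.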
© 2024 shulou.com SLNews company. All rights reserved.