MongoDB replica sets
MongoDB replication is the process of synchronizing data on multiple servers.
Replication provides redundant backups by keeping copies of the data on multiple servers, which improves availability and protects the data.
Replication also allows you to recover from hardware failures and service interruptions.

Advantages of replica sets:
- Keeps the data safe
- High availability of data (24/7)
- Disaster recovery
- Maintenance without downtime (e.g. backups, re-indexing, compaction)
- Distributed reads

MongoDB replication principle
MongoDB replication requires at least two nodes. One of them is the primary, which handles client requests; the rest are secondaries, which replicate the data from the primary.
Common arrangements are one primary with one secondary, or one primary with multiple secondaries.
The primary records all of its write operations in its oplog. The secondaries periodically poll the primary for these operations and apply them to their own copy of the data, which keeps the secondaries consistent with the primary.
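Because the oplog is just a capped collection in the local database, you can inspect the operations the secondaries will replay. A minimal sketch from the mongo shell (local.oplog.rs is the standard oplog collection for replica sets; the exact documents you see depend on your workload):

test:PRIMARY> use local
test:PRIMARY> db.oplog.rs.find().sort({ $natural: -1 }).limit(3)
# each entry records the operation type (op), the namespace it applies to (ns) and a timestamp (ts);
# secondaries poll for entries newer than the last one they applied and replay them locally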
The MongoDB replication structure works as follows: the client reads data from the primary, and when the client writes to the primary, the primary synchronizes with the secondaries to keep the data consistent.
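In practice, clients are usually given a replica-set connection string listing several members, so the driver can discover the current primary on its own and fail over transparently; a read preference can additionally route queries to secondaries. A sketch using the hosts configured later in this article (how the URI is passed differs slightly between drivers and mongo shell versions):

mongodb://192.168.217.129:27017,192.168.217.129:27018,192.168.217.129:27019/?replicaSet=test
# inside the mongo shell, reads on the current connection can also be steered to a secondary:
db.getMongo().setReadPref("secondaryPreferred")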
Replica set characteristics:
- An n-node cluster
- Any node can become the primary
- All write operations go to the primary
- Automatic failover
- Automatic recovery

Install MongoDB (tar installation):

1. Install and create multiple instances:

tar -zxvf mongodb-linux-x86_64-3.2.1.tgz -C /usr/local/
cd /usr/local/
mv mongodb-linux-x86_64-3.2.1 mongodb            # rename the directory
mkdir -p /data/mongodb/mongodb{1,2,3,4}          # create the data directories
mkdir -p /data/logs
touch /data/logs/mongodb{1,2,3,4}.log            # create the log files
chmod 777 /data/logs/*.log                       # grant permissions on the log files
cd /usr/local/mongodb/bin
vim mongodb1.conf
port=27017
dbpath=/data/mongodb/mongodb1
logpath=/data/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
replSet=test                                     # replica set name

(Create mongodb2.conf through mongodb4.conf in the same way, changing port, dbpath and logpath for each instance.)

For a yum installation, edit the service's config file instead: uncomment the replication: section and add the replica set name:
replication:
  replSetName: test                              # replica set name

ln -s /usr/local/mongodb/bin/mongod /usr/bin/    # add the mongodb commands to the system path
ln -s /usr/local/mongodb/bin/mongo /usr/bin/

2. Start the instances and initialize the replica set:

[root@localhost bin]# mongod -f mongodb1.conf    # start each instance (repeat for mongodb2.conf, mongodb3.conf, ...)
[root@localhost bin]# mongo
> cfg={"_id":"test","members":[{"_id":0,"host":"192.168.217.129:27017"},{"_id":1,"host":"192.168.217.129:27018"},{"_id":2,"host":"192.168.217.129:27019"}]}
# define the replica set configuration; the "_id" must match the replica set name (replSet=test)
{ "_id" : "test", "members" : [ { "_id" : 0, "host" : "192.168.217.129:27017" }, { "_id" : 1, "host" : "192.168.217.129:27018" }, { "_id" : 2, "host" : "192.168.217.129:27019" } ] }
> rs.initiate(cfg)                               # initialize; make sure the secondaries hold no data yet
{ "ok" : 1 }
test:PRIMARY> rs.status()                        # view the full status of the replica set
{
    "set" : "test",
    ...
        {
            "_id" : 0,
            "name" : "192.168.217.129:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",              # port 27017 is the primary
            "uptime" : 1234,
            "optime" : { "ts" : Timestamp(1531961046, 1), "t" : NumberLong(1) },
            ...
        },
        {
            "_id" : 1,
            "name" : "192.168.217.129:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",            # secondary
            "uptime" : 49,
            "optime" : { "ts" : Timestamp(1531961046, 1), "t" : NumberLong(1) },
            ...
        },
        {
            "_id" : 2,
            "name" : "192.168.217.129:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",            # secondary
            "uptime" : 49,
            "optime" : { "ts" : Timestamp(1531961046, 1), "t" : NumberLong(1) },
            ...
        }
    ],
    "ok" : 1
}

3. Add and remove nodes:

test:PRIMARY> rs.add("192.168.217.129:27020")    # add a node
{ "ok" : 1 }
test:PRIMARY> rs.status()
...
        {
            "_id" : 3,
            "name" : "192.168.217.129:27020",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            ...
        }
...
test:PRIMARY> rs.remove("192.168.217.129:27020") # remove the node
{ "ok" : 1 }
test:PRIMARY> rs.status()
...

4. Simulate a failure and check whether the primary switches automatically:

[root@localhost bin]# mongod -f mongodb1.conf --shutdown    # shut down the current primary (27017)
killing process with pid: 3552
[root@localhost bin]# mongo --port 27018
test:SECONDARY> rs.status()
{
    "set" : "test",
    ...
        {
            "_id" : 0,
            "name" : "192.168.217.129:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : { ... },
            ...
        },
        {
            "_id" : 1,
            "name" : "192.168.217.129:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1811,
            ...
        },
        {
            "_id" : 2,
            "name" : "192.168.217.129:27019",          # the primary switched over automatically
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 712,
            ...
        }
    ],
    "ok" : 1
}
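After the automatic failover, the former primary can simply be started again; once it has caught up on the oplog it rejoins the set as a secondary. A minimal sketch, assuming the same mongodb1.conf as above:

[root@localhost bin]# mongod -f mongodb1.conf      # restart the former primary (27017)
[root@localhost bin]# mongo --port 27017
test:SECONDARY> db.isMaster().ismaster             # false: this node came back as a secondary
test:SECONDARY> db.isMaster().primary              # shows the member that is currently primary, e.g. "192.168.217.129:27019"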
{"_ id": 2, "name": "192.168.217.129 health 27019", # automatically switch master node "health": 1, "state": 1, "stateStr": "PRIMARY", "uptime": 712,. ], "ok": 1} 5. Manually switch the master node: [root@localhost bin] # mongo-- port 27019 test:PRIMARY > rs.freeze (30) # suspend for 30s not to participate in the election {"ok": 1} test:PRIMARY > rs.stepDown (60Magazine 30) # hand over the master node position, maintain the slave node state for not less than 60 seconds, wait 30 seconds to make the master node and slave node log synchronization test:SECONDARY > rs.status (). 6. Allow reading data from nodes: test:SECONDARY > show dbs # 2018-07-19T09:04:34.898+0800 E QUERY [thread1] Error: listDatabases failed: {"ok": 0, "errmsg": "not master and slaveOk=false" "code": 13435}: _ getErrorWithCode@src/mongo/shell/utils.js:23:13Mongo.prototype.getDBs@src/mongo/shell/mongo.js:53:1shellHelper.show@src/mongo/shell/utils.js:700:19shellHelper@src/mongo/shell/utils.js:594:15@ (shellhelp2): 1:1test:SECONDARY > rs.slaveOk () # allows data test:SECONDARY > show dbslocal 1.078GB7 to be read from the node by default. Change oplog size: test:PRIMARY > use localswitched to db localtest:PRIMARY > rs.printReplicationInfo () # View the size that log files can use default oplog size takes up 5% of the available disk space for 64-bit instances configured oplog size: 95.37109375MBlog length start to end: 1103secs (0.31hrs) oplog first event time: Thu Jul 19 2018 08:43:55 GMT+0800 (CST) oplog last event time: Thu Jul 19 2018 09:02:18 GMT+0800 (CST) now: Thu Jul 19 2018 09:20:08 GMT+0800 (CST) test:PRIMARY > db.runCommand ({"convertToCapped": "oplog.rs" "size": 10000000000}) # revision unit: B {"ok": 1} test:PRIMARY > rs.printReplicationInfo () configured oplog size: 9536.746032714844MBlog length start to end: 1103secs (0.31hrs) oplog first event time: Thu Jul 19 2018 08:43:55 GMT+0800 (CST) oplog last event time: Thu Jul 19 2018 09:02:18 GMT+0800 (CST) now: Thu Jul 19 2018 09:20:24 GMT+0800 (CST) test:PRIMARY >