Deployment of MongoDB replication sets and administrative maintenance of replication sets on CentOS7


Overview of MongoDB replication sets

A replication set is an additional copy of the data, kept in sync across multiple servers. It provides redundancy and increases data availability, and it allows the service to recover from hardware failures and interruptions.

How replication sets work

A MongoDB replication set needs at least two nodes. One of them is the primary node (PRIMARY), which handles the clients' requests; the rest are secondary nodes (SECONDARY), which replicate the data held on the primary. Common layouts are one primary with one secondary, or one primary with several secondaries. The primary records every operation performed on it in its oplog; each secondary periodically polls the primary for these operations and replays them against its own copy of the data, which keeps the secondaries consistent with the primary (a short oplog sketch follows the list below).

Characteristics of replication sets

A cluster of N nodes; any node can become the primary; all write operations happen on the primary node; failover and recovery are automatic.
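As a quick illustration of the oplog mechanism described above, the most recent replicated operation can be inspected from the mongo shell on any data-bearing member. This is only a sketch and assumes a replica set member (such as the kgcrs set built below) is already running:

kgcrs:PRIMARY> use local                                                      // the oplog is the capped collection local.oplog.rs //
kgcrs:PRIMARY> db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()   // most recent replicated operation //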

MongoDB replication set deployment

1. Configure replication set

(1) Create the data file and log file storage paths

[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}
[root@localhost ~]# cd /data/mongodb/
[root@localhost mongodb]# mkdir logs
[root@localhost mongodb]# touch logs/mongodb{2,3,4}.log
[root@localhost mongodb]# cd logs/
[root@localhost logs]# ls
mongodb2.log  mongodb3.log  mongodb4.log
[root@localhost logs]# chmod 777 *.log

(2) Edit the configuration files of 4 MongoDB instances

First edit the main MongoDB configuration file, set the replication parameter replSetName to kgcrs, and then make three copies of the file. The specific steps are as follows:

[root@localhost etc]# vim mongod.conf
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Listen to local interface only, comment to listen on all interfaces.

#security:

#operationProfiling:

replication:
  replSetName: kgcrs

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Then set the port parameter to 27018 in mongod2.conf, to 27019 in mongod3.conf, and to 27020 in mongod4.conf. Also change the dbPath and log path parameters in each file to the corresponding values.
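For reference, a minimal sketch of what /etc/mongod2.conf could look like after these edits; the dbPath and log file names below simply follow the directories created in step (1) and are assumptions to adapt to your own layout:

# /etc/mongod2.conf (mongod3.conf and mongod4.conf differ only in the port and path values)
systemLog:
  destination: file
  path: /data/mongodb/logs/mongodb2.log
storage:
  dbPath: /data/mongodb/mongodb2
processManagement:
  fork: true
net:
  port: 27018
  bindIp: 0.0.0.0
replication:
  replSetName: kgcrs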

(3) Start the 4 MongoDB nodes and view the process information

[root@localhost etc]# mongod -f /etc/mongod.conf --shutdown    // stop //
[root@localhost etc]# mongod -f /etc/mongod.conf               // start //
[root@localhost etc]# mongod -f /etc/mongod2.conf
[root@localhost etc]# mongod -f /etc/mongod3.conf
[root@localhost etc]# mongod -f /etc/mongod4.conf
[root@localhost etc]# netstat -ntap | grep mongod
tcp    0    0 0.0.0.0:27019    0.0.0.0:*    LISTEN    17868/mongod
tcp    0    0 0.0.0.0:27020    0.0.0.0:*    LISTEN    17896/mongod
tcp    0    0 0.0.0.0:27017    0.0.0.0:*    LISTEN    17116/mongod
tcp    0    0 0.0.0.0:27018    0.0.0.0:*    LISTEN    17413/mongod

(4) Configure a replication set with three nodes

[root@localhost etc]# mongo
> rs.status()        // view the replication set //
{
    "info" : "run rs.initiate(...) if not yet done for the set",
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized",
    "$clusterTime" : {
        "clusterTime" : Timestamp(0, 0),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"192.168.126.132:27017"},{"_id":1,"host":"192.168.126.132:27018"},{"_id":2,"host":"192.168.126.132:27019"}]}    // define the replication set //
{
    "_id" : "kgcrs",
    "members" : [
        { "_id" : 0, "host" : "192.168.126.132:27017" },
        { "_id" : 1, "host" : "192.168.126.132:27018" },
        { "_id" : 2, "host" : "192.168.126.132:27019" }
    ]
}
> rs.initiate(cfg)    // initialize; make sure the secondary nodes hold no data when initializing the configuration //

(5) View replication set status

After the replication set has been started, view its full status information again with the rs.status() command:

kgcrs:SECONDARY> rs.status()
{
    "set" : "kgcrs",
    "date" : ISODate("2018-07-17T07:18:52.047Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",        // primary node //
            "uptime" : 2855,
            "optime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-17T07:18:48Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1531811847, 1),
            "electionDate" : ISODate("2018-07-17T07:17:27Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",      // secondary node //
            "uptime" : 95,
            "optime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-17T07:18:48Z"),
            "optimeDurableDate" : ISODate("2018-07-17T07:18:48Z"),
            "lastHeartbeat" : ISODate("2018-07-17T07:18:51.208Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-17T07:18:51.720Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.126.132:27017",
            "syncSourceHost" : "192.168.126.132:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",      // secondary node //
            "uptime" : 95,
            "optime" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1531811928, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-17T07:18:48Z"),
            "optimeDurableDate" : ISODate("2018-07-17T07:18:48Z"),
            "lastHeartbeat" : ISODate("2018-07-17T07:18:51.208Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-17T07:18:51.822Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.126.132:27017",
            "syncSourceHost" : "192.168.126.132:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1531811928, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1531811928, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

A health value of 1 indicates that the node is healthy and 0 indicates that it is down. A state value of 1 marks the primary node and 2 marks a secondary node.
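If the full rs.status() output is too verbose, a short loop over its members array prints just these fields; for the set built above the result would look roughly like this (a sketch, not part of the original procedure):

kgcrs:PRIMARY> rs.status().members.forEach(function (m) {
...     print(m.name, m.stateStr, "health=" + m.health, "state=" + m.state);
... })
192.168.126.132:27017 PRIMARY health=1 state=1
192.168.126.132:27018 SECONDARY health=1 state=2
192.168.126.132:27019 SECONDARY health=1 state=2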

Make sure that the secondary nodes hold no data when the replication set configuration is initialized.
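One hedged way to double-check this before calling rs.initiate() is to list the databases on each would-be secondary; only built-in databases such as admin and local should appear:

[root@localhost etc]# mongo --port 27018 --quiet --eval "printjson(db.adminCommand('listDatabases'))"
[root@localhost etc]# mongo --port 27019 --quiet --eval "printjson(db.adminCommand('listDatabases'))"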

MongoDB replication set switchover

A MongoDB replication set provides high availability for the cluster and automatically fails over to another node when the primary fails. The primary and secondary roles of the replication set can also be switched manually.

1. Failover

[root@localhost etc]# ps aux | grep mongod        // view the processes //
root  17116  1.2  5.8 1546916 58140 ?      Sl   14:31   0:51 mongod -f /etc/mongod.conf
root  17413  1.0  5.7 1445624 57444 ?      Sl   14:34   0:39 mongod -f /etc/mongod2.conf
root  17868  1.2  5.5 1446752 55032 ?      Sl   15:05   0:23 mongod -f /etc/mongod3.conf
root  17896  0.8  4.7 1037208 47552 ?      Sl   15:05   0:16 mongod -f /etc/mongod4.conf
root  18836  0.0  0.0  112676   980 pts/1  S+   15:38   0:00 grep --color=auto mongod
[root@localhost etc]# kill -9 17116               // kill the 27017 process //
[root@localhost etc]# ps aux | grep mongod
root  17413  1.0  5.7 1453820 57456 ?      Sl   14:34   0:40 mongod -f /etc/mongod2.conf
root  17868  1.2  5.5 1454948 55056 ?      Sl   15:05   0:24 mongod -f /etc/mongod3.conf
root  17896  0.8  4.7 1037208 47552 ?      Sl   15:05   0:16 mongod -f /etc/mongod4.conf
root  18843  0.0  0.0  112676   976 pts/1  R+   15:38   0:00 grep --color=auto mongod
[root@localhost etc]# mongo --port 27019
kgcrs:PRIMARY> rs.status()
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 0,                  // down //
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }
        },
        {
            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",      // secondary node //
            "uptime" : 1467,
            "optime" : { "ts" : Timestamp(1531813296, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1531813296, 1), "t" : NumberLong(2) }
        },
        {
            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",        // new primary //
            "uptime" : 2178,
            "optime" : { "ts" : Timestamp(1531813296, 1), "t" : NumberLong(2) }
        }
    ]
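A quick way to confirm which of the surviving nodes has been elected primary (assuming the same ports as above) is to ask each instance directly; a minimal sketch:

[root@localhost etc]# mongo --port 27018 --quiet --eval "db.isMaster().ismaster"
false
[root@localhost etc]# mongo --port 27019 --quiet --eval "db.isMaster().ismaster"
true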

2. Manually switch between primary and secondary

kgcrs:PRIMARY> rs.freeze(30)         // do not take part in elections for 30 s //
kgcrs:PRIMARY> rs.stepDown(60, 30)   // step down as primary and stay a secondary for at least 60 s; wait up to 30 s for a secondary to catch up //
2018-07-17T15:46:19.079+0800 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27019' :
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DB.prototype.adminCommand@src/mongo/shell/db.js:186:16
rs.stepDown@src/mongo/shell/utils.js:1341:12
@(shell):1:1
2018-07-17T15:46:19.082+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27019 (127.0.0.1) failed
2018-07-17T15:46:19.085+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27019 (127.0.0.1) ok
kgcrs:SECONDARY>                     // immediately after stepping down, the old primary becomes a secondary //
kgcrs:SECONDARY> rs.status()
        {
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 0,              // down //
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }
        },
        {
            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",    // new primary //
            "uptime" : 1851,
            "optime" : { "ts" : Timestamp(1531813679, 1), "t" : NumberLong(3) }
        },
        {
            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",  // back to secondary //
            "uptime" : 2563,
            "optime" : { "ts" : Timestamp(1531813689, 1), "t" : NumberLong(3) }
        }

MongoDB replication set election principle

Node types are divided into standard nodes (host), passive nodes (passive) and arbiter nodes (arbiter).

Only standard nodes can be elected primary, and they have voting rights. A passive node holds a complete copy of the data and has voting rights, but it cannot become the primary. Arbiter nodes do not replicate data and cannot become the primary; they only vote. The difference between a standard node and a passive node is the priority value: a high priority makes a node standard, a low priority makes it passive. The election rule is that the member with the most votes wins; the priority value ranges from 0 to 1000 and is equivalent to 0 to 1000 extra votes. Election result: the member with the most votes wins; if the votes are tied, the member with the newer data wins.

1. Configure the priority of the replication set

1) Reconfigure the 4-node MongoDB replication set, setting up two standard nodes, one passive node and one arbiter node.

[root@localhost etc]# mongo
> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"192.168.126.132:27017","priority":100},{"_id":1,"host":"192.168.126.132:27018","priority":100},{"_id":2,"host":"192.168.126.132:27019","priority":0},{"_id":3,"host":"192.168.126.132:27020","arbiterOnly":true}]}
> rs.initiate(cfg)                   // reconfigure //
kgcrs:SECONDARY> rs.isMaster()
{
    "hosts" : [                      // standard nodes //
        "192.168.126.132:27017",
        "192.168.126.132:27018"
    ],
    "passives" : [                   // passive node //
        "192.168.126.132:27019"
    ],
    "arbiters" : [                   // arbiter node //
        "192.168.126.132:27020"
    ],
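Priorities do not have to be fixed at rs.initiate() time. As a hedged sketch, an already running set can also be re-weighted in place with rs.conf() and rs.reconfig(), which may trigger a new election:

kgcrs:PRIMARY> cfg = rs.conf()                 // fetch the current configuration //
kgcrs:PRIMARY> cfg.members[1].priority = 200   // example: raise the priority of the 27018 member //
kgcrs:PRIMARY> rs.reconfig(cfg)                // apply it; the set may re-elect a primary //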

2) Simulate a failure of the primary node

If the primary node fails, another standard node will be elected as the new primary node.

[root@localhost etc]# mongod -f /etc/mongod.conf --shutdown    // stop the standard node 27017 //
[root@localhost etc]# mongo --port 27018                       // the second standard node becomes the primary //
kgcrs:PRIMARY> rs.status()
        {
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 0,              // down //
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }
        },
        {
            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",    // standard node, now primary //
            "uptime" : 879,
            "optime" : { "ts" : Timestamp(1531817473, 1), "t" : NumberLong(2) }
        },
        {
            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",  // passive node //
            "uptime" : 569,
            "optime" : { "ts" : Timestamp(1531817473, 1), "t" : NumberLong(2) }
        },
        {
            "_id" : 3,
            "name" : "192.168.126.132:27020",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",    // arbiter node //
            "uptime" : 569
        }

3) Simulate the failure of all standard nodes

When all standard nodes have failed, the passive node still cannot become the primary node.

[root@localhost etc]# mongod -f /etc/mongod2.conf --shutdown   // stop the standard node 27018 //
[root@localhost etc]# mongo --port 27019
kgcrs:SECONDARY> rs.status()
        {
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 0,              // down //
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0
        },
        {
            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 0,              // down //
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0
        },
        {
            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",  // passive node, still a secondary //
            "uptime" : 1403
        },
        {
            "_id" : 3,
            "name" : "192.168.126.132:27020",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",    // arbiter node //
        }

MongoDB replication set management

1. Allow data to be read from the secondary nodes

By default, the secondary nodes of a MongoDB replication set cannot serve reads; the rs.slaveOk() command allows data to be read from a secondary.

[root@localhost etc]# mongo --port 27017
kgcrs:SECONDARY> show dbs            // cannot read database information yet //
2018-07-17T17:11:31.570+0800 E QUERY    [thread1] Error: listDatabases failed:{
    "operationTime" : Timestamp(1531818690, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk"
kgcrs:SECONDARY> rs.slaveOk()
kgcrs:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB

2. View replication status information

You can use the rs.printReplicationInfo() and rs.printSlaveReplicationInfo() commands to view the replication set status.

kgcrs:SECONDARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 2092secs (0.58hrs)
oplog first event time:  Tue Jul 17 2018 16:41:48 GMT+0800 (CST)
oplog last event time:   Tue Jul 17 2018 17:16:40 GMT+0800 (CST)
now:                     Tue Jul 17 2018 17:16:46 GMT+0800 (CST)
kgcrs:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.126.132:27017
    syncedTo: Tue Jul 17 2018 17:16:50 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
source: 192.168.126.132:27019
    syncedTo: Tue Jul 17 2018 17:16:50 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary

3. Deploy authenticated replication

kgcrs:PRIMARY> use admin
kgcrs:PRIMARY> db.createUser({"user":"root","pwd":"123","roles":["root"]})

[root@localhost ~]# vim /etc/mongod.conf         // edit all four configuration files //
....
security:
  keyFile: /usr/bin/kgcrskey1    # key file path (each instance points at its own kgcrskey file)
  clusterAuthMode: keyFile       # authentication type
[root@localhost ~]# vim /etc/mongod2.conf
[root@localhost ~]# vim /etc/mongod3.conf
[root@localhost ~]# vim /etc/mongod4.conf

[root@localhost bin]# echo "kgcrskey" > kgcrskey1    // generate the key files for the 4 instances (same content in each) //
[root@localhost bin]# echo "kgcrskey" > kgcrskey2
[root@localhost bin]# echo "kgcrskey" > kgcrskey3
[root@localhost bin]# echo "kgcrskey" > kgcrskey4
[root@localhost bin]# chmod 600 kgcrskey{1..4}

[root@localhost bin]# mongod -f /etc/mongod.conf     // restart the 4 instances //
[root@localhost bin]# mongod -f /etc/mongod2.conf
[root@localhost bin]# mongod -f /etc/mongod3.conf
[root@localhost bin]# mongod -f /etc/mongod4.conf

[root@localhost bin]# mongo --port 27017             // enter the standard node //
kgcrs:PRIMARY> show dbs              // cannot view databases before authenticating //
kgcrs:PRIMARY> rs.status()           // cannot view the replication set either //
kgcrs:PRIMARY> use admin             // authenticate //
kgcrs:PRIMARY> db.auth("root","123")
kgcrs:PRIMARY> show dbs              // now the databases are visible //
admin   0.000GB
config  0.000GB
local   0.000GB
kgcrs:PRIMARY> rs.status()           // and the replication set status as well //
        {
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 411
        },
        {
            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 324
        },
        {
            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 305
        },
        {
            "_id" : 3,
            "name" : "192.168.126.132:27020",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 280
        }
