
MongoDB basic exercises: replica sets and node elections at your fingertips


Introduction to replica sets

A MongoDB replica set consists of a group of mongod instances (processes): one Primary node and multiple Secondary nodes. The MongoDB driver (client) writes all data to the Primary, and the Secondaries replicate from the Primary, so every member of the replica set stores the same data set, providing high availability of data.
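For illustration (this connection example is not part of the original walkthrough): a client normally names the replica set in its connection string, so the driver discovers the current Primary by itself. With the hosts and set name used later in this article, a mongo shell connection would look roughly like:

mongo "mongodb://192.168.137.11:27017,192.168.137.11:27018,192.168.137.11:27019/?replicaSet=yang"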

Why replication?

Ensures data safety

High availability of data (24*7)

Disaster recovery

Maintenance without downtime (such as backups, re-indexing, compaction)

Distributed reads (see the sketch below)
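For the distributed-reads point, a minimal sketch in the mongo shell (db.info is just a placeholder collection): setting a read preference lets queries be served by secondaries.

yang:SECONDARY> db.getMongo().setReadPref("secondaryPreferred")    // prefer secondaries for reads
yang:SECONDARY> db.info.find()                                     // may now be served by a secondary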

Replica set features:

N-node cluster

Any node can be used as the primary node

All writes are on the primary node

Automatic failover

Automatic recovery

Replica set diagram

Experimental environment

System: CentOS 7

MongoDB version: v3.6.7

Operation flow

Add three instances

mkdir -p /data/mongodb/mongodb{2,3,4}          # first create the data storage locations
mkdir -p /data/mongodb/logs/                   # log file location
touch /data/mongodb/logs/mongodb{2,3,4}.log    # create the log files
chmod 777 /data/mongodb/logs/*.log             # grant permissions on the log files
cd /data/mongodb/                              # check that it worked
[root@cent mongodb]# ls
logs  mongodb2  mongodb3  mongodb4
[root@cent mongodb]# cd logs/
[root@cent logs]# ll
total 0
-rwxrwxrwx. 1 root root 0 Sep 12 09:48 mongodb2.log
-rwxrwxrwx. 1 root root 0 Sep 12 09:48 mongodb3.log
-rwxrwxrwx. 1 root root 0 Sep 12 09:48 mongodb4.log

Edit the configuration files for instances 2, 3, and 4

vim /etc/mongod2.conf
# For instances 3 and 4, change mongodb2.log to mongodb3.log/mongodb4.log,
# dbPath to mongodb3/mongodb4, and the port to 27019/27020.
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/mongodb2.log
storage:
  dbPath: /data/mongodb/mongodb2
  journal:
    enabled: true
net:
  port: 27018
  bindIp: 0.0.0.0
replication:
  replSetName: yang
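Since the three files differ only in log path, dbPath, and port, you could also generate mongod3.conf and mongod4.conf from mongod2.conf instead of editing each by hand; a small sketch, assuming the layout above:

for i in 3 4; do
  port=$((27016 + i))    # 27019 for instance 3, 27020 for instance 4
  sed -e "s/mongodb2/mongodb$i/g" -e "s/27018/$port/" /etc/mongod2.conf > /etc/mongod$i.conf
done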

Start and verify

Start the services:
[root@cent logs]# mongod -f /etc/mongod2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 83795
child process started successfully, parent exiting
[root@cent logs]# mongod -f /etc/mongod3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 83823
child process started successfully, parent exiting
[root@cent logs]# mongod -f /etc/mongod4.conf
about to fork child process, waiting until server is ready for connections.
forked process: 83851
child process started successfully, parent exiting
[root@cent logs]# netstat -ntap    # check the ports; you should see 27017, 27018, 27019 and 27020

Login test

[root@cent logs]# mongo --port 27019    # log in on a specified port
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.6.7
>

Replica set operations

Define the replica set

[root@cent logs]# mongo    # port 27017
> cfg={"_id":"yang","members":[{"_id":0,"host":"192.168.137.11:27017"},{"_id":1,"host":"192.168.137.11:27018"},{"_id":2,"host":"192.168.137.11:27019"}]}    // define the replica set
{
    "_id" : "yang",
    "members" : [
        { "_id" : 0, "host" : "192.168.137.11:27017" },
        { "_id" : 1, "host" : "192.168.137.11:27018" },
        { "_id" : 2, "host" : "192.168.137.11:27019" }
    ]
}
> rs.initiate(cfg)    // launch the replica set
{
    "ok" : 1,
    "operationTime" : Timestamp(1536848891, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536848891, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

View replica set information

yang:OTHER> db.stats()
{
    "db" : "test",
    "collections" : 0,
    "views" : 0,
    "objects" : 0,
    "avgObjSize" : 0,
    "dataSize" : 0,
    "storageSize" : 0,
    "numExtents" : 0,
    "indexes" : 0,
    "indexSize" : 0,
    "fileSize" : 0,
    "fsUsedSize" : 0,
    "fsTotalSize" : 0,
    "ok" : 1,
    "operationTime" : Timestamp(1536741495, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536741495, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

View replica set status

yang:PRIMARY> rs.status()
{
    "set" : "yang",
    "date" : ISODate("2018-09-12T08:58:56.358Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.137.11:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 24741,
            "optime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-09-12T08:58:48Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1536741506, 1),
            "electionDate" : ISODate("2018-09-12T08:38:26Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.137.11:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1240,
            "optime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-09-12T08:58:48Z"),
            "optimeDurableDate" : ISODate("2018-09-12T08:58:48Z"),
            "lastHeartbeat" : ISODate("2018-09-12T08:58:54.947Z"),
            "lastHeartbeatRecv" : ISODate("2018-09-12T08:58:55.699Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.137.11:27017",
            "syncSourceHost" : "192.168.137.11:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.137.11:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1240,
            "optime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-09-12T08:58:48Z"),
            "optimeDurableDate" : ISODate("2018-09-12T08:58:48Z"),
            "lastHeartbeat" : ISODate("2018-09-12T08:58:54.947Z"),
            "lastHeartbeatRecv" : ISODate("2018-09-12T08:58:55.760Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.137.11:27017",
            "syncSourceHost" : "192.168.137.11:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1536742728, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536742728, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

Adding and removing nodes

Before adding a node, make sure it holds no data; otherwise data may be lost.

Add a 27020 node

yang:PRIMARY> rs.add("192.168.137.11:27020")
{
    "ok" : 1,
    "operationTime" : Timestamp(1536849195, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536849195, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

Check

yang:PRIMARY> rs.status()
...
        {
            "_id" : 3,
            "name" : "192.168.137.11:27020",    // 27020 now appears at the end of the member list
...

Remove 27020

yang:PRIMARY> rs.remove("192.168.137.11:27020")
{
    "ok" : 1,
    "operationTime" : Timestamp(1536849620, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536849620, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

Failover

Note: each mongodb instance corresponds to one process, so killing the process shuts the node down; we use this to simulate a failure.
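Before killing anything, it helps to confirm which instance currently holds the primary role and then match it to a pid; one quick way in the mongo shell (db.isMaster() also appears in the help list later in this article):

yang:PRIMARY> db.isMaster().primary    // prints e.g. "192.168.137.11:27017"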

View the process

[root@cent mongodb] # ps aux | grep mongod

Root 74970 0.4 3.1 1580728 59392? Sl 21:47 0:15 mongod-f / etcmongod.conf

Root 75510 0.4 2.8 1465952 53984? Sl 22:16 0:07 mongod-f / etcmongod2.conf

Root 75538 0.4 2.9 1501348 54496? Sl 22:17 0:07 mongod-f / etcmongod3.conf

Root 75566 0.3 2.7 1444652 52144? Sl 22:17 0:06 mongod-f / etcmongod4.conf

Kill the primary (27017)

[root@cent mongodb]# kill -9 74970
[root@cent mongodb]# ps aux | grep mongod
root  75510  0.4  2.9  1465952  55016 ?  Sl  22:16  0:10  mongod -f /etc/mongod2.conf
root  75538  0.4  2.9  1493152  55340 ?  Sl  22:17  0:10  mongod -f /etc/mongod3.conf
root  75566  0.3  2.7  1444652  52168 ?  Sl  22:17  0:08  mongod -f /etc/mongod4.conf

Log in to 27018 and inspect

yang:SECONDARY> rs.status()
...
        "_id" : 0,
        "name" : "192.168.137.11:27017",
        "health" : 0,    // health is 0: the original primary is down
...
        "_id" : 2,
        "name" : "192.168.137.11:27019",
        "health" : 1,
        "state" : 1,
        "stateStr" : "PRIMARY",    // and this server has grabbed the primary role

Manual switching

A manual switchover must be performed on the primary. The primary is now 27019.

Suspend the election for 30 seconds

[root@cent mongodb]# mongo --port 27019
yang:PRIMARY> rs.freeze(30)
{
    "ok" : 0,
    "errmsg" : "cannot freeze node when primary or running for election. state: Primary",
    "code" : 95,
    "codeName" : "NotSecondary",
    "operationTime" : Timestamp(1536851239, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536851239, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

(rs.freeze() only works on a secondary, which is why the current primary rejects it here.)

Hand over the primary role: stay in the secondary state for at least 60 seconds, and wait up to 30 seconds for a secondary to catch up with the primary.

yang:PRIMARY> rs.stepDown(60, 30)

2018-09-13T23:07:48.655+0800 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27019':
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DB.prototype.adminCommand@src/mongo/shell/db.js:186:16
rs.stepDown@src/mongo/shell/utils.js:1341:12
@(shell):1:1
2018-09-13T23:07:48.658+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27019 (127.0.0.1) failed
2018-09-13T23:07:48.659+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27019 (127.0.0.1) ok
yang:SECONDARY>    // the prompt shows the node stepped straight down from primary to secondary

Node election

The replica set is initialized with the replSetInitiate command (or the mongo shell's rs.initiate()). After initialization, the members send heartbeat messages to one another and initiate a Primary election; the node that wins the votes of a "majority" of members becomes the Primary, and the remaining nodes become Secondaries.
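To make "majority" concrete, here is a minimal sketch (not from the original article) that counts reachable voting members using rs.conf() and rs.status(); with 3 voting members, 2 votes are needed:

var cfg = rs.conf(), st = rs.status();
var voting = {};
cfg.members.forEach(function (m) { if (m.votes !== 0) voting[m.host] = true; });    // votes defaults to 1; arbiters vote too
var total = Object.keys(voting).length;
var up = st.members.filter(function (m) { return voting[m.name] && m.health === 1; }).length;
print(up + "/" + total + " voting members reachable; majority needed: " + (Math.floor(total / 2) + 1));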

Going back to the "define the replica set" step above, we now change the statement slightly to add priority values and an arbiter node.

> cfg={"_id":"yang","members":[{"_id":0,"host":"192.168.137.11:27017","priority":100},{"_id":1,"host":"192.168.137.11:27018","priority":100},{"_id":2,"host":"192.168.137.11:27019","priority":0},{"_id":3,"host":"192.168.137.11:27020","arbiterOnly":true}]}

{
    "_id" : "yang",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.137.11:27017",
            "priority" : 100    // priority
        },
        {
            "_id" : 1,
            "host" : "192.168.137.11:27018",
            "priority" : 100    // priority
        },
        {
            "_id" : 2,
            "host" : "192.168.137.11:27019",
            "priority" : 0    // priority
        },
        {
            "_id" : 3,
            "host" : "192.168.137.11:27020",
            "arbiterOnly" : true
        }
    ]
}

Initiate with cfg

> rs.initiate(cfg)
{
    "ok" : 1,
    "operationTime" : Timestamp(1536852325, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536852325, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

View instance relationships and logs

yang:OTHER> rs.isMaster()
{
    "hosts" : [    // standard nodes
        "192.168.137.11:27017",
        "192.168.137.11:27018"
    ],
    "passives" : [    // passive node
        "192.168.137.11:27019"
    ],
    "arbiters" : [    // arbiter node
        "192.168.137.11:27020"
    ],
    "setName" : "yang",
    "setVersion" : 1,
    "ismaster" : false,
    "secondary" : true,
    "me" : "192.168.137.11:27017",
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1536852325, 1),
            "t" : NumberLong(-1)
        },
        "lastWriteDate" : ISODate("2018-09-13T15:25:25Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2018-09-13T15:25:29.008Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1,
    "operationTime" : Timestamp(1536852325, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536852325, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

Add a collection, perform some basic operations, and let the oplog update.

yang:PRIMARY> use mood
switched to db mood
yang:PRIMARY> db.info.insert({"id":1,"name":"mark"})
WriteResult({ "nInserted" : 1 })
yang:PRIMARY> db.info.find()
{ "_id" : ObjectId("5b9a8244b4360d88324a69fc"), "id" : 1, "name" : "mark" }
yang:PRIMARY> db.info.update({"id":1},{$set:{"name":"zhangsan"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
yang:PRIMARY> db.info.find()
{ "_id" : ObjectId("5b9a8244b4360d88324a69fc"), "id" : 1, "name" : "zhangsan" }

View the log

yang:PRIMARY> use local
switched to db local
yang:PRIMARY> show collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
system.rollback.id
yang:PRIMARY> db.oplog.rs.find()    // prints a large amount of output

Preemption test

First, shut down the primary node.

[root@cent ~]# mongod -f /etc/mongod.conf --shutdown
killing process with pid: 74970

Log in to the next node, 27018, and find that it has become the primary node. Then shut down 27018 as well; with both standard nodes gone, test whether the passive node becomes the primary.

Log in to the passive node

yang:SECONDARY> rs.status()
...
        "_id" : 2,
        "name" : "192.168.137.11:27019",
        "health" : 1,
        "state" : 2,
        "stateStr" : "SECONDARY",
...

The passive node does not take over as primary. Now restart the two nodes that were shut down earlier.

[root@cent ~]# mongod -f /etc/mongod2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 77132
child process started successfully, parent exiting
[root@cent ~]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 77216
child process started successfully, parent exiting

27018 has become the primary node again.

[root@cent ~]# mongo --port 27018
yang:PRIMARY>

This shows that only the standard nodes compete for the primary role.

Data permissions

Data is synchronized from the Primary to the Secondaries through the oplog. After a write completes on the Primary, an oplog entry is written to the special collection local.oplog.rs; the Secondaries continuously fetch new oplog entries from the Primary and apply them.

Because the oplog keeps growing, local.oplog.rs is a capped collection: when it reaches its configured size limit, the oldest entries are deleted. In addition, since an oplog entry may be applied repeatedly on a Secondary, oplog entries must be idempotent, that is, applying them repeatedly yields the same result.
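You can see the idempotence in the oplog itself: the update made earlier to mood.info is recorded as a $set of the final value, so replaying it gives the same result. A quick look (v3.6 oplog format):

yang:PRIMARY> use local
yang:PRIMARY> db.oplog.rs.find({ ns: "mood.info", op: "u" }).sort({ $natural: -1 }).limit(1).pretty()
// the "o" field holds { "$set" : { "name" : "zhangsan" } } rather than the original update expression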

By default, only the master (primary) node can be queried; a slave (secondary) node has no read permission and returns the following error.

yang:SECONDARY> show dbs
2018-09-13T23:58:34.112+0800 E QUERY    [thread1] Error: listDatabases failed:{
    "operationTime" : Timestamp(1536854312, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536854312, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}:
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:849:19
shellHelper@src/mongo/shell/utils.js:739:15
@(shellhelp2):1:1

Use the following command to allow reads on the slave node:

yang:SECONDARY> rs.slaveOk()
yang:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
mood    0.000GB

The arbiter node does not replicate data from the primary node.

You can see that only two nodes sync data and report replication lag; the 27020 arbiter node does not appear.

yang:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.137.11:27017
    syncedTo: Fri Sep 14 2018 00:03:52 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
source: 192.168.137.11:27019
    syncedTo: Fri Sep 14 2018 00:03:52 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary

Getting help with replica sets

yang:SECONDARY> rs.help()    // list the replica-set helper commands
    rs.status()                                { replSetGetStatus : 1 } checks repl set status
    rs.initiate()                              { replSetInitiate : null } initiates set with default settings
    rs.initiate(cfg)                           { replSetInitiate : cfg } initiates set with configuration cfg
    rs.conf()                                  get the current configuration object from local.system.replset
    rs.reconfig(cfg)                           updates the configuration of a running replica set with cfg (disconnects)
    rs.add(hostportstr)                        add a new member to the set with default attributes (disconnects)
    rs.add(membercfgobj)                       add a new member to the set with extra attributes (disconnects)
    rs.addArb(hostportstr)                     add a new member which is arbiterOnly:true (disconnects)
    rs.stepDown([stepdownSecs, catchUpSecs])   step down as primary (disconnects)
    rs.syncFrom(hostportstr)                   make a secondary sync from the given member
    rs.freeze(secs)                            make a node ineligible to become primary for the time specified
    rs.remove(hostportstr)                     remove a host from the replica set (disconnects)
    rs.slaveOk()                               allow queries on secondary nodes
    rs.printReplicationInfo()                  check oplog size and time range
    rs.printSlaveReplicationInfo()             check replica set members and replication lag
    db.isMaster()                              check who is primary

Reconfiguration helpers disconnect from the database, so the shell will display an error even if the command succeeds.

To summarize the attributes of each node type:

Arbiter

An Arbiter node only participates in voting; it cannot be elected Primary and does not sync data from the Primary.

For example, if you deploy a replica set with two nodes, one Primary and one Secondary, then if either node goes down the replica set cannot provide service (no Primary can be elected). In this case you can add an Arbiter node to the replica set; then even if one node goes down, a Primary can still be elected.

An Arbiter itself stores no data and is a very lightweight service. When a replica set has an even number of members, it is best to add an Arbiter node to improve the availability of the replica set.
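On a running set, an arbiter can also be added after the fact with rs.addArb() (listed in the help output above); for this article's fourth instance:

yang:PRIMARY> rs.addArb("192.168.137.11:27020")    // joins as arbiterOnly:true and stores no data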

Priority0

A Priority0 node has an election priority of 0 and will never be elected Primary.

For example, if you deploy a replica set across data centers A and B and want the Primary to always be in data center A, set the Priority of the members in data center B to 0, so that the Primary must be a member in data center A. (Note: if you deploy this way, it is best to place a "majority" of nodes in data center A; otherwise a Primary may not be electable during a network partition.)
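A minimal sketch of demoting a running member to Priority0 via rs.reconfig() (member index 2 here is just an example):

yang:PRIMARY> cfg = rs.conf()
yang:PRIMARY> cfg.members[2].priority = 0    // this member can no longer be elected primary
yang:PRIMARY> rs.reconfig(cfg)               // note: reconfig briefly disconnects the shell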

Vote0

As of MongoDB 3.0, a replica set can have at most 50 members, but at most 7 of them may vote in a Primary election; the remaining members must have their vote property set to 0 (Vote0), that is, they do not vote.
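A sketch of stripping a member's vote (MongoDB also requires a non-voting member to have priority 0):

yang:PRIMARY> cfg = rs.conf()
yang:PRIMARY> cfg.members[2].votes = 0       // the member no longer votes in elections
yang:PRIMARY> cfg.members[2].priority = 0    // required alongside votes: 0
yang:PRIMARY> rs.reconfig(cfg)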

Hidden

A Hidden node cannot be elected Primary (its Priority is 0) and is invisible to the Driver.

Because a Hidden node receives no Driver requests, it can be used for data backups, offline computation, and similar tasks without affecting the replica set's service.
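A sketch of hiding a member for backup or offline work:

yang:PRIMARY> cfg = rs.conf()
yang:PRIMARY> cfg.members[2].hidden = true   // invisible to drivers; receives no client reads
yang:PRIMARY> cfg.members[2].priority = 0    // a hidden member must have priority 0
yang:PRIMARY> rs.reconfig(cfg)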

Delayed

A Delayed node must also be a Hidden node, and its data lags behind the Primary by a configurable period (for example, 1 hour).

Because a Delayed node's data lags behind the Primary by a period of time, if incorrect or invalid data is written to the Primary, the Delayed node's data can be used to recover to an earlier point in time.
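A sketch of a one-hour delayed member (in v3.6 the field is slaveDelay; the member must also be hidden with priority 0):

yang:PRIMARY> cfg = rs.conf()
yang:PRIMARY> cfg.members[2].priority = 0
yang:PRIMARY> cfg.members[2].hidden = true
yang:PRIMARY> cfg.members[2].slaveDelay = 3600    // stay one hour behind the primary
yang:PRIMARY> rs.reconfig(cfg)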
