MongoDB Daily Operation and Maintenance - 04: Replica Set Construction

2025-01-18 Update | From: SLTechnology News & Howtos > Database



One: Summary of common MongoDB commands

Two: MongoDB installation

Three: MongoDB master-slave replication setup

Four: MongoDB replica set setup

Five: MongoDB replica set failover

Six: Summary of errors when building a MongoDB replica set

Reference:

https://blog.csdn.net/qq_39329616/article/details/89409728

Four: MongoDB replica set setup

Replica sets address a shortcoming of master-slave replication: the lack of automatic failover.

A replica set has no fixed master node. The cluster automatically elects a primary, and when the current primary stops working properly, it elects another node to take its place.

A replica set always contains one primary node and one or more backup (secondary) nodes; when something goes wrong with the primary, a backup node is promoted to primary.

A replica set may also include an arbiter node that only participates in voting and does not replicate data; it breaks ties in elections. Officially, an odd number of cluster nodes is recommended.

How the replica set works

1. oplog (operation log)

The oplog records data-change operations (inserts, updates, deletes). It is a capped (fixed-size) collection stored in the local database of every replicating node.

New operations overwrite the oldest ones, so the oplog never exceeds its preset size; each document in the oplog represents one operation performed on the primary node.
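This bounded-size behavior can be sketched in a few lines of plain Python (an illustration of a capped log, not MongoDB code):

```python
from collections import deque

# A toy capped oplog: like MongoDB's capped collection, it has a fixed
# capacity, and appending beyond it silently discards the oldest entry.
oplog = deque(maxlen=3)

for ts, (op, doc) in enumerate([("insert", "a"), ("insert", "b"),
                                ("update", "a"), ("insert", "c")], start=1):
    oplog.append({"ts": ts, "op": op, "doc": doc})

# The entry with ts=1 has been pushed out; only the 3 newest remain.
print([e["ts"] for e in oplog])  # -> [2, 3, 4]
```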

2. Data synchronization

Every oplog entry carries a timestamp, which each slave node uses to track the last write operation it has applied.

When a slave node updates itself, it does three things:

First, it checks the last timestamp in its own oplog.

Second, it queries the primary node's oplog for all entries greater than that timestamp.

Finally, it applies those entries to its own data and appends them to its own oplog.
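The three steps above can be sketched as a small pull loop (illustrative Python, not MongoDB internals; `sync_once` and the data shapes are made up for the example):

```python
def sync_once(primary_oplog, slave_oplog, apply):
    """One round of timestamp-based catch-up, mirroring the three steps:
    1) check the last timestamp in the slave's own oplog,
    2) query primary oplog entries newer than that timestamp,
    3) apply them locally and append them to the slave's oplog."""
    last_ts = slave_oplog[-1]["ts"] if slave_oplog else 0
    new_entries = [e for e in primary_oplog if e["ts"] > last_ts]
    for entry in new_entries:
        apply(entry)               # replay the operation on local data
        slave_oplog.append(entry)  # record it so the next round resumes here
    return len(new_entries)

primary = [{"ts": t, "op": "insert", "doc": t} for t in (1, 2, 3, 4)]
slave_oplog, data = [{"ts": 1, "op": "insert", "doc": 1}], [1]
applied = sync_once(primary, slave_oplog, lambda e: data.append(e["doc"]))
print(applied, data)  # -> 3 [1, 2, 3, 4]
```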

3. Replication state and the local database

Replication-state documents are kept in the database named local.

The contents of the master node's local database are not replicated to the slave nodes.

If you have documents that you do not want replicated to the slave nodes, you can put them in the local database.

4. Blocking replication

When the master node writes too fast, the slave nodes may be unable to keep up.

To avoid this situation:

Make the primary node's oplog large enough; or

Use blocking replication: run the getLastError command on the primary with the parameter "w" to ensure the data has been replicated to at least w nodes. The larger w is, the slower writes become.
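The effect of `w` can be illustrated with a toy simulation (plain Python; `replicated_write` is a made-up helper for this sketch, not a driver API):

```python
def replicated_write(doc, nodes, w):
    """Simulated write with write concern w: the client is acknowledged
    only if at least w nodes (primary included) hold a copy of the doc."""
    acks = 0
    for node in nodes:
        node.append(doc)  # replicate the document to this node...
        acks += 1         # ...and count its acknowledgement
    return acks >= w      # larger w => more nodes must confirm => slower

three_nodes = [[], [], []]
print(replicated_write({"x": 1}, three_nodes, w=2))  # -> True
print(replicated_write({"x": 2}, three_nodes, w=4))  # -> False (only 3 nodes)
```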

5. Heartbeat mechanism

Heartbeats detect failures and drive automatic elections and failover. By default, replica set members ping the other members every two seconds to check their health (the rs.config() output later in this article shows heartbeatIntervalMillis: 2000).

If a slave node fails, the set simply waits for it to come back online. If the master node fails, the replica set starts an election and chooses a new master, and the original master is demoted to a slave when it returns.
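Failure detection from heartbeat timestamps can be sketched like this (illustrative Python; the 10-second timeout matches the heartbeatTimeoutSecs default visible in the rs.config() output later in this article):

```python
def unhealthy(last_heartbeat, now, timeout=10.0):
    """A member is flagged as down when its last heartbeat reply is older
    than the timeout (settings.heartbeatTimeoutSecs, 10s by default)."""
    return (now - last_heartbeat) > timeout

# At now=100s: node B last answered at 85s, beyond the 10s timeout.
heartbeats = {"A": 99.0, "B": 85.0, "C": 98.5}
down = [m for m, ts in sorted(heartbeats.items()) if unhealthy(ts, now=100.0)]
print(down)  # -> ['B']
```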

6. Election mechanism

The master node is chosen by priority plus a Bully-style algorithm (judging whose data is newest). Until a master is elected, the entire cluster is read-only and cannot accept writes.

Every non-arbiter node has a priority setting in the range 0-100; the higher the value, the more likely the node is to become primary. The default is 1, and a node with priority 0 can never become primary.

An eligible slave node sends an election proposal to the other nodes, and each node that receives the proposal checks three conditions:

1. whether some other node in the replica set is already the master;

2. whether its own data is newer than that of the node requesting to become master;

3. whether any other node in the replica set has newer data than the requesting node.

If any of these conditions holds, the proposal is considered infeasible, and the requester withdraws from the election as soon as any node returns a rejection.
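The three veto checks can be written down directly (a simplified Python sketch of the logic described above, not the actual MongoDB election protocol):

```python
def vetoes(candidate, voter, members):
    """Return True if `voter` rejects `candidate`'s election proposal.
    Mirrors the three checks: an existing primary, the voter's own data
    being newer, or any member's data being newer than the candidate's
    (data freshness modeled as a last-operation timestamp)."""
    if any(m["primary"] for m in members if m is not candidate):
        return True                                    # condition 1
    if voter["last_ts"] > candidate["last_ts"]:
        return True                                    # condition 2
    if any(m["last_ts"] > candidate["last_ts"] for m in members):
        return True                                    # condition 3
    return False

a = {"name": "A", "primary": False, "last_ts": 10}
b = {"name": "B", "primary": False, "last_ts": 12}
members = [a, b]
print(vetoes(a, b, members))  # -> True: B's data is newer than A's
print(vetoes(b, a, members))  # -> False: B holds the newest data
```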

The election mechanism ultimately makes the highest-priority node the primary: even if a lower-priority node is elected first, it serves as primary only briefly, because the replica set keeps calling elections until the node with the highest priority becomes primary.

7. Data rollback

After a slave node becomes the master, its data is treated as the newest in the replica set, and conflicting operations on the other nodes are rolled back; that is, all nodes connect to the new master and resynchronize.

Each of those nodes checks its own oplog, finds the operations that the new master never performed, rolls them back, and then fetches the corresponding documents from the new master to replace its divergent copies.
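Locating the operations to roll back amounts to an oplog comparison, which can be sketched as (illustrative Python, not MongoDB's actual rollback code):

```python
def ops_to_rollback(local_oplog, new_primary_oplog):
    """Compare oplogs entry by entry: everything after the last common
    prefix exists only locally, so it must be rolled back and the data
    refetched from the new primary."""
    common = 0
    for mine, theirs in zip(local_oplog, new_primary_oplog):
        if mine != theirs:
            break
        common += 1
    return local_oplog[common:]

old_primary = [("ts1", "insert a"), ("ts2", "insert b"), ("ts3", "insert c")]
new_primary = [("ts1", "insert a"), ("ts2", "insert b"), ("ts4", "insert d")]
print(ops_to_rollback(old_primary, new_primary))  # -> [('ts3', 'insert c')]
```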

Replica set setup

Primary: 192.168.2.222 cjcos

Secondary: 192.168.2.187 rac1

Arbiter: 192.168.2.188 rac2

1 Add the configuration file on all three nodes

[root@cjcos conf]# pwd
/usr/local/mongodb/conf
[root@cjcos conf]# vim mongodb.conf
# replica set name
replSet=cjcmonset
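For reference, a complete minimal configuration file might look like the sketch below. Only replSet=cjcmonset comes from this article; every other path and value is an assumption and must match your own installation:

```ini
# /usr/local/mongodb/conf/mongodb.conf  (legacy ini-style mongod options)
dbpath=/usr/local/mongodb/data               # assumed data directory
logpath=/usr/local/mongodb/log/mongodb.log   # assumed log file
logappend=true
port=27017
fork=true                # run as a daemon
bind_ip=0.0.0.0          # listen on all interfaces (lab use only)
replSet=cjcmonset        # replica set name, identical on all three nodes
```

The replSet value must be the same on every member, or the nodes will refuse to join one set.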

2 Start the database on each node

[root@cjcos conf]# mongod --config /usr/local/mongodb/conf/mongodb.conf
[root@rac1 conf]# mongod --config /usr/local/mongodb/conf/mongodb.conf
[root@rac2 conf]# mongod --config /usr/local/mongodb/conf/mongodb.conf

3 Configure the replica set

3.1 Configure the primary

> use admin
switched to db admin
> config={_id:"cjcmonset", members:[
{_id:0, host:"192.168.2.222:27017", priority:1},
{_id:1, host:"192.168.2.187:27017", priority:1},
{_id:2, host:"192.168.2.188:27017", priority:1, arbiterOnly:true}]}
{
	"_id" : "cjcmonset",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.2.222:27017",
			"priority" : 1
		},
		{
			"_id" : 1,
			"host" : "192.168.2.187:27017",
			"priority" : 1
		},
		{
			"_id" : 2,
			"host" : "192.168.2.188:27017",
			"priority" : 1,
			"arbiterOnly" : true
		}
	]
}

Initialize the configuration:

> rs.initiate(config)
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1584862345, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1584862345, 1)
}

View the cluster configuration:

cjcmonset:PRIMARY> rs.config()
{
	"_id" : "cjcmonset",
	"version" : 1,
	"protocolVersion" : NumberLong(1),
	"writeConcernMajorityJournalDefault" : true,
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.2.222:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {
			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		},
		{
			"_id" : 1,
			"host" : "192.168.2.187:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {
			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		},
		{
			"_id" : 2,
			"host" : "192.168.2.188:27017",
			"arbiterOnly" : true,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 0,
			"tags" : {
			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		}
	],
	"settings" : {
		"chainingAllowed" : true,
		"heartbeatIntervalMillis" : 2000,
		"heartbeatTimeoutSecs" : 10,
		"electionTimeoutMillis" : 10000,
		"catchUpTimeoutMillis" : -1,
		"catchUpTakeoverDelayMillis" : 30000,
		"getLastErrorModes" : {
		},
		"getLastErrorDefaults" : {
			"w" : 1,
			"wtimeout" : 0
		},
		"replicaSetId" : ObjectId("5e77148837ae69b4ab9b4870")
	}
}

View the status:

cjcmonset:PRIMARY> rs.status()
{
	"set" : "cjcmonset",
	"date" : ISODate("2020-03-22T07:34:18.866Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1584862456, 1),
			"t" : NumberLong(1)
		},
		"lastCommittedWallTime" : ISODate("2020-03-22T07:34:16.862Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1584862456, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityWallTime" : ISODate("2020-03-22T07:34:16.862Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1584862456, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1584862456, 1),
			"t" : NumberLong(1)
		},
		"lastAppliedWallTime" : ISODate("2020-03-22T07:34:16.862Z"),
		"lastDurableWallTime" : ISODate("2020-03-22T07:34:16.862Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1584862416, 1),
	"lastStableCheckpointTimestamp" : Timestamp(1584862416, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2020-03-22T07:32:35.618Z"),
		"electionTerm" : NumberLong(1),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1584862345, 1),
			"t" : NumberLong(-1)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2020-03-22T07:32:36.851Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2020-03-22T07:32:37.889Z")
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.2.222:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 211,
			"optime" : {
				"ts" : Timestamp(1584862456, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-03-22T07:34:16Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1584862355, 1),
			"electionDate" : ISODate("2020-03-22T07:32:35Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.2.187:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 113,
			"optime" : {
				"ts" : Timestamp(1584862456, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1584862456, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-03-22T07:34:16Z"),
			"optimeDurableDate" : ISODate("2020-03-22T07:34:16Z"),
			"lastHeartbeat" : ISODate("2020-03-22T07:34:17.751Z"),
			"lastHeartbeatRecv" : ISODate("2020-03-22T07:34:18.157Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.2.222:27017",
			"syncSourceHost" : "192.168.2.222:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.2.188:27017",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 113,
			"lastHeartbeat" : ISODate("2020-03-22T07:34:17.750Z"),
			"lastHeartbeatRecv" : ISODate("2020-03-22T07:34:17.948Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1584862456, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1584862456, 1)
}

Data synchronization test

On the primary, create the test database cjcdb, create the test collection T01, and insert data:

cjcmonset:PRIMARY> use cjcdb
switched to db cjcdb
cjcmonset:PRIMARY> show collections
cjcmonset:PRIMARY> db.createCollection("T01")
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1584880911, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1584880911, 1)
}
cjcmonset:PRIMARY> db.t01.insert({"tname":"cjc"})
WriteResult({ "nInserted" : 1 })
cjcmonset:PRIMARY> db.t01.find()
{ "_id" : ObjectId("5e775d3d1cf1e6a03a41253c"), "tname" : "cjc" }

View the databases:

cjcmonset:PRIMARY> show dbs
admin   0.000GB
cjcdb   0.000GB
config  0.000GB
local   0.000GB

View the data on the secondary

On the 192.168.2.187 secondary:

cjcmonset:SECONDARY> show dbs
2020-03-22T20:43:47.784+0800 E QUERY [js] uncaught exception: Error: listDatabases failed: {
	"operationTime" : Timestamp(1584881019, 1),
	"ok" : 0,
	"errmsg" : "not master and slaveOk=false",
	"code" : 13435,
	"codeName" : "NotMasterNoSlaveOk",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1584881019, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs

By default a secondary rejects reads; enable them with rs.slaveOk():

cjcmonset:SECONDARY> rs.slaveOk()
cjcmonset:SECONDARY> show dbs
admin   0.000GB
cjcdb   0.000GB
config  0.000GB
local   0.000GB
cjcmonset:SECONDARY> use cjcdb
switched to db cjcdb
cjcmonset:SECONDARY> db.t01.find()
{ "_id" : ObjectId("5e775d3d1cf1e6a03a41253c"), "tname" : "cjc" }

The primary's log shows the following:

2020-03-22T20:41:51.247+0800 I STORAGE [conn1] createCollection: cjcdb.t01 with generated UUID: 7dd8d050-ac8b-4c6d-b46d-43a8c54b74a2 and options: {}
2020-03-22T20:41:51.541+0800 I INDEX [conn1] index build: done building index _id_ on ns cjcdb.t01
2020-03-22T20:41:51.542+0800 I COMMAND [conn1] command cjcdb.t01 appName: "MongoDB Shell" command: create { create: "T01", lsid: { id: UUID("3383ae30-677c-4fce-b244-162342b1a28e") }, $clusterTime: { clusterTime: Timestamp(1584880879, 1), signature: { hash: BinData(0, 000000000000000000000000000000000000000000) } }, $db: "cjcdb" } numYields:0 reslen:163 locks: { ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { r: 2, W: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl: { acquireCount: 2 } storage: {} protocol:op_msg 294ms
2020-03-22T20:42:37.293+0800 I SHARDING [conn1] Marking collection cjcdb.t01 as collection version:

On the 192.168.2.188 arbiter node:

When you connect to the arbiter, you will find that you can neither read nor write data, because it only participates in voting and holds no data:

cjcmonset:ARBITER> rs.slaveOk()
cjcmonset:ARBITER> show dbs
local  0.000GB

Welcome to follow my WeChat official account "IT Little Chen" and learn and grow together!
