
Detailed Explanation of the Principle and Management of MongoDB Replica Set Elections


MongoDB replica set nodes elect a primary among themselves; the election process is described below. The principle of MongoDB replication is the operation log (oplog), the equivalent of the binary log in MySQL: it records only the operations that change data. Replication synchronizes the primary node's oplog to the other, secondary nodes and replays the operations there.

Election principle: nodes come in three types, standard nodes, passive nodes, and arbiter nodes. (1) Only a standard node can be elected primary, and it has the right to vote. A passive node holds a complete copy of the data and can vote, but it can never become primary. An arbiter node stores no data at all; it only takes part in voting and can never become primary. (2) A standard node is distinguished from a passive node by its priority value: a high priority marks a standard node, a low one a passive node. (3) The election rule is that the highest vote count wins. Priority ranges from 0 to 1000 and acts like 0 to 1,000 extra votes. If the vote counts are tied, the node with the newest data wins.
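These roles can be read directly off a running set's configuration. Below is a minimal mongo-shell sketch; it assumes the four-node set built in the rest of this article is already running, and uses only the standard arbiterOnly and priority fields of the replica set configuration document:

true:PRIMARY> rs.conf().members.forEach(function(m) {
...   var role = m.arbiterOnly ? "arbiter" : (m.priority === 0 ? "passive" : "standard");
...   print(m.host + " -> " + role);
... })
192.168.195.137:27017 -> standard
192.168.195.137:27018 -> standard
192.168.195.137:27019 -> passive
192.168.195.137:27020 -> arbiter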

The following example demonstrates the election principle between the nodes of a MongoDB replica set. Install MongoDB online with yum on a CentOS 7 host and create multiple instances to deploy the replica set. First configure the network YUM source, pointing baseurl (the download path) at the yum repository provided by the MongoDB official website.

vim /etc/yum.repos.d/mongodb.repo
[mongodb-org]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/  # the path to download from
gpgcheck=1  # verify the rpm packages downloaded from this source
enabled=1  # enable this source
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc

Reload the yum source, then download and install MongoDB with yum:

yum list
yum -y install mongodb-org

Prepare four instances: two standard nodes, one passive node, and one arbiter node. Create the data and log file storage paths and grant permissions:

[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}
[root@localhost ~]# mkdir /data/logs
[root@localhost ~]# touch /data/logs/mongodb{2,3,4}.log
[root@localhost ~]# chmod 777 /data/logs/mongodb*
[root@localhost ~]# ll /data/logs/
total 0
-rwxrwxrwx. 1 root root 0 Sep 15 22:31 mongodb2.log
-rwxrwxrwx. 1 root root 0 Sep 15 22:31 mongodb3.log
-rwxrwxrwx. 1 root root 0 Sep 15 22:31 mongodb4.log

Edit the configuration files of the four MongoDB instances. First edit /etc/mongod.conf, the configuration file of the default instance installed by yum: specify the listening IP, keep the default port 27017, and enable the replication parameters, setting replSetName to a custom name (this example names the set "true"):

[root@localhost ~]# vim /etc/mongod.conf
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017  # default port
  bindIp: 0.0.0.0  # listen on any address

#security:

#operationProfiling:

replication:  # remove the leading "#" to enable these parameters
  replSetName: true  # set the replica set name

Copy the configuration file to the other instances; set the port parameter to 27018 in mongod2.conf, 27019 in mongod3.conf, and 27020 in mongod4.conf, and change the dbPath and log path parameters to the corresponding values.

cp /etc/mongod.conf /etc/mongod2.conf
cp /etc/mongod2.conf /etc/mongod3.conf
cp /etc/mongod2.conf /etc/mongod4.conf

Modify the configuration file of instance 2, /etc/mongod2.conf:

vim /etc/mongod2.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb2.log
storage:
  dbPath: /data/mongodb/mongodb2
  journal:
    enabled: true
net:
  port: 27018
  bindIp: 0.0.0.0  # listen on any address
#security:
#operationProfiling:
replication:
  replSetName: true

Modify the configuration file of instance 3, /etc/mongod3.conf:

vim /etc/mongod3.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb3.log
storage:
  dbPath: /data/mongodb/mongodb3
  journal:
    enabled: true
net:
  port: 27019
  bindIp: 0.0.0.0  # listen on any address
#security:
#operationProfiling:
replication:
  replSetName: true

Modify the configuration file of instance 4, /etc/mongod4.conf:

vim /etc/mongod4.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb4.log
storage:
  dbPath: /data/mongodb/mongodb4
  journal:
    enabled: true
net:
  port: 27020
  bindIp: 0.0.0.0  # listen on any address
#security:
#operationProfiling:
replication:
  replSetName: true

Start each MongoDB instance:

[root@localhost ~]# mongod -f /etc/mongod.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93576

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod2.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93608

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod3.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93636

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod4.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 93664

Child process started successfully, parent exiting

[root@localhost ~]# netstat -antp | grep mongod  # view the status of the mongod processes
tcp  0  0 0.0.0.0:27019  0.0.0.0:*  LISTEN  93636/mongod
tcp  0  0 0.0.0.0:27020  0.0.0.0:*  LISTEN  93664/mongod
tcp  0  0 0.0.0.0:27017  0.0.0.0:*  LISTEN  93576/mongod
tcp  0  0 0.0.0.0:27018  0.0.0.0:*  LISTEN  93608/mongod

Configure the priorities of the replica set. Log in to the default instance with mongo and configure the four-node replica set: two standard nodes with priority 100 on ports 27017 and 27018, one passive node with priority 0 on port 27019, and one arbiter node on port 27020.

[root@localhost ~]# mongo
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.7
> cfg={"_id":"true","members":[{"_id":0,"host":"192.168.195.137:27017","priority":100},{"_id":1,"host":"192.168.195.137:27018","priority":100},{"_id":2,"host":"192.168.195.137:27019","priority":0},{"_id":3,"host":"192.168.195.137:27020","arbiterOnly":true}]}

{
  "_id" : "true",
  "members" : [
    {
      "_id" : 0,
      "host" : "192.168.195.137:27017",  # standard node 1, priority 100
      "priority" : 100
    },
    {
      "_id" : 1,
      "host" : "192.168.195.137:27018",  # standard node 2, priority 100
      "priority" : 100
    },
    {
      "_id" : 2,
      "host" : "192.168.195.137:27019",  # passive node, priority 0
      "priority" : 0
    },
    {
      "_id" : 3,
      "host" : "192.168.195.137:27020",  # arbiter node
      "arbiterOnly" : true
    }
  ]
}

> rs.initiate(cfg)  # initialize the configuration

{
  "ok" : 1,
  "operationTime" : Timestamp(1537077618, 1),
  "$clusterTime" : {
    "clusterTime" : Timestamp(1537077618, 1),
    "signature" : {
      "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      "keyId" : NumberLong(0)
    }
  }
}

Use the rs.isMaster() command to view the role of each node:

true:PRIMARY> rs.isMaster()
{
  "hosts" : [
    "192.168.195.137:27017",  # standard nodes
    "192.168.195.137:27018"
  ],
  "passives" : [
    "192.168.195.137:27019"  # passive node
  ],
  "arbiters" : [
    "192.168.195.137:27020"  # arbiter node
  ],
  "setName" : "true",
  "setVersion" : 1,
  "ismaster" : true,
  "secondary" : false,
  "primary" : "192.168.195.137:27017",
  "me" : "192.168.195.137:27017",
  ...

Perform insert, update, delete, and query operations on the primary node:

true:PRIMARY> use kfc
switched to db kfc
true:PRIMARY> db.info.insert({"id":1,"name":"tom"})
WriteResult({ "nInserted" : 1 })
true:PRIMARY> db.info.insert({"id":2,"name":"jack"})
WriteResult({ "nInserted" : 1 })
true:PRIMARY> db.info.find()
{ "_id" : ObjectId("5b9df3ff690f4b20fa330b18"), "id" : 1, "name" : "tom" }
{ "_id" : ObjectId("5b9df40f690f4b20fa330b19"), "id" : 2, "name" : "jack" }
true:PRIMARY> db.info.update({"id":2},{$set:{"name":"lucy"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
true:PRIMARY> db.info.remove({"id":1})
WriteResult({ "nRemoved" : 1 })

All operations on the primary node are recorded in the oplog, which can be viewed in the oplog.rs collection of the built-in local database:

true:PRIMARY> use local
switched to db local
true:PRIMARY> show tables
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
system.rollback.id
true:PRIMARY> db.oplog.rs.find()  # view the log of all operations
...  # the log contains the operations performed just now

{"ts": Timestamp (1537078271, 2), "t": NumberLong (1), "h": NumberLong ("- 5529983416084904509"), "v": 2, "op": "c", "ns": "kfc.$cmd", "ui": UUID ("2de2277f-df99-4fb2-96ef-164b59dfc768"), "wall": ISODate ("201809-16T06:11:11.072Z") "o": {"create": "info", "idIndex": {"v": 2, "key": {"_ id": 1}, "name": "_ id_", "ns": "kfc.info"}

{"ts": Timestamp (1537078271, 3), "t": NumberLong (1), "h": NumberLong ("- 1436300260967761649"), "v": 2, "op": "I", "ns": "kfc.info", "ui": UUID ("2de2277f-df99-4fb2-96ef-164b59dfc768"), "wall": ISODate ("201809-16T06:11:11.072Z") "o": {"_ id": ObjectId ("5b9df3ff690f4b20fa330b18"), "id": 1, "name": "tom"}}

{"ts": Timestamp (1537078287, 1), "t": NumberLong (1), "h": NumberLong ("9052955074674132871"), "v": 2, "op": "I", "ns": "kfc.info", "ui": UUID ("2de2277f-df99-4fb2-96ef-164b59dfc768"), "wall": ISODate ("201809-16T06:11:27.562Z") "o": {"_ id": ObjectId ("5b9df40f690f4b20fa330b19"), "id": 2, "name": "jack"}}

.

{"ts": Timestamp (1537078543, 1), "t": NumberLong (1), "h": NumberLong ("- 5120962218610090442"), "v": 2, "op": "u", "ns": "kfc.info", "ui": UUID ("2de2277f-df99-4fb2-96ef-164b59dfc768"), "O2": {"_ id": ObjectId ("5b9df40f690f4b20fa330b19")} "wall": ISODate ("2018-09-16T06:15:43.494Z"), "o": {"$v": 1, "$set": {"name": "lucy"}

Simulate a failure of standard node 1. If the primary node fails, the other standard node will be elected as the new primary.

[root@localhost ~]# mongod -f /etc/mongod.conf --shutdown  # shut down the primary node's service
Killing process with pid: 52986
[root@localhost ~]# mongo --port 27018  # log in to the other standard node on port 27018
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.6.7
true:PRIMARY> rs.status()  # check the status; this standard node has been elected as the new primary

"members": [

{

"_ id": 0

"name": "192.168.195.137VR 27017"

"health": 0, # Health value is 0, which means that port 27017 is down.

"state": 8

"stateStr": "(not reachable/healthy)

"uptime": 0

"optime": {

"ts": Timestamp (0,0)

"t": NumberLong (- 1)

}

"optimeDurable": {

"ts": Timestamp (0,0)

"t": NumberLong (- 1)

}

{

"_ id": 1

"name": "192.168.195.137VR 27018"

"health": 1

"state": 1

"stateStr": "PRIMARY", # at this time another standard node is elected as the master node with port 27018

"uptime": 3192

"optime": {

Ts: Timestamp (1537080552, 1)

"t": NumberLong (2)

}
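The address of the new primary can also be read directly from the shell (a quick sketch):

true:PRIMARY> rs.isMaster().primary
192.168.195.137:27018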

Simulate a failure of standard node 2 as well, shutting down both standard node services, to see whether the passive node will be elected primary.

[root@localhost ~]# mongod -f /etc/mongod2.conf --shutdown  # shut down the second standard node's service
Killing process with pid: 53018
[root@localhost ~]# mongo --port 27019  # enter the third instance, the passive node
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.6.7
true:SECONDARY> rs.status()  # view the replica set status information

...
"members" : [
  {
    "_id" : 0,
    "name" : "192.168.195.137:27017",
    "health" : 0,
    "state" : 8,
    "stateStr" : "(not reachable/healthy)",
    "uptime" : 0,
    "optime" : {
      "ts" : Timestamp(0, 0),
      "t" : NumberLong(-1)
    },
    "optimeDurable" : {
      "ts" : Timestamp(0, 0),
      "t" : NumberLong(-1)
    },
    ...
  },
  {
    "_id" : 1,
    "name" : "192.168.195.137:27018",
    "health" : 0,
    "state" : 8,
    "stateStr" : "(not reachable/healthy)",
    "uptime" : 0,
    "optime" : {
      "ts" : Timestamp(0, 0),
      "t" : NumberLong(-1)
    },
    "optimeDurable" : {
      "ts" : Timestamp(0, 0),
      "t" : NumberLong(-1)
    },
    ...
  },
  {
    "_id" : 2,
    "name" : "192.168.195.137:27019",
    "health" : 1,
    "state" : 2,
    "stateStr" : "SECONDARY",  # the passive node has not been elected primary: a passive node can never become the active node
    "uptime" : 3972,
    "optime" : {
      "ts" : Timestamp(1537081303, 1),
      "t" : NumberLong(2)
    },
    ...
  },
  {
    "_id" : 3,
    "name" : "192.168.195.137:27020",
    "health" : 1,
    "state" : 7,
    "stateStr" : "ARBITER",
    "uptime" : 3722

In addition, which node becomes primary can be influenced manually through the start order of the standard nodes: by default, the one started first becomes primary. Allowing reads on secondaries: by default, the secondary nodes of a MongoDB replica set cannot serve read operations; the rs.slaveOk() command enables reads on a secondary. Restart the two standard nodes:

[root@localhost ~]# mongod -f /etc/mongod.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 54685

Child process started successfully, parent exiting

[root@localhost ~]# mongod -f /etc/mongod2.conf

About to fork child process, waiting until server is ready for connections.

Forked process: 54773

Child process started successfully, parent exiting

Log in to one of the secondary nodes of the replica set and configure it to allow reads:

[root@localhost ~]# mongo --port 27018
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.6.7
true:SECONDARY> rs.slaveOk()  # allow reads on this secondary; they are refused by default
true:SECONDARY> show dbs  # the read now succeeds
admin   0.000GB
config  0.000GB
kfc     0.000GB
local   0.000GB
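The replicated business data can be read here too. A short sketch using the kfc database written on the primary earlier:

true:SECONDARY> use kfc
switched to db kfc
true:SECONDARY> db.info.find()
{ "_id" : ObjectId("5b9df40f690f4b20fa330b19"), "id" : 2, "name" : "lucy" }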

To view replication status information, use the rs.printReplicationInfo() and rs.printSlaveReplicationInfo() commands:

true:SECONDARY> rs.printReplicationInfo()  # view the usable size of the log file; by default the oplog takes up 5% of the available disk space on 64-bit instances

configured oplog size: 990MB
log length start to end: 5033secs (1.4hrs)
oplog first event time: Sun Sep 16 2018 14:00:18 GMT+0800 (CST)
oplog last event time: Sun Sep 16 2018 15:24:11 GMT+0800 (CST)
now: Sun Sep 16 2018 15:24:13 GMT+0800 (CST)

true:SECONDARY> rs.printSlaveReplicationInfo()  # view each node's sync status
source: 192.168.195.137:27018
  syncedTo: Sun Sep 16 2018 15:24:21 GMT+0800 (CST)
  0 secs (0 hrs) behind the primary
source: 192.168.195.137:27019
  syncedTo: Sun Sep 16 2018 15:24:21 GMT+0800 (CST)
  0 secs (0 hrs) behind the primary
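The same lag information can be computed programmatically: every data-bearing member in rs.status() carries an optimeDate. A sketch that prints each member's lag behind the newest optime (state 7 is ARBITER, which holds no data and is skipped):

true:SECONDARY> var ms = rs.status().members.filter(function(m) { return m.state !== 7; })
true:SECONDARY> var newest = Math.max.apply(null, ms.map(function(m) { return m.optimeDate.getTime(); }))
true:SECONDARY> ms.forEach(function(m) { print(m.name + " " + (newest - m.optimeDate.getTime()) / 1000 + " secs behind"); })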

Notice that the arbiter node is not listed: it holds no replicated data. Changing the oplog size: oplog is shorthand for operations log, stored in the local database. New operations automatically overwrite the oldest ones, so the oplog never exceeds its preset size; by default it takes up 5% of the available disk space on 64-bit instances. During replication, the primary applies business operations to the database and records them in its oplog; the secondaries copy the oplog and apply the operations, all asynchronously. If a secondary lags so far behind that the oplog wraps around before the secondary has applied those entries, replication stops and the secondary must be fully resynchronized. To avoid this, make sure the primary's oplog is large enough to hold operation records for a sufficiently long period. (1) Shut down mongodb:

true:PRIMARY> use admin
switched to db admin
true:PRIMARY> db.shutdownServer()

(2) Modify the configuration file: comment out the replication-related settings and change the port number, so that the instance temporarily leaves the replica set and runs standalone.

vim /etc/mongod.conf
port: 27027
#replication:
#  replSetName: true

(3) Start the instance in standalone mode and back up the existing oplog:

mongod -f /etc/mongod.conf
mongodump --port=27027 -d local -c oplog.rs -o /opt/  # the port matches the standalone port set in step (2)

(4) Enter the instance, drop the original oplog.rs, recreate it with the db.runCommand command, and change the oplog size:

[root@localhost logs]# mongo --port 27027
> use local
> db.oplog.rs.drop()
> db.runCommand({ create: "oplog.rs", capped: true, size: (2 * 1024 * 1024 * 1024) })

(5) Shut down the mongodb service, change the configuration file entries back to their original settings, and add the setting oplogSizeMB: 2048:

> use admin
> db.shutdownServer()
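Once the instance has been restarted with the restored port and replication settings, the new size can be verified. A quick sketch; stats().maxSize is reported in bytes, so dividing by 1024 * 1024 should yield 2048 MB here:

true:PRIMARY> use local
switched to db local
true:PRIMARY> db.oplog.rs.stats().maxSize / (1024 * 1024)
2048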
