
How to add new nodes in MongoDB with a consistent full backup plus oplogs

2025-02-25 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 report

This article explains in detail how to add a new node to a MongoDB replica set using a consistent full backup plus oplogs. It is shared as a practical reference; I hope you will have a good understanding of the relevant knowledge after reading it.

Consistent backup + oplogs

If you add a new node directly with rs.add, you must make sure the primary's oplog is not overwritten before the initial sync finishes, and you also need to evaluate the traffic impact of that full sync. For these reasons, a consistent backup plus oplogs is the usual way to add a secondary node to a replica set.

When the amount of data is large, the following method is used:
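To see why plain rs.add can be risky, it helps to compare the oplog retention window against the expected initial-sync time. The sketch below is illustrative only (it is not MongoDB code, and all the numbers are hypothetical); in practice you would read the real values from rs.printReplicationInfo() and your own monitoring.

```python
# Rough feasibility check (illustrative): can a new node finish its initial
# sync before the primary's capped oplog rolls over? If not, the sync source
# no longer has the operations the new node needs, and it must start over.

def oplog_window_hours(oplog_size_mb: float, oplog_write_mb_per_hour: float) -> float:
    """Hours of history the capped oplog holds at the current write rate."""
    return oplog_size_mb / oplog_write_mb_per_hour

def initial_sync_hours(data_size_gb: float, copy_rate_gb_per_hour: float) -> float:
    """Rough time to copy all existing data to the new node."""
    return data_size_gb / copy_rate_gb_per_hour

window = oplog_window_hours(51200, 2048)  # hypothetical: 50 GB oplog, ~2 GB/h of writes
sync = initial_sync_hours(800, 20)        # hypothetical: 800 GB of data at ~20 GB/h

print(f"oplog window: {window:.1f} h, initial sync: {sync:.1f} h")
print("rs.add() alone is", "feasible" if sync < window else "risky: use backup + oplogs")
```

With these sample numbers the initial sync (40 h) would outlive the oplog window (25 h), which is exactly the situation where the backup-plus-oplogs procedure below is preferred.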

Environment description

Primary node: 10.9.21.114:27017

Two slave nodes

10.9.21.178:27017

10.9.21.179:27017

Goal: add a third slave node, 10.9.21.115:27017, using a consistent snapshot

A brief description of the overall steps:

1) perform a consistent snapshot backup on the master node or on one of the slave nodes

2) restore the consistent snapshot on the new node: restore only the data for now, not the oplog

3) initialize the oplog.rs collection and restore the oplog records. Restoring the oplog records tells the node where to start reading the primary's oplog. If the oplog were empty, an initial sync would be triggered instead, and during initial sync all existing data is deleted, so the oplog must be restored

4) initialize two other collections in the local database: replset.election (records the most recent election state) and system.replset (stores the replica set member information; it is synchronized from the primary when the node is added, so it does not have to be restored)

5) modify the database configuration and restart the database (before this step the instance runs without authentication and without the replica set configuration)

6) use rs.add("HOST_NAME:PORT") on the primary to add the slave node to the cluster

7) use rs.status() to watch the synchronization state and verify the integrity and consistency of the data
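The key idea in step 3 can be sketched as follows. This is an illustrative model in Python, not MongoDB internals: after the restore, the new member looks at the newest timestamp in its local oplog.rs and resumes pulling operations from its sync source strictly after that point; with an empty oplog it would instead fall back to a full initial sync. The entries and timestamps below are hypothetical.

```python
# Illustrative model of how a restored oplog gives the node a resume point.
restored_oplog = [  # hypothetical entries already on the new node: (ts, op)
    (1576247800, {"op": "i", "ns": "liuwenhe.hezi", "o": {"id": 1}}),
    (1576247805, {"op": "i", "ns": "liuwenhe.hezi", "o": {"id": 2}}),
]
primary_oplog = restored_oplog + [  # primary has two newer operations
    (1576247810, {"op": "i", "ns": "liuwenhe.hezi", "o": {"id": 3}}),
    (1576247815, {"op": "i", "ns": "liuwenhe.hezi", "o": {"id": 4}}),
]

def resume_point(local_oplog):
    """Newest local oplog timestamp; None means 'empty oplog -> initial sync'."""
    if not local_oplog:
        return None  # initial sync would wipe and re-copy everything
    return max(ts for ts, _ in local_oplog)

start = resume_point(restored_oplog)
to_apply = [(ts, op) for ts, op in primary_oplog if ts > start]
print(f"resume after ts {start}, {len(to_apply)} ops left to apply")
```

This is why restoring oplog.bson into local.oplog.rs matters: it gives the node a non-empty resume point, so only the operations that happened after the backup need to be replayed.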

One: back up the data on the master node or on one of the other slave nodes. I chose a slave node:

First, run inserts on the master node to simulate live business traffic; this also provides the baseline for the final verification that the new node was added correctly:

MongoDB Enterprise liuhe_rs:PRIMARY > use liuwenhe
switched to db liuwenhe
MongoDB Enterprise liuhe_rs:PRIMARY > for (var i = 0; i < 100000; i++) { db.hezi.insert({id: i}); }

Run the backup at the same time, as shown below. By the time the backup finished, the insert loop above had still not completed:

[mongod@beijing-fuli-hadoop-04 ~]$ mongodump -h 10.9.21.179 -u liuwenhe -p liuwenhe --authenticationDatabase admin --oplog -o /data/mongodb/backup/

Two: scp the backup files to 10.9.21.115 and restore them there:

scp -r /data/mongodb/backup mongod@10.9.21.115:/data/mongodb/backup/

Three: start the third node as a standalone instance.

Note: the following replica set and security parameters need to be commented out:

vi /etc/mongod.conf
#replication:
#  oplogSizeMB: 51200
#  replSetName: liuhe_rs
#security:
#  keyFile: /data/mongodb/config/mongodb.key
#  authorization: enabled

[mongod@beijing-fuli-hadoop-03 /]$ /usr/bin/mongodb/bin/mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 60522
child process started successfully, parent exiting

Four: perform the consistent snapshot restore on 10.9.21.115:

[mongod@beijing-fuli-hadoop-03 /data/mongodb/backup/backup]$ mongorestore --oplogReplay --dir /data/mongodb/backup/backup/

Five: create the oplog.rs collection and initialize its size:

MongoDB Enterprise > use local
switched to db local

Note: the local and config databases are not included in the consistent full backup, so the local database of the freshly started mongod instance contains only the startup_log collection:

MongoDB Enterprise > show collections

startup_log

MongoDB Enterprise > db.createCollection("oplog.rs", {"capped": true, "size": 100000000})

{ "ok" : 1 }

Note: capped: true creates a capped (fixed-size) collection that is overwritten cyclically: once it reaches its maximum size, the oldest documents are automatically overwritten by new ones. The size is specified in bytes.
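The circular-overwrite behavior of a capped collection can be demonstrated with a simple analogy. The Python sketch below uses a fixed-size deque as a stand-in for local.oplog.rs (it is only an analogy, not MongoDB behavior verbatim; a real capped collection is bounded by bytes, not by document count):

```python
# Analogy for a capped collection: a fixed-size buffer that silently drops
# the oldest entries once full -- like local.oplog.rs does with old operations.
from collections import deque

oplog = deque(maxlen=3)            # "capped" at 3 documents for the demo
for i in range(5):                 # write 5 documents into a 3-slot buffer
    oplog.append({"op": "i", "id": i})

print([d["id"] for d in oplog])    # the two oldest entries were overwritten
```

This is also why the oplog window matters in the earlier discussion: once an operation is overwritten, no lagging node can fetch it any more.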

Six: restore the oplog.rs data from the consistent backup onto 10.9.21.115:

[mongod@beijing-fuli-hadoop-03 /data/mongodb/backup/backup]$ mongorestore -d local -c oplog.rs /data/mongodb/backup/backup/oplog.bson

2019-12-13T22:29:33.370+0800 checking for collection data in /data/mongodb/backup/backup/oplog.bson

2019-12-13T22:29:33.371+0800 restoring local.oplog.rs from /data/mongodb/backup/backup/oplog.bson

2019-12-13T22:29:33.433+0800 no indexes to restore

2019-12-13T22:29:33.433+0800 finished restoring local.oplog.rs (1378 documents)

2019-12-13T22:29:33.433+0800 done

Seven: query the replset.election collection on the primary node, then store the same document on the 10.9.21.115 node.

Actions on the primary node (21.114):

MongoDB Enterprise liuhe_rs:PRIMARY > use local

switched to db local

MongoDB Enterprise liuhe_rs:PRIMARY > db.replset.election.find()

{ "_id" : ObjectId("5dcfb9112670e3e338d03747"), "term" : NumberLong(7), "candidateIndex" : NumberLong(2) }

On 10.9.21.115, save the replset.election document taken from the primary node (21.114):

MongoDB Enterprise > use local

switched to db local

MongoDB Enterprise > db.replset.election.save({"_id": ObjectId("5dcfb9112670e3e338d03747"), "term": NumberLong(7), "candidateIndex": NumberLong(2)})

WriteResult({
	"nMatched" : 0,
	"nUpserted" : 1,
	"nModified" : 0,
	"_id" : ObjectId("5dcfb9112670e3e338d03747")
})

Eight: shut down the third slave node:

MongoDB Enterprise > use admin

switched to db admin

MongoDB Enterprise > db.shutdownServer()

2019-12-13T22:36:36.935+0800 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:27017 - HostUnreachable: Connection closed by peer

Server should be down...

Nine: restart mongod with the replica set configuration.

Modify the configuration of the third slave node by uncommenting the parameters that were commented out earlier:

vi /etc/mongod.conf

replication:
  oplogSizeMB: 51200
  replSetName: liuhe_rs
security:
  keyFile: /data/mongodb/config/mongodb.key
  authorization: enabled

[mongod@beijing-fuli-hadoop-03 /]$ /usr/bin/mongodb/bin/mongod -f /etc/mongod.conf

about to fork child process, waiting until server is ready for connections.

forked process: 64136

child process started successfully, parent exiting

Ten: perform the add-node operation on the master node:

MongoDB Enterprise liuhe_rs:PRIMARY > rs.add("10.9.21.115:27017")

{
	"ok" : 1,
	"operationTime" : Timestamp(1576247871, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1576247871, 1),
		"signature" : {
			"hash" : BinData(0, "p3g5oVNzyiHogsBYfSCpzrBpIks="),
			"keyId" : NumberLong("6758082305262092289")
		}
	}
}

Eleven: verify that the addition succeeded:

1. rs.status() shows the state of the newly joined 21.115 node:

MongoDB Enterprise liuhe_rs:PRIMARY > rs.status()

2. rs.printSlaveReplicationInfo() checks the replication status; you can see the information for 21.115 as follows:

MongoDB Enterprise liuhe_rs:PRIMARY > rs.printSlaveReplicationInfo()

source: 10.9.21.178:27017
	syncedTo: Fri Dec 13 2019 22:39:59 GMT+0800 (CST)
	0 secs (0 hrs) behind the primary

source: 10.9.21.179:27017
	syncedTo: Fri Dec 13 2019 22:39:59 GMT+0800 (CST)
	0 secs (0 hrs) behind the primary

source: 10.9.21.115:27017
	syncedTo: Fri Dec 13 2019 22:39:59 GMT+0800 (CST)
	0 secs (0 hrs) behind the primary

3. Check the document count of the hezi collection on 21.115. This is the most critical verification, because writes were still running while the consistent backup was taken. If the counts match, it proves that adding the 21.115 node worked and that data is synchronizing correctly:

MongoDB Enterprise liuhe_rs:SECONDARY > rs.slaveOk()

MongoDB Enterprise liuhe_rs:SECONDARY > use liuwenhe

switched to db liuwenhe

MongoDB Enterprise liuhe_rs:SECONDARY > db.hezi.count()

221323

Check the document count of the hezi collection on 21.114:

MongoDB Enterprise liuhe_rs:PRIMARY > use liuwenhe

switched to db liuwenhe

MongoDB Enterprise liuhe_rs:PRIMARY > db.hezi.count()

221323

The counts are identical, which confirms that the work of adding the node is complete.

That is all for how MongoDB uses a consistent full backup plus oplogs to add new nodes. I hope this content was helpful. If you think the article is good, feel free to share it for more people to see.
