
How to use oplog to recover time points in mongodb

2025-04-06 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report --

Many newcomers are unclear about how to use the oplog for point-in-time recovery in MongoDB. To help with this, the following article walks through the process in detail, step by step. I hope you get something out of it.

1. First, create the hezi collection and insert 100,000 documents

MongoDB Enterprise liuhe_rs:PRIMARY> use liuwenhe
MongoDB Enterprise liuhe_rs:PRIMARY> for (var i = 0; i < 100000; i++) { db.hezi.insert({id: i}); }
MongoDB Enterprise liuhe_rs:PRIMARY> db.hezi.count()
100000

2. Perform a backup with the --oplog parameter; this generates an oplog.bson file under the backup path

[mongod@beijing-fuli-hadoop-01 backup]$ mongodump -h 10.9.21.179 -u liuwenhe -p liuwenhe --authenticationDatabase admin --oplog -o /data/mongodb/backup/

Suppose the backup starts at time point A and ends at time point B.

3. After the backup completes, suppose the business inserts more data:

MongoDB Enterprise liuhe_rs:PRIMARY> db.hezi.insert({id: 100001})
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise liuhe_rs:PRIMARY> db.hezi.count()
100001

4. Simulate the erroneous deletion operation at time C

MongoDB Enterprise liuhe_rs:PRIMARY> db.hezi.remove({})
WriteResult({ "nRemoved" : 100001 })
MongoDB Enterprise liuhe_rs:PRIMARY> db.hezi.count()
0

5. So how do we restore this collection?

In this case, stop the business first: with heavy write traffic the oplog may roll over and overwrite the entries you need. If the oplog is overwritten, you will not be able to recover the db.hezi.insert({id: 100001}) document, because it was inserted after the backup completed. So dump the oplog.rs collection immediately to make point-in-time recovery possible.

Therefore, there are two situations as follows:

Situation 1:

If the oplog.rs entries from time point A (backup start) to time point C (the erroneous delete) have not been overwritten, then all of the data can certainly be recovered: simply replace the oplog.bson file from your earlier full backup with the oplog.rs you dumped afterwards.

Case 2 (itself split into two sub-cases):

If the oplog.rs entries from backup start A to the erroneous delete at C have been overwritten, you can only restore the full backup, then salvage whatever operations on this collection still remain in oplog.rs and apply them; whether all the data comes back is a matter of luck. This splits into two sub-cases:

Sub-case 1: if the oplog.rs entries between backup end B and the erroneous delete C are not overwritten, then all the data can be recovered.

Sub-case 2: if the oplog.rs entries between backup end B and the erroneous delete C are overwritten, you can only hope that the operations on this collection themselves were not overwritten, in which case all the data can still be recovered.

How do you tell whether the oplog has been overwritten?

First, on the machine you backed up from, you can see the oplog's start and end times, though this is not necessarily precise:

MongoDB Enterprise liuhe_rs:PRIMARY> rs.printReplicationInfo()
configured oplog size:   51200MB
log length start to end: 1299939secs (361.09hrs)
oplog first event time:  Sat Nov 16 2019 16:53:17 GMT+0800 (CST)
oplog last event time:   Sun Dec 01 2019 17:58:56 GMT+0800 (CST)
now:                     Sun Dec 01 2019 17:59:02 GMT+0800 (CST)
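One way to make that judgment concrete (a sketch, not part of the original procedure; GNU date is assumed): convert the "oplog first event time" and your backup start time to epoch seconds and compare them.

```shell
# Hypothetical example values based on the output above (converted to UTC);
# substitute your own times. Time point A here is illustrative only.
oplog_first=$(date -u -d "2019-11-16 08:53:17" +%s)   # oplog first event time
backup_start=$(date -u -d "2019-11-29 10:00:00" +%s)  # time point A (example)

# If the oplog's first event is no later than the backup start,
# the oplog window still covers everything since A.
if [ "$oplog_first" -le "$backup_start" ]; then
    echo "oplog still covers time point A"
else
    echo "oplog has rolled past time point A"
fi
```

With these example values the first branch is taken, since the oplog window opens well before the backup started.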

It is better, though, to judge from oplog.rs.bson, the file produced when you separately backed up the oplog.rs collection.

Format oplog.rs.bson with bsondump so the timestamps become readable:

[mongod@beijing-fuli-hadoop-04 local]$ bsondump oplog.rs.bson > 12

Check the start time of the oplog in the backup

[mongod@beijing-fuli-hadoop-04 local]$ cat 12 | head
{"ts":{"$timestamp":{"t":1575040546,"i":1}},"t":{"$numberLong":"7"},"h":{"$numberLong":"-837828538757564165709"},"v":2,"op":"n","ns":"","wall":{"$date":"2019-11-29T15:15:46.661Z"},"o":{"msg":"Reconfig set","version":26}}

Check the end time of the oplog in the backup

[mongod@beijing-fuli-hadoop-04 local]$ cat 12 | tail -n 1
{"ts":{"$timestamp":{"t":1575040546,"i":1}},"t":{"$numberLong":"7"},"h":{"$numberLong":"-837828538757564165709"},"v":2,"op":"n","ns":"","wall":{"$date":"2019-11-29T15:15:46.661Z"},"o":{"msg":"Reconfig set","version":26}}

Check the start and end times in the oplog.bson generated by the full backup in the same way, and you can tell whether the oplog has been overwritten.

Case 1:

1) Dump the contents of the oplog.rs collection:

[mongod@beijing-fuli-hadoop-01 local]$ mongodump -h 10.9.21.179 -u liuwenhe -p liuwenhe --authenticationDatabase admin -d local -c oplog.rs -o /data/mongodb/liuwenhe/

[mongod@beijing-fuli-hadoop-01 local]$ pwd
/data/mongodb/liuwenhe/local

[mongod@beijing-fuli-hadoop-01 local]$ ll
total 60308

-rw-rw-r-- 1 mongod mongod 61751065 Nov 29 19:42 oplog.rs.bson

-rw-rw-r-- 1 mongod mongod 124 Nov 29 19:42 oplog.rs.metadata.json

2) Next, find the time at which deletion of the hezi collection began. When a collection's documents are removed, they are deleted one by one in the oplog; you can see this with bsondump:

[mongod@beijing-fuli-hadoop-01 local]$ bsondump oplog.rs.bson | grep "\"op\":\"d\"" | grep liuwenhe.hezi | head
{"ts":{"$timestamp":{"t":1575025894,"i":1}},"t":{"$numberLong":"6"},"h":{"$numberLong":"2211936654694340159"},"v":2,"op":"d","ns":"liuwenhe.hezi","ui":{"$binary":"GG4MuSZBQpm4anq5TBp00Q==","$type":"04"},"wall":{"$date":"2019-11-29T11:11:34.499Z"},"o":{"_id":{"$oid":"5de0fb7cb54dce214bb40c7b"}}}
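The "t" and "i" values needed later for --oplogLimit can be pulled out of that entry with a small helper (a sketch, not from the original article; the sample line below is simplified from the entry found above, and in practice you would pipe in the first line of the bsondump|grep output):

```shell
# Build the --oplogLimit value ("t:i") from the first delete entry's $timestamp.
entry='{"ts":{"$timestamp":{"t":1575025894,"i":1}},"op":"d","ns":"liuwenhe.hezi"}'
t=$(echo "$entry" | sed -n 's/.*"\$timestamp":{"t":\([0-9]*\),"i":\([0-9]*\)}.*/\1/p')
i=$(echo "$entry" | sed -n 's/.*"\$timestamp":{"t":\([0-9]*\),"i":\([0-9]*\)}.*/\2/p')
echo "--oplogLimit \"$t:$i\""
# prints: --oplogLimit "1575025894:1"
```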

3) The oplog.rs.bson file backed up above needs to replace the oplog.bson file in the full backup. The oplog.rs.bson here is effectively the oplog.bson we need, so rename it and put it in the right place:

[mongod@beijing-fuli-hadoop-03 /data/mongodb/backup/backup]$ rm -f oplog.bson

[mongod@beijing-fuli-hadoop-03 /data/mongodb/backup/backup]$ mv oplog.rs.bson oplog.bson

Here oplog.rs.bson is the oplog.rs collection you backed up separately, and oplog.bson is the file generated by the --oplog parameter during the full backup.

4) Restore the previous full backup on another idle instance.

First, copy the backup from 21.114 to a directory on 21.115:

scp -r backup/ mongod@10.9.21.115:/data/mongodb/backup/

Then restore:

[mongod@beijing-fuli-hadoop-03 /data]$ mongorestore -h 10.9.21.115 -u liuwenhe -p liuwenhe --oplogReplay --oplogLimit "1575025894:1" --authenticationDatabase admin --dir /data/mongodb/backup/backup/

Here 1575025894 is the "t" in $timestamp and 1 is the "i". With this setting, the oplog is replayed up to, but not including, that point in time, so the first delete statement and everything after it are skipped and the database is left in its pre-disaster state.
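As a cross-check (assuming GNU date), the epoch seconds in the --oplogLimit value can be converted back to a UTC wall-clock time, which should match the "wall" field of the delete entry found earlier:

```shell
# 1575025894 is the "t" from the $timestamp of the first delete entry.
date -u -d @1575025894 '+%Y-%m-%dT%H:%M:%SZ'
# prints: 2019-11-29T11:11:34Z (the delete entry's wall time, minus milliseconds)
```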

Verify that it has indeed been restored:

MongoDB Enterprise > db.hezi.count()

100000

5) restore the recovered data to production:

Back up on 21.115:

mongodump -h 10.9.21.115 -u liuwenhe -p liuwenhe -d liuwenhe -c hezi --authenticationDatabase admin -o /data/mongodb/

Then restore directly from 21.115 to production, provided the network is reachable; if not, copy the backup files to production first, then restore:

[mongod@beijing-fuli-hadoop-04 li]$ mongorestore -h 10.9.21.114 -u liuwenhe -p liuwenhe -d liuwenhe -c hehe --noIndexRestore --authenticationDatabase admin --dir /data/mongodb/backup/li/hezi.bson

The first sub-case of case 2:

If the oplog.rs entries between backup end B and the erroneous delete C are not overwritten, then all the data can be recovered.

The recovery process is as follows:

1. Restore consistent full backup:

[mongod@beijing-fuli-hadoop-03 /data]$ mongorestore -h 10.9.21.115 -u liuwenhe -p liuwenhe --oplogReplay --authenticationDatabase admin --dir /data/mongodb/backup/backup/

2. Then apply the oplog incrementally, using the point in time at which the hezi collection was deleted, found in the backed-up oplog.rs file. Because oplog entries are idempotent and can be replayed repeatedly without causing inconsistency, there is no need to export oplog.rs incrementally:

mongorestore -h 10.9.21.115 -u liuwenhe -p liuwenhe --oplogReplay --oplogLimit "1575025894:1" --authenticationDatabase admin --dir /data/mongodb/backup/backup/local/oplog.rs.bson

The second sub-case of case 2:

If the oplog.rs entries between backup end B and the erroneous delete C have been overwritten, you can only hope that the operations on this collection were not among those overwritten, in which case all the data can still be recovered. Frankly, in this situation there is no way to confirm whether the oplog entries for this collection survived; you can only attempt the restore. The process is the same as in the first sub-case of case 2.

To sum up: point-in-time recovery in MongoDB is similar to recovery with the binlog in MySQL. The difference is that MySQL requires finding a specific GTID point and applying changes incrementally, whereas MongoDB's oplog entries are idempotent and can be applied multiple times, which makes oplog-based recovery in MongoDB easier to operate.
