
How to implement backup Compression with MongoDB

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Background and principle

Database backups are the last line of defense in disaster recovery, and every type of database needs them; MongoDB is no exception. Since MongoDB 3.0, the database can use the WiredTiger storage engine (the default as of version 3.2). In this environment, the backup files produced by mongodump are much larger than the data files on disk. Because MongoDB datasets tend to be large, the backups are correspondingly large and consume a lot of disk space. It is therefore worth studying how to compress MongoDB backups.

The image above shows the output of running db.stats() to view a database's statistics.

The backup size is generally close to dataSize, so the goal of compression is to shrink the backup down to storageSize or less.

The usual approach is to back up first and then compress the backup files. This is what we used previously; the main compression command was:

tar -cf - ${targetpath}/${nowtime} | pigz -p 10 > ${targetpath}/${nowtime}.tgz

(Command explanation: ${targetpath}/${nowtime} is the backup directory to be compressed; pigz is a parallel compression tool for Linux; -p specifies the number of CPU cores to use.)
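The "back up first, then compress" pipeline can be sketched end to end. This is a minimal sketch with a throwaway directory standing in for the real dump output (mktemp -d and the dummy file are assumptions for illustration), falling back to plain gzip when pigz is not installed:

```shell
# Sketch of the back-up-then-compress approach; paths are placeholders.
targetpath=$(mktemp -d)            # assumption: stands in for the real backup root
nowtime=$(date +%Y%m%d)
mkdir -p "${targetpath}/${nowtime}"
echo "dummy dump data" > "${targetpath}/${nowtime}/collection.bson"

# Prefer pigz (parallel, -p = number of CPU cores); fall back to gzip.
if command -v pigz >/dev/null 2>&1; then ZIP="pigz -p 10"; else ZIP="gzip"; fi

# tar streams the directory to stdout; the compressor writes the .tgz.
tar -cf - -C "${targetpath}" "${nowtime}" | $ZIP > "${targetpath}/${nowtime}.tgz"
ls -l "${targetpath}/${nowtime}.tgz"
```

Note the weakness the article points out: the uncompressed dump must exist on disk before the pipeline runs, which is what causes the space pressure described below.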

However, with this approach the intermediate backup files put pressure on both disk performance and disk space while they are being generated. The following figure shows the change in free disk space caused by backing up before compressing on one of our servers.

What we really want is to compress while backing up, so that available space stays stable. Compressed backups were introduced in MongoDB 3.2 (so the MongoDB version must be at least 3.2). They use gzip for compression, implemented through a new command-line option, --gzip, in mongodump and mongorestore.

Compression works for backups created in both directory mode and archive mode, and in both cases it reduces disk space usage.
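As a sketch of the archive-mode variant (host, port, and paths are the placeholders from the test table below, and the command is only echoed here rather than executed against a live server), --gzip combines with --archive to produce a single compressed file instead of a directory tree of .gz files:

```shell
# Dry-run sketch: print an archive-mode compressed backup command.
# Remove the echo to run it against a real mongod.
HOST=172.X.X.245
PORT=17219
ARCHIVE="/data/mongodb_back/QQ_DingDing.$(date +%Y%m%d).archive"
echo mongodump --host "$HOST" --port "$PORT" --gzip --archive="$ARCHIVE"
```

A single archive file is often easier to copy with scp than a directory of many small .gz files.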

Test

Test environment:

Test server      Test database      Port     File path
172.X.X.245      Full instance      17219    /data/mongodb_back
172.X.X.246      QQ_DingDing        17218    /data/mongodb_back/QQ_DingDing

Step 1: run the compressed backup command:

./mongodump --host 172.X.X.245 --port 17219 -u username -p "password" --gzip --authenticationDatabase "admin" --out /data/mongodb_back

The resulting backup size is 97 MB.

At this point the backup files are written in .gz format.
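Because each dumped file is now plain gzip, its integrity can be checked without performing a restore. A minimal sketch, using a throwaway file in place of the real backup directory (the mktemp path and dummy file are assumptions for illustration):

```shell
backdir=$(mktemp -d)               # assumption: stands in for /data/mongodb_back
echo "dummy bson payload" | gzip > "${backdir}/collection.bson.gz"

# gzip -t decompresses in memory and exits non-zero on corruption.
for f in "${backdir}"/*.gz; do
    gzip -t "$f" && echo "OK: $f"
done
```

Running this over a real backup directory gives a quick sanity check before the files are copied off-host.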

Step 2: copy the backup files to the remote machine for the restore.

On 172.X.X.246, run the following command to copy the files from X.245 to the local machine:

scp -r root@172.X.X.245:/data/mongodb_back/QQ_DingDing /data/mongodb_back/

Step 3: execute the restore command:

./mongorestore --host 172.X.X.246 --port 17218 -d QQ_DingDing -u username -p "password" --gzip --authenticationDatabase "admin" /data/mongodb_back/QQ_DingDing

Log in to MongoDB after the restore and run show dbs; the data size at this point is 500 MB.

Supplementary explanation

(1) How large would the backup be without compression? Backup command:

./mongodump --host 172.X.X.245 --port 17219 -u username -p "password" --authenticationDatabase "admin" --out /data/mongodb_back2

The backup produced this way is 1.5 GB.

Taking the QQ_DingDing database as an example, the compression ratio (compressed size divided by uncompressed size) is 97 MB / 1.5 GB = 97 / 1536 ≈ 6.3%.
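The arithmetic can be reproduced in one line (97 MB compressed, 1.5 GB = 1536 MB uncompressed):

```shell
# Compression ratio: compressed size / uncompressed size, as a percentage.
awk 'BEGIN { printf "%.1f%%\n", 97 / 1536 * 100 }'   # prints: 6.3%
```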

(2) Does compressed backup bring drawbacks, such as a longer backup time? (Does restore time increase as well? We leave that for you to test.)

Take the instance hosting an archived backup database as an example (storageSize 150 GB, dataSize 600 GB):

Backing up first and then compressing takes 1 hour 55 minutes.

Compressed backup (with the --gzip option) takes 2 hours 33 minutes.

The resulting backup files are about the same size; the one produced by compressed backup is slightly smaller.

So compressed backup does increase backup time.

From the perspective of space usage, however, we still recommend compressed backup, which achieves a very high compression ratio (6.3% in our test case).

Appendix: periodic cleanup, keeping 7 days of backups

#!/bin/bash
targetpath='/backup/mongobak'
nowtime=$(date -d '-7 days' "+%Y%m%d")
if [ -d "${targetpath}/${nowtime}/" ]
then
    rm -rf "${targetpath}/${nowtime}/"
    echo "==== ${targetpath}/${nowtime}/ deletion completed ===="
fi
echo "==== $nowtime ===="
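A variant of the cleanup can be sketched with find -mtime, which also catches stragglers when a daily run was skipped. Shown here against a throwaway directory (mktemp and the touch -d timestamps are assumptions for illustration; in real use targetpath would be /backup/mongobak):

```shell
targetpath=$(mktemp -d)            # assumption: stands in for /backup/mongobak
mkdir -p "${targetpath}/old" "${targetpath}/new"
touch -d '10 days ago' "${targetpath}/old"   # simulate a week-old backup dir

# Delete any first-level backup directory whose mtime is older than 7 days.
find "${targetpath}" -mindepth 1 -maxdepth 1 -type d -mtime +7 \
     -exec rm -rf {} \; -exec echo "deleted: {}" \;
ls "${targetpath}"
```

Unlike the date-arithmetic version, this does not depend on the directory being named after its creation date.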

Summary

That is the whole of this article. I hope its content offers some reference value for your study or work; if you have any questions, feel free to leave a comment and discuss. Thank you for your support.
