2025-01-15 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article is about the advanced usage of rsync for large backups on Linux. The editor finds it very practical and shares it here as a reference; follow along for a closer look.
The basic rsync command is usually sufficient to manage your Linux backups, but additional options make large backup sets faster and more powerful.
Obviously, backup has always been a hot topic in the Linux world. Back in 2017, David Both gave Opensource.com readers some advice on using rsync to back up Linux systems, and earlier this year he ran a poll asking readers about their main backup strategy for the /home directory in Linux. In another survey this year, Don Watkins asked which open source backup solution readers use.
My reply is rsync. I really like rsync! There are plenty of large and complex tools on the market that may be necessary to manage tape drives or library devices, but a simple open source command line tool may be all you need.
Rsync basics
I manage binary repositories for a global organization with about 35,000 developers and dozens of terabytes of files. I regularly move or archive hundreds of gigabytes of data at a time, and rsync is what I use. This experience has given me confidence in this simple tool. So, yes, I use it at home to back up my Linux systems.
The basic rsync command is simple.
rsync -av SOURCE_DIR DESTINATION_DIR
In fact, the rsync commands taught in various guides work well in most general situations. However, suppose we need to back up a very large amount of data: for example, a directory containing 2,000 subdirectories, each holding anywhere from 50GB to 700GB of data. Running rsync against this directory can take a huge amount of time, particularly if you use the checksum option (which I prefer).
We may run into performance problems when synchronizing large amounts of data or working over slow network connections. Let me show you some of the methods I use to ensure good performance and reliability.
Advanced usage of rsync
When rsync runs, it prints "sending incremental file list." If you search for this line online, you'll see a lot of similar questions: why does it keep running, or why does it seem to hang?
Here is an example based on this scenario. Suppose we have a directory /storage that we want to back up to an external USB disk. We can use the following command:
rsync -cav /storage /media/WDPassport
The -c option tells rsync to use file checksums instead of timestamps to determine which files have changed, which usually takes longer. To break down the /storage directory, I synchronize by subdirectory, using the find command. Here is an example:
find /storage -type d -exec rsync -cav {} /media/WDPassport \;
This looks fine, but any files sitting directly in the /storage directory will be skipped. So how do we synchronize those files? There is also a subtlety: these options cause rsync to synchronize the "." directory, which is the source directory itself; this means it synchronizes the directory twice, which is not what we want.
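The "." problem can be seen directly with find on a throwaway directory (example paths only). Run from inside the source, `find . -type d` includes "." itself, so the source would be synced twice; limiting depth and filtering out "." leaves only the real subdirectories:

```shell
# Build a small throwaway tree: two subdirectories plus a top-level file.
set -eu
SRC=$(mktemp -d)
mkdir -p "$SRC/music" "$SRC/photos"
touch "$SRC/notes.txt"
cd "$SRC"

# Without the filter, "." appears alongside the subdirectories.
find . -maxdepth 1 -type d

# With the filter, only the real subdirectories remain.
find . -maxdepth 1 -type d -not -name "."
```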
To make a long story short, my solution is a "double-incremental" script. This lets me break down a directory: for example, when a home directory contains multiple large directories, such as music or family photos, I can break the /home directory down into the individual user home directories.
This is an example of my script:
HOMES="alan"
DRIVE="/media/WDPassport"

for HOME in $HOMES; do
    cd /home/$HOME
    rsync -cdlptgov --delete . $DRIVE/$HOME
    find . -maxdepth 1 -type d -not -name "." -exec rsync -crlptgov --delete {} $DRIVE/$HOME \;
done
The first rsync command copies the files and directories it finds in the source directory. However, it does not recurse into the directories, so we can iterate over them with the find command. This is accomplished by passing the -d argument, which tells rsync not to recurse into directories.
-d, --dirs    transfer directories without recursing
The find command then passes each directory to a separate rsync run, which copies the directory's contents. This is accomplished by passing the -r argument, which tells rsync to recurse into the directory.
-r, --recursive    recurse into directories
This keeps the incremental file list that rsync builds at a reasonable size.
Most rsync guides use the -a (archive) argument for convenience. This is actually a compound argument.
-a, --archive    archive mode; equals -rlptgoD (no -H, -A, -X)
The other arguments I pass are all contained in -a; these are -l, -p, -t, -g, and -o.
-l, --links    copy symlinks as symlinks
-p, --perms    preserve permissions
-t, --times    preserve modification times
-g, --group    preserve group
-o, --owner    preserve owner (super-user only)
The --delete option tells rsync to remove any files in the destination directory that do not exist in the source directory. This way, the result of each run is an exact replica. You can also add exceptions, such as excluding the .Trash directories or the .DS_Store files created by MacOS.
-not -name ".Trash*" -not -name ".DS_Store"
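These filters slot into the find pass, as this sketch on throwaway directories shows (the names below are examples of the clutter being excluded):

```shell
# One real data directory plus a MacOS-style trash directory.
set -eu
SRC=$(mktemp -d)
mkdir -p "$SRC/photos" "$SRC/.Trash-1000"
cd "$SRC"

# Only the real data directory survives the filters, so rsync never sees
# the trash directory at all.
find . -maxdepth 1 -type d -not -name "." -not -name ".Trash*" -not -name ".DS_Store"
```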
Caution
A word of caution: rsync can be a destructive command. Luckily, its thoughtful creators provided the ability to do a "dry run". If we include the -n option, rsync displays the expected output but does not write any data.
rsync -cdlptgovn --delete . $DRIVE/$HOME
Thank you for reading! This is the end of the article on advanced usage of rsync for large backups in Linux. I hope the above content has been of some help and that you have learned something from it. If you think the article is good, share it for more people to see!