Common hot backup schemes

LVM scheme
Linux's LVM can be used for hot backups: place the MySQL data directory on an LVM logical volume, then back up the contents of the volume through LVM snapshots. The first backup is a full backup, and all subsequent backups are incremental. When restoring, copy the data directory captured in the snapshot back into MySQL's data directory.

LVM can back up not only MySQL but also other databases such as MongoDB. As a hot backup scheme, however, LVM is cumbersome: you have to manually create logical volumes, migrate the data directory, create snapshots, lock the database, and so on. For that reason LVM is not a common hot backup scheme.
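For concreteness, here is a minimal sketch of the LVM approach. It assumes the data directory lives on a hypothetical logical volume /dev/vg0/mysql-data; all names and sizes are illustrative. The read lock must stay held while the snapshot is created, which is why the snapshot is taken from inside the same client session via the mysql client's system command (Unix only):

[root@PXC-Node3 ~]# mysql -uroot -p <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql-data
UNLOCK TABLES;
EOF
# mount the snapshot read-only, archive it, then drop the snapshot
[root@PXC-Node3 ~]# mkdir -p /mnt/mysql-snap
[root@PXC-Node3 ~]# mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
[root@PXC-Node3 ~]# tar -czf /home/backup/mysql-lvm.tar.gz -C /mnt/mysql-snap .
[root@PXC-Node3 ~]# umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap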
XtraBackup scheme
Because LVM is so cumbersome, people wanted a dedicated tool for hot backups, and that tool is XtraBackup. XtraBackup is a free hot backup tool open-sourced by Percona that can back up databases using the InnoDB and XtraDB storage engines without blocking them. XtraBackup does not interrupt ongoing transactions during a backup, and the transaction log records what was written before and during the backup, so no lock is needed.

In addition, XtraBackup can compress the backup data, saving the disk space and network bandwidth that backup files occupy. XtraBackup does add a read lock when backing up tables that use the MyISAM storage engine, so the data in those tables is readable but not writable during the backup; as mentioned earlier, though, the read-lock problem can be worked around with online hot backup, so this is not a real obstacle. XtraBackup likewise supports both full and incremental backups. Because of this convenience, XtraBackup is generally the hot backup solution of choice.
Principle of XtraBackup hot backup
Since XtraBackup is the mainstream hot backup solution for MySQL, here is a brief introduction to how its hot backup works:

XtraBackup is a physical backup tool: it connects to the MySQL server, then reads and copies the underlying data files to produce a physical backup. Note that XtraBackup supports both full and incremental backups of InnoDB, but only full backups of MyISAM.

Knowing that XtraBackup produces a physical backup by reading and copying the underlying data files, a full backup is relatively simple: back up the entire contents of the data files directly. An incremental backup, on the other hand, has to distinguish new data from old and back up only the new data, which is slightly more involved.
The principle of XtraBackup incremental backup is as follows:
An incremental backup copies only the new data, so XtraBackup reads the contents of the data files to determine which data is old and which is new, and backs up only the new data. In MySQL's data files, data is stored in rows, rows are stored in pages, and pages are stored in extents.

MySQL stamps each page with an LSN (log sequence number), and by comparing these numbers you can tell which data is new and which is old. XtraBackup then only needs to back up the pages containing new data from the data files.
Schematic diagram (figure not reproduced).
So how does XtraBackup know which LSNs are new? First, the LSN is a globally increasing number: each time the data in a page is modified, the page is stamped with a new, larger LSN. Suppose there are six pages, each carrying an LSN as shown in the original figure (not reproduced): a page stamped with LSN 3 has been modified 3 times, a page stamped with LSN 5 has been modified 5 times, and so on. A full backup taken at this point backs up all six pages. After a while, three of the pages are modified again and their LSNs change accordingly.
When performing an incremental backup, XtraBackup compares the LSN of each previously backed-up page with the LSN of the corresponding page in the data file. If the LSN in the data file is greater than the LSN in the backup, the page holds new data and XtraBackup backs it up; if the two are equal, the data has not changed and the page is skipped. Because LSNs increase globally, the data file's LSN can never be smaller than the backup's. This is how XtraBackup incremental backups work.
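In practice this bookkeeping is visible in the xtrabackup_checkpoints file that XtraBackup writes into every backup directory: it records the LSN range the backup covers, and an incremental backup resumes from the previous backup's to_lsn. A hedged illustration of the file's contents for a full backup (the file name and field names are real; the LSN values are made up):

backup_type = full-backuped
from_lsn = 0
to_lsn = 163524789
last_lsn = 163524789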
Install XtraBackup
Now that we know what XtraBackup is and how its hot backup works, let's practice using the tool. First, install Percona's official yum repository:
[root@PXC-Node3 ~]# yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
Then activate the yum repository:
[root@PXC-Node3 ~]# percona-release setup ps80
You can now install XtraBackup locally through the yum command:
[root@PXC-Node3 ~]# yum install -y percona-xtrabackup-24
After the installation is complete, output the version information to verify that the xtrabackup command is available:
[root@PXC-Node3 ~]# xtrabackup --version
xtrabackup: recognized server arguments: --server-id=3 --datadir=/var/lib/mysql --log_bin
xtrabackup version 2.4.18 based on MySQL server 5.7.26 Linux (x86_64) (revision id: 29b4ca5)
[root@PXC-Node3 ~]#

Common commands
XtraBackup provides the following commands:

1. xbcrypt: encrypts or decrypts backup data.
2. xbstream: packs or unpacks xbstream archive files.
3. xtrabackup: backs up tables that use InnoDB or XtraDB as the storage engine. This command backs up only the table data files, so it is usually used for incremental backups.
4. innobackupex: a Perl script wrapping the three commands above. It can also back up MyISAM tables, and it is the only command that backs up table data files, table definition files, and table index files together, so it is usually used for full hot backups.
The flowchart of backing up an InnoDB table with the innobackupex command (figure not reproduced).

An example of a full hot backup with the innobackupex command:
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --host=192.168.190.134 --user=admin --password=Abc_123456 --port=3306 /home/backup

--defaults-file: path to the MySQL configuration file, which XtraBackup reads to locate the data directory
--host: IP address of the MySQL service; a full backup has to connect to MySQL because the table structure must be backed up under a read lock
--user: the MySQL user to perform the backup as
--password: that MySQL user's password
--port: port number of the MySQL service
/home/backup: the directory where the backup files are stored
The backup file directory is as follows:
[root@PXC-Node3 ~]# ls /home/backup/
2020-01-26_10-33-29
[root@PXC-Node3 ~]# ls /home/backup/2020-01-26_10-33-29/
backup-my.cnf   ibdata1  performance_schema  test  xtrabackup_binlog_info  xtrabackup_info
ib_buffer_pool  mysql    sys                 tpcc  xtrabackup_checkpoints  xtrabackup_logfile
[root@PXC-Node3 ~]#
This is the simplest example of a full hot backup. As you can see, many directories and files are backed up, and none of them are compressed. When the amount of data to back up is large, uncompressed backup files occupy a lot of storage space. Compressing them afterwards with ordinary tools such as tar or zip requires two rounds of I/O, because the data must first be backed up with XtraBackup before the resulting files can be compressed, so with large backups the impact on system I/O and CPU is noticeable.

Fortunately, XtraBackup provides streaming compression: backup data is written directly into a compressed file instead of being backed up first and compressed afterwards, so only one round of I/O is needed.

To use streaming compression in a backup, just specify the --stream parameter, as in the following example:
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --host=192.168.190.134 --user=admin --password=Abc_123456 --port=3306 --no-timestamp --stream=xbstream > /home/backup/backup.xbstream

--no-timestamp: do not create a timestamp-named backup directory
--stream: the stream format to use; currently only tar and xbstream are supported
/home/backup/backup.xbstream: the name of the generated compressed file
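A hedged aside, not shown in the original article: if the tar format is chosen instead of xbstream, the resulting archive has to be extracted with tar's -i (ignore zeros) option, because the stream concatenates several archives:

[root@PXC-Node3 ~]# tar -xivf /home/backup/backup.tar -C /home/backup/temp/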
When backing up privacy-sensitive data, we may want to encrypt the backup files to limit the damage if they are accidentally leaked. XtraBackup also provides encrypted backups. The encryption-related parameters are as follows:

1. --encrypt: the encryption algorithm to use: AES128, AES192, or AES256
2. --encrypt-threads: the number of threads performing encryption
3. --encrypt-chunk-size: the working buffer size of each encryption thread; default 64KB, and it must not exceed 1MB
4. --encrypt-key: the key string used for encryption and decryption, at least 24 characters long
5. --encrypt-key-file: the path of a key file
Example of an encrypted backup:
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --host=192.168.190.134 --user=admin --password=Abc_123456 --port=3306 --no-timestamp --stream=xbstream --encrypt=AES256 --encrypt-threads=10 --encrypt-chunk-size 512 --encrypt-key='1K!cNoq&RUfQsY&&LAczTjco' > /home/backup/backup.xbstream
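As a variant sketch, the key can be kept out of the shell history with --encrypt-key-file instead of --encrypt-key. The key file path below is hypothetical, and the key must be stored without a trailing newline:

# generate a 32-character key string and store it without a trailing newline
[root@PXC-Node3 ~]# echo -n $(openssl rand -base64 24) > /etc/backup.key
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 --no-timestamp --stream=xbstream --encrypt=AES256 --encrypt-key-file=/etc/backup.key > /home/backup/backup_keyfile.xbstream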
Other commonly used parameters:
1. --compress: compress the InnoDB data files; it needs to be used together with the --stream parameter
2. --compress-threads: the number of threads performing compression
3. --compress-chunk-size: the working buffer size of each compression thread; default 64KB, and it must not exceed 1MB
4. --include: a regular expression describing the data tables to back up
5. --galera-info: back up the PXC node state file
Example of use:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --host=192.168.190.134 --user=admin --password=Abc_123456 --port=3306 --no-timestamp --stream=xbstream --encrypt=AES256 --encrypt-threads=10 --encrypt-chunk-size 512 --encrypt-key='1K!cNoq&RUfQsY&&LAczTjco' --compress --compress-threads=10 --include=test.student,test.t_orders --galera-info > /home/backup/backup2.xbstream

--include: back up only the student and t_orders tables in the test database; a regular expression can also be written here

Full cold restore
Full hot backup has been covered above; now let's look at how to restore the files backed up by XtraBackup. Restores can only be cold restores, that is, restores performed after the database has been shut down. There is no such thing as a hot restore, because restoring onto a running database while users are reading and writing could cause data to overwrite each other and leave the database in an inconsistent state.

Since restores can only be cold restores, this section demonstrates step by step how to restore backup files with XtraBackup. First, stop the MySQL service:
[root@PXC-Node3 ~]# systemctl stop mysqld
Clear the data directory and the table-partition data directories:

[root@PXC-Node3 ~]# rm -rf /var/lib/mysql/*
[root@PXC-Node3 ~]# rm -rf /mnt/p0/data/*
[root@PXC-Node3 ~]# rm -rf /mnt/p1/data/*

Tips: rm is used to delete directly here because this is an example. In a real production environment it is better to rename the directories with mv first, and only delete them with rm after the restore has completed and the data has been verified, to avoid the tragedy of "dropping the database and running away".
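A sketch of that safer mv-based routine (the renamed paths are illustrative):

[root@PXC-Node3 ~]# mv /var/lib/mysql /var/lib/mysql.old_$(date +%F)
[root@PXC-Node3 ~]# mkdir /var/lib/mysql && chown mysql:mysql /var/lib/mysql
# ... perform the restore, verify the data, then clean up:
[root@PXC-Node3 ~]# rm -rf /var/lib/mysql.old_*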
Because this is a hot backup, there may be uncommitted transactions in the transaction log, so uncommitted transactions must be rolled back and committed transactions replayed into the data files. Taking the full backup directory 2020-01-26_10-33-29 as the example, execute the following command:

[root@PXC-Node3 ~]# innobackupex --apply-log /home/backup/2020-01-26_10-33-29/
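It is worth confirming that this prepare step succeeded before copying anything back; innobackupex ends its output with a status line like the following:

... completed OK!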
Then restore the backup file using the following command:
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --copy-back /home/backup/2020-01-26_10-33-29/
Then give the mysql user ownership of the restored directories and files:

[root@PXC-Node3 ~]# chown -R mysql:mysql /var/lib/mysql/*
[root@PXC-Node3 ~]# chown -R mysql:mysql /mnt/p0/data/*
[root@PXC-Node3 ~]# chown -R mysql:mysql /mnt/p1/data/*
At this point the cold restore is complete. Finally, start the MySQL service and verify that the data is normal:
[root@PXC-Node3 ~]# systemctl start mysqld
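One way to spot-check the restored data, assuming the admin account and the test.student table from the earlier examples exist:

[root@PXC-Node3 ~]# mysql -uadmin -pAbc_123456 -e "SELECT COUNT(*) FROM test.student;"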
The demonstration above covers backups made without streaming compression or encryption. If the backup file used streaming compression, it must first be extracted with the xbstream command. For example:

# create a directory to hold the extracted files
[root@PXC-Node3 ~]# mkdir /home/backup/temp
[root@PXC-Node3 ~]# xbstream -x < /home/backup/backup2.xbstream -C /home/backup/temp/

If the backup file was also encrypted, it needs to be decrypted after extraction:

[root@PXC-Node3 ~]# innobackupex --decompress --decrypt=AES256 --encrypt-key='1K!cNoq\&RUfQsY\&\&LAczTjco' /home/backup/temp

Tips: because & is a special character, it has to be escaped with \. The --decompress option is the counterpart of --compress: it decompresses the compressed InnoDB data files and has nothing to do with decryption.

Incremental hot backup

An incremental hot backup must be based on a full hot backup, so now that XtraBackup's full hot backup and full cold restore have been covered, we can practice XtraBackup's incremental hot backup.

Notes:
Whether the full hot backup used streaming compression or content encryption, it must first be extracted or decrypted into an ordinary backup directory.
Incremental hot backups can likewise use streaming compression and content encryption.

Taking the full backup directory 2020-01-26_10-33-29 as the example, the incremental hot backup command is as follows:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --host=192.168.190.134 --user=admin --password=Abc_123456 --port=3306 --incremental-basedir=/home/backup/2020-01-26_10-33-29/ --incremental /home/backup/increment

--incremental-basedir: the directory holding the full backup files, i.e. which full backup the incremental backup is based on
--incremental: perform an incremental backup
/home/backup/increment: the directory where the incremental backup files are stored

The incremental backup's file directory looks like this:

[root@PXC-Node3 ~]# ls /home/backup/increment/
2020-01-26_17-02-21
[root@PXC-Node3 ~]# ls /home/backup/increment/2020-01-26_17-02-21/
backup-my.cnf   ibdata1.delta  mysql               sys   tpcc                    xtrabackup_checkpoints  xtrabackup_logfile
ib_buffer_pool  ibdata1.meta   performance_schema  test  xtrabackup_binlog_info  xtrabackup_info
[root@PXC-Node3 ~]#

The du command shows how the full and incremental backup directories compare in size:

[root@PXC-Node3 ~]# du -sh /home/backup/increment/2020-01-26_17-02-21/
3.3M    /home/backup/increment/2020-01-26_17-02-21/    # size of the incremental hot backup
[root@PXC-Node3 ~]# du -sh /home/backup/2020-01-26_10-33-29/
836M    /home/backup/2020-01-26_10-33-29/              # size of the full hot backup
[root@PXC-Node3 ~]#

A second incremental backup can then be based not on the full backup but on the first incremental backup, so that each incremental backup becomes a backup point, much like a snapshot. For example:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 --incremental-basedir=/home/backup/increment/2020-01-26_17-02-21/ --incremental /home/backup/increment

If the incremental backup should use streaming compression and content encryption, just add the relevant parameters. For example:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 --incremental-basedir=/home/backup/increment/2020-01-26_17-02-21/ --incremental --stream=xbstream --encrypt=AES256 --encrypt-threads=10 --encrypt-chunk-size 512 --encrypt-key='1K!cNoq&RUfQsY&&LAczTjco' ./ > /home/backup/increment

Tips: the ./ here means that the entire contents of the incremental backup are written into the streamed compressed file, and the compressed file is stored under the /home/backup/increment directory.
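Tying this back to the LSN mechanism described earlier: the xtrabackup_checkpoints file in an incremental backup directory records a non-zero from_lsn, namely the to_lsn of the backup it is based on (the LSN values below are illustrative):

[root@PXC-Node3 ~]# cat /home/backup/increment/2020-01-26_17-02-21/xtrabackup_checkpoints
backup_type = incremental
from_lsn = 163524789
to_lsn = 163538920
last_lsn = 163538920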
Incremental cold restore

As the previous section shows, an incremental hot backup backs up only new data, so the backup directory it produces is much smaller than that of a full hot backup. How, then, does XtraBackup restore incremental backup data into the database? It is actually simple: first merge the incremental hot backup data into the full hot backup data, then restore from the merged backup.
Incremental hot backups can have many backup points because, except for the first one, every incremental hot backup is based on the previous incremental hot backup. When restoring you can therefore choose any backup point to restore to, but the handling of the transaction logs differs from a full cold restore.

The full cold restore demonstrated earlier included a step for processing the transaction log, and an incremental cold restore has the same step, with some differences. As mentioned above, there can be multiple backup points, and when restoring to a given backup point you must process the transaction logs of that backup point and of every backup point before it, otherwise the data will become inconsistent. The figure referenced here (not reproduced) shows three backup points.

For example, to restore to "incremental backup 1", first process the transaction log of the backup point before it, i.e. the "full hot backup" in the figure. Then process the transaction log of "incremental backup 1" and merge its data into the "full hot backup". Only then is the data merged into the full backup point consistent, and the data in the "full hot backup" is finally restored.

Likewise, to restore to "incremental backup 2", first process the "full hot backup", then "incremental backup 1", then "incremental backup 2". After the transaction logs of the three backup points have been processed in order from oldest to newest, their data can be merged into the full backup, and the data in the "full hot backup" is finally restored. The same pattern extends to any later backup points.
Next, let's perform the incremental cold restore. The three backup point directories corresponding to the figure above are:

/home/backup/2020-01-26_10-33-29/             # full hot backup
/home/backup/increment/2020-01-27_10-11-24/   # incremental backup 1
/home/backup/increment/2020-01-27_10-15-11/   # incremental backup 2
Because it is a cold restore, you have to shut down the MySQL service first:
[root@PXC-Node3 ~]# systemctl stop mysqld
In this example we restore to the backup point "incremental backup 2". As described earlier, first process the transaction log of the full backup point:

[root@PXC-Node3 ~]# innobackupex --apply-log --redo-only /home/backup/2020-01-26_10-33-29/

--redo-only: do not roll back uncommitted transactions, because a transaction that is uncommitted at this backup point may be committed in the transaction log of the next backup point. Rolling it back here would prevent the next backup point from committing it properly.
Then process the transaction log for incremental backup 1 and merge the data from incremental backup 1 into the full backup point:
[root@PXC-Node3 ~]# innobackupex --apply-log --redo-only /home/backup/2020-01-26_10-33-29/ --incremental-dir=/home/backup/increment/2020-01-27_10-11-24/

--incremental-dir: the incremental backup directory to merge into the full backup
Then process the transaction log of incremental backup 2 and merge its data into the full backup point. Since we are only restoring up to the backup point "incremental backup 2" and there is no next backup point, the --redo-only parameter is omitted this time:

[root@PXC-Node3 ~]# innobackupex --apply-log /home/backup/2020-01-26_10-33-29/ --incremental-dir=/home/backup/increment/2020-01-27_10-15-11/
As with a full cold restore, the data directory and the table-partition data directories must also be emptied:

[root@PXC-Node3 ~]# rm -rf /var/lib/mysql/*
[root@PXC-Node3 ~]# rm -rf /mnt/p0/data/*
[root@PXC-Node3 ~]# rm -rf /mnt/p1/data/*
After the above steps, restore the backup files with the following command:
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --copy-back /home/backup/2020-01-26_10-33-29/    # note that the full backup point's directory is specified here
Then give the mysql user ownership of the restored directories and files:

[root@PXC-Node3 ~]# chown -R mysql:mysql /var/lib/mysql/*
[root@PXC-Node3 ~]# chown -R mysql:mysql /mnt/p0/data/*
[root@PXC-Node3 ~]# chown -R mysql:mysql /mnt/p1/data/*
At this point the restore is complete. Finally, start the MySQL service and verify that the data is normal:
[root@PXC-Node3 ~]# systemctl start mysqld