
A detailed explanation of various database backup and restore approaches


Cold backup and hot backup of database

Data export is not exactly equal to data backup:

Data export means reverse-engineering the data in the database into SQL statements, so what you export is a SQL file. It is usually used to migrate data from one system to another, with the goal of shielding the differences between systems. Data backup means copying the files in which the database stores its data, preserving all of a database's physical data; the backed-up data is therefore identical to the original in every detail and state. This is unlike SQL, where certain functions may produce different results at different points in time or on different systems.
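To make the distinction concrete, here is a minimal sketch (assuming the admin account and the test library used later in this article): the export produces a SQL file, while the backup copies the physical files.

# data export: reverse the rows into SQL statements (a logical, portable copy)
mysqldump -uadmin -pAbc_123456 test > /home/test-export.sql

# data backup: copy the physical files of the database (an exact, byte-for-byte copy)
tar -zcvf /home/mysql-backup.tar.gz /var/lib/mysql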

Cold backup and hot backup:

Cold backup: a backup taken while the database is shut down. Hot backup: the opposite of cold backup, i.e. a backup taken while the database node is still running.

Restrictions on cold backup:

The database must be taken offline for the backup, which is unacceptable for some online databases; the backup copies every data file, so it occupies a lot of storage space; and incremental backup is not supported. Because a cold backup copies all data files and log files, you also cannot back up an individual logic library or data table separately.

Online cold backup:

A single-node database must be shut down during a cold backup, which impacts the business system. To solve this, we can build a cluster and pick one node in the cluster for the cold backup. Since the other nodes in the cluster keep running, there is no need to worry about affecting the running system. Start the node again after the backup finishes; this eliminates the downtime the backup would otherwise cause.

Restrictions on hot backup:

During a hot backup, the database takes a global read lock. While the backup runs, the node can only read data, not write it.

Online hot backup:

In the same way, to avoid the global lock, we can pick one node in the cluster, temporarily stop its data synchronization with the other nodes, and run the hot backup on it. Once the backup completes, restore its synchronization relationship with the other nodes in the cluster. This way, only that one node holds a read lock during the backup, and the other nodes are unaffected.

How to choose between online hot backup and online cold backup:

Online hot backup is recommended, because hot backup supports both full and incremental backups, while cold backup supports only full backups. Once the data volume grows, a cold backup takes a long time and, being a full backup, also occupies more storage space. A hot backup only needs to be a full backup the first time; subsequent backups can incrementally back up just the new data. Hot backup therefore beats cold backup in both storage space and backup time.

Practicing online cold backup

Recognizing the data-related files in MySQL

The previous section mentioned that data backup means copying the files in which the database stores its data. There are quite a few of these files, so let's take a brief look at MySQL's data-related files.

First are the files that make up a logic library. In MySQL, a logic library actually consists of multiple files, structured as follows:

- OPT file: defines the character set and the collation of the character set; this is a text file.
- FRM file: the definition file of a data table, containing the table's structure information. Whatever storage engine a table uses, it always has an FRM definition file.
- ISL file: appears only when a table partition has been created; it stores the path where the table partition's data files are located.
- MyISAM tables:
  - MYD file: MyISAM data file
  - MYI file: MyISAM index file
- InnoDB tables:
  - IBD file: InnoDB index and data file; this is a binary file.
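For example, listing a logic library's directory might show files like these (the table names t_user and t_log are hypothetical):

ls /var/lib/mysql/test/
# db.opt                            <- character set and collation definition
# t_user.frm  t_user.ibd            <- an InnoDB table: definition file + data/index file
# t_log.frm  t_log.MYD  t_log.MYI   <- a MyISAM table: definition, data and index files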

The FRM file is used in many situations. For example, before executing a SQL statement, the database checks the FRM file to see whether the fields referenced in the statement are actually defined. When optimizing a SQL statement, the database also uses the FRM file to determine whether a field in the where clause is a primary key column or an index column. Because the FRM file is accessed so frequently, it is stored as a binary file for efficiency.

Other files in MySQL:

- auto.cnf: stores the UUID of the MySQL instance, namely server-uuid, which serves as the node's unique identifier in a cluster
- grastate.dat: saves the synchronization information of PXC
- gvwstate.dat: stores information about the other nodes in the PXC cluster
- .pem files: store certificate and key information for encryption and decryption
- .sock file: the socket file for local connections to MySQL
- .err file: the error log; all MySQL error messages are saved in this file
- .pid file: the MySQL process id file
- ib_buffer_pool: InnoDB cache file
- ib_logfile: InnoDB transaction log (redo)
- ibdata: InnoDB shared tablespace file
- log-bin: binary log (binlog) files
- index: binlog index file
- ibtmp: temporary tablespace file

Data file defragmentation

What is fragmentation in a data file:

We all know that writing data to a data table makes the data file grow. But when data is deleted from the file, the file's size does not shrink; the blank space left behind by deleted data is called fragmentation.

MySQL data files have always suffered from fragmentation. The reason MySQL does not automatically defragment and shrink data files is that the defragmentation process locks the table; defragmenting every time a row is deleted would severely impact reads and writes on the table. Moreover, when MySQL writes new data it fills the fragment space first, so fragmentation has little impact on day-to-day operation.

For database backup, however, heavy fragmentation means a data file that is very large while containing little meaningful data, wasting both storage space and transmission bandwidth. So it is worth defragmenting data files before a backup to shrink them as much as possible.
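To gauge how much fragmentation a table is carrying before deciding whether to defragment, one option is to query information_schema, whose data_free column reports the reclaimable space; a quick sketch, assuming the admin account from this article:

mysql -uadmin -pAbc_123456 -e "SELECT table_name, data_length, data_free FROM information_schema.tables WHERE table_schema = 'test';"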

The SQL statement to defragment data files in MySQL is as follows:

alter table ${table_name} engine=InnoDB;

It is important to note that before executing this SQL, remember to turn off the binlog on the database node used for the backup. Otherwise the statement gets recorded in the binlog, and after the backup node rejoins the cluster the other nodes will replay the statement, locking the table across the entire cluster. So comment out the following two parameters in the MySQL configuration file, and re-enable them after the backup is complete:

# log_bin
# log_slave_updates

Cold backup

With the background covered, this section demonstrates online cold backup step by step. I have prepared a PXC cluster of three nodes in advance.

First select any node in the cluster as the backup node, and then stop that node:

[root@PXC-Node3 ~]# systemctl stop mysqld

Edit the configuration file and comment out the binlog-related parameters:

[root@PXC-Node3 ~]# vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[mysqld]
...
#log-bin
#log_slave_updates

Then comment out the PXC-cluster-related parameters:

[root@PXC-Node3 ~]# vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
#wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
#wsrep_cluster_address=gcomm://192.168.190.132,192.168.190.133,192.168.190.134
#wsrep_slave_threads=8
#wsrep_log_conflicts
#wsrep_cluster_name=pxc-cluster
#wsrep_node_name=pxc-node-03
#pxc_strict_mode=DISABLED
#wsrep_sst_method=xtrabackup-v2
#wsrep_node_address=192.168.190.134
#wsrep_sst_auth=admin:Abc_123456

After making these changes, start the MySQL service; the node now runs outside the PXC cluster:

[root@PXC-Node3 ~]# systemctl start mysqld

This node's data directory takes up 1.1 GB:

[root@PXC-Node3 ~]# du -h /var/lib/* | grep /var/lib/mysql$
1.1G    /var/lib/mysql
[root@PXC-Node3 ~]#

In this example the test library is to be backed up, so all tables under the test library need to be defragmented before the backup. Because there are many tables, I wrote a simple Java program to do it:

import com.mysql.jdbc.Driver;

import java.sql.*;
import java.util.ArrayList;

/**
 * Data table defragmentation
 *
 * @author 01
 * @date 2020-01-25
 */
public class CleanFragments {

    public static void main(String[] args) throws SQLException {
        DriverManager.registerDriver(new Driver());
        String url = "jdbc:mysql://192.168.190.134:3306/test?useSSL=false";
        String username = "admin";
        String password = "Abc_123456";

        try (Connection connection = DriverManager.getConnection(url, username, password);
             Statement statement = connection.createStatement()) {
            // collect all table names first, since the Statement is reused for the alters
            ResultSet resultSet = statement.executeQuery("show tables;");
            ArrayList<String> tableNames = new ArrayList<>();
            while (resultSet.next()) {
                tableNames.add(resultSet.getString(1));
            }

            for (String tableName : tableNames) {
                // skip the t_range_1 table
                if ("t_range_1".equals(tableName)) {
                    continue;
                }
                System.out.println("Defragmenting table " + tableName + "...");
                statement.execute("alter table " + tableName + " engine=InnoDB;");
            }
        }
        System.out.println("finished...");
    }
}
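If pulling in Java just for this feels heavy, roughly the same loop can be sketched in shell (same skipped table, same credentials; a sketch, not a hardened script):

#!/bin/bash
# defragment every table in the test library except t_range_1
mysql -h192.168.190.134 -uadmin -pAbc_123456 -N -e "SHOW TABLES FROM test;" |
while read table; do
    [ "$table" = "t_range_1" ] && continue
    echo "Defragmenting table $table ..."
    mysql -h192.168.190.134 -uadmin -pAbc_123456 -e "ALTER TABLE test.\`$table\` ENGINE=InnoDB;"
done
echo "finished..."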

After the defragmentation is complete, stop the MySQL service because cold backup requires downtime:

[root@PXC-Node3 ~]# systemctl stop mysqld

Then the backup can begin. The backup process is actually very simple and uses no special technology: just use the tar command to pack the MySQL data directory into a compressed archive. My data directory here is /var/lib/mysql, so the commands are as follows:

# go to the /var/lib/ directory
[root@PXC-Node3 ~]# cd /var/lib/
# pack the data directory; mysql.tar.gz is the archive name, stored under /home
[root@PXC-Node3 /var/lib]# tar -zcvf /home/mysql.tar.gz ./mysql

Tips: the data directory is defined by the datadir parameter in the configuration file.
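Before relying on the archive, it is worth a quick sanity check that it is readable, for example:

# list the first few entries to confirm the archive is intact
[root@PXC-Node3 /var/lib]# tar -tzf /home/mysql.tar.gz | head
# optionally record a checksum so the file can be verified after transfer
[root@PXC-Node3 /var/lib]# md5sum /home/mysql.tar.gz > /home/mysql.tar.gz.md5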

If you have created table partitions and mapped them to other directories, the table partitions need to be packed as well. For example, I have two table partitions here, mapped to the /mnt/p0/data/ and /mnt/p1/data/ directories, so the commands are as follows:

[root@PXC-Node3 ~]# cd /mnt/
[root@PXC-Node3 /mnt]# tar -zcvf /home/p0.tar.gz ./p0/data/
[root@PXC-Node3 /mnt]# tar -zcvf /home/p1.tar.gz ./p1/data/

Once the backup is complete, restore the configuration items that were commented out, then restart the node so it rejoins the PXC cluster. There is nothing special about that step, so it is not demonstrated here.

Cold restore

After demonstrating how to make a cold backup, the next step is to demonstrate how to cold restore the backup files onto another PXC node. First transfer the backup files to the node to be restored; you can use the scp or rsync command to transfer files between Linux systems, for example:

[root@PXC-Node3 /home]# scp ./mysql.tar.gz 192.168.190.133:/home/mysql.tar.gz
[root@PXC-Node3 /home]# scp ./p0.tar.gz 192.168.190.133:/home/p0.tar.gz
[root@PXC-Node3 /home]# scp ./p1.tar.gz 192.168.190.133:/home/p1.tar.gz

The backup files received by the restore node are as follows:

[root@PXC-Node2 ~]# cd /home/
[root@PXC-Node2 /home]# ls
mysql.tar.gz  p0.tar.gz  p1.tar.gz
[root@PXC-Node2 /home]#

Some preparation is also needed: the backup node had table partitions mapped to dedicated data directories, so if those directories do not exist on the restore node, they must be created. For example:

[root@PXC-Node2 ~]# mkdir /mnt/p0/
[root@PXC-Node2 ~]# mkdir /mnt/p1/

Because this is a cold restore, the restore node, like the backup node, must be stopped first:

[root@PXC-Node2 ~]# systemctl stop mysqld

Restore the MySQL data directory with the following commands:

# move the original data directory aside as a precaution
[root@PXC-Node2 ~]# mv /var/lib/mysql /var/lib/mysql-backup
# extract the archive into the /var/lib/ directory
[root@PXC-Node2 /home]# tar -zxvf mysql.tar.gz -C /var/lib/

Then restore the table partition data directory:

[root@PXC-Node2 /home]# tar -zxvf p0.tar.gz -C /mnt/
[root@PXC-Node2 /home]# tar -zxvf p1.tar.gz -C /mnt/

Delete the auto.cnf file; otherwise the node cannot start, because its uuid would duplicate the backup node's:

[root@PXC-Node2 ~]# rm -rf /var/lib/mysql/auto.cnf
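For reference, auto.cnf is just a tiny INI-style file, and MySQL generates a fresh one (with a new server-uuid) at startup when the file is missing; its content looks something like this (the uuid below is only an example value):

cat /var/lib/mysql/auto.cnf
# [auto]
# server-uuid=8a94f357-aab4-11df-86ab-c80aa9429562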

If the node used as the backup node was the first node in the PXC cluster, you also need to change the safe_to_bootstrap parameter in the grastate.dat file to 0; for an ordinary node this is not necessary. For example:

[root@PXC-Node2 ~]# vim /var/lib/mysql/grastate.dat
...
safe_to_bootstrap: 0

Now that the restore is complete, start the MySQL service:

[root@PXC-Node2 ~]# systemctl start mysqld

All that remains is to verify that the node's data was restored correctly and that it can synchronize with the other nodes in the cluster. Finally, clean up the old data directory that was moved aside earlier:

[root@PXC-Node2 ~]# rm -rf /var/lib/mysql-backup

Practical use of cold backup:

When the PXC cluster needs a new node, you can cold back up the data of an existing node and restore it onto the new node. The new node then comes online with initial data, avoiding a full data synchronization with the cluster after startup, which would trigger the PXC flow-control mechanism and hurt the performance of the whole cluster.

Common hot backup schemes

Having covered cold backup and cold restore in the preceding sections, we now turn to hot backup. Hot backup backs up data while the database is running, and it is also the most difficult kind of backup. There are two common hot backup schemes for a PXC cluster: LVM and XtraBackup.

LVM scheme

Hot backup can be achieved with Linux's LVM: put MySQL's data directory on an LVM logical volume and back up the contents of the volume with LVM snapshots. The first backup is a full backup, and every backup after that is incremental. When restoring, the data directory in the snapshot is restored into MySQL's data directory.

LVM can back up not only MySQL but also other databases such as MongoDB. However, LVM is troublesome to use as a hot backup scheme, because you must manually create logical volumes, migrate the data directory, create snapshots, lock the database, and so on. LVM is therefore not a common hot backup scheme.
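To make the LVM workflow concrete, here is a rough sketch of one backup cycle, assuming the data directory lives on a hypothetical logical volume /dev/vg0/mysql_data with spare space in the volume group for a snapshot:

# 1. In one MySQL session, take a global read lock and keep the session open:
#        mysql> FLUSH TABLES WITH READ LOCK;
# 2. From another shell, snapshot the volume holding the data directory:
lvcreate --size 1G --snapshot --name mysql_snap /dev/vg0/mysql_data
# 3. Back in the MySQL session, release the lock:
#        mysql> UNLOCK TABLES;
# 4. Mount the snapshot and archive its contents:
mount /dev/vg0/mysql_snap /mnt/snap
tar -zcvf /home/backup/mysql-lvm.tar.gz -C /mnt/snap .
# 5. Clean up the snapshot:
umount /mnt/snap
lvremove -f /dev/vg0/mysql_snap

The lock only needs to be held for step 2; creating a copy-on-write snapshot takes moments, which is what makes LVM usable for hot backup at all.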

XtraBackup scheme

Because LVM is troublesome, people wanted a dedicated tool for hot backup: that tool is XtraBackup. XtraBackup is a free, open-source database hot backup tool from Percona that can back up InnoDB and XtraDB databases without blocking. XtraBackup does not interrupt ongoing transactions during a backup, and the transaction log records what was written before and during the backup, so no lock is needed.

In addition, XtraBackup can compress backup data, saving the disk space and network bandwidth occupied by backup files. However, when backing up tables that use the MyISAM storage engine, XtraBackup does take a read lock, i.e. data in the table is readable but not writable. That is not a real problem: as mentioned earlier, read locks can be handled with the online hot backup approach. XtraBackup likewise supports both full and incremental backups. Because of this convenience, XtraBackup is generally the hot backup scheme of choice.
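For a sense of the tool, a minimal unencrypted full hot backup with innobackupex looks roughly like this (credentials as used elsewhere in this article); it writes a timestamped backup directory under /home/backup/:

innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 /home/backup/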

For the installation and use of XtraBackup, refer to my other article: Database hot backup artifact - XtraBackup; it will not be repeated here.

Scheduled full hot backup

Systems usually keep both full and incremental backups, so that if one backup has a problem, another is available. Because a full hot backup is time-consuming, it is not performed frequently, only once at some interval, for example at midnight on the first of every month, or at midnight every Monday.

Linux's crontab command can execute specified system commands or shell scripts on a schedule, so a scheduled full hot backup can be achieved with crontab combined with a shell script.

Here is an example. First, write a shell script to perform a full hot backup as follows:

[root@PXC-Node3 ~]# vim full-backup.sh
#!/bin/bash
innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 --stream=xbstream --encrypt=AES256 --encrypt-threads=10 --encrypt-chunk-size 512 --encrypt-key='1K!cNoq&RUfQsY&&LAczTjco' ./ > /home/backup/fullBackupOfMysql.xbstream

Give the script execution permissions:

[root@PXC-Node3 ~]# chmod -R 777 full-backup.sh

Finally, configure crontab. For example, I schedule the script to run every Monday at 00:00, achieving a scheduled full hot backup:

[root@PXC-Node3 ~]# crontab -e
# run the full backup script at 00:00 every Monday
0 0 * * 1 /root/full-backup.sh > /home/backup/full-backup.log 2>&1

XtraBackup full cold restore

With full hot backup covered, let's look at how to restore files backed up with XtraBackup. Restores can only be cold restores; there is no such thing as a hot restore, because restoring onto a running database while users are reading and writing could cause data to overwrite each other and leave the database in a confused state.

So restores are always cold restores. Cold restore was introduced earlier, but a cold restore with XtraBackup is easier, without the hassle of restoring a plain cold backup.

First, shut down the MySQL service:

[root@PXC-Node3 ~]# systemctl stop mysqld

Clear the data directory and the table partition data directories:

[root@PXC-Node3 ~]# rm -rf /var/lib/mysql/*
[root@PXC-Node3 ~]# rm -rf /mnt/p0/data/*
[root@PXC-Node3 ~]# rm -rf /mnt/p1/data/*

Tips: rm is used directly here because this is a demo. In a real running environment it is recommended to use mv to rename the directories to be deleted, and only delete them with rm after the restore is complete and verified, to avoid the tragedy of "deleting the database and running away".

The backup files are compressed, so you need to create a temporary directory to store the extracted files:

[root@PXC-Node3 ~]# mkdir /home/backup/temp

Then use the xbstream command to extract the backup file to this directory:

[root@PXC-Node3 ~]# xbstream -x < /home/backup/fullBackupOfMysql.xbstream -C /home/backup/temp/

Because the backup file was encrypted, the extracted files are still encrypted and need to be decrypted:

[root@PXC-Node3 ~]# innobackupex --decrypt=AES256 --encrypt-key='1K!cNoq\&RUfQsY\&\&LAczTjco' /home/backup/temp

Tips: since & is a special character in the shell, it needs to be escaped with \

Since this is a hot backup, the transaction log may contain unfinished transactions, so uncommitted transactions must be rolled back and committed transactions applied to the data files. Execute the following command:

[root@PXC-Node3 ~]# innobackupex --apply-log /home/backup/temp

After the steps above, the backup files can be restored with the following command:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --copy-back /home/backup/temp

Then give the mysql user ownership of the restored files:

[root@PXC-Node3 ~]# chown -R mysql:mysql /var/lib/mysql/*

The cold restore is now complete. Finally, start the MySQL service and verify that the data is normal:

[root@PXC-Node3 ~]# systemctl start mysqld

Incremental hot backup

An incremental hot backup must be based on a full hot backup, so having covered XtraBackup's full hot backup and full cold restore, we can now practice XtraBackup's incremental hot backup.

Notes: whether the full hot backup used streaming compression or content encryption, it must first be decompressed or decrypted into an ordinary backup directory. Incremental hot backups can likewise use streaming compression and content encryption.

The full backup files were already decompressed and decrypted in the cold restore demonstration, so taking the /home/backup/temp/ backup directory as the base, the incremental hot backup command is as follows:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --host=192.168.190.134 --user=admin --password=Abc_123456 --port=3306 --incremental-basedir=/home/backup/temp/ --incremental /home/backup/increment

- --incremental-basedir: the directory holding the full backup, i.e. which full backup the incremental backup is based on
- --incremental: specifies that an incremental backup is taken
- /home/backup/increment: the directory in which the incremental backup files are stored

The incremental backup's file directory looks like this:

[root@PXC-Node3 ~]# ls /home/backup/increment/2020-01-26_17-02-21
[root@PXC-Node3 ~]# ls /home/backup/increment/2020-01-26_17-32-41/
backup-my.cnf   ibdata1.delta  mysql               sys   tpcc                    xtrabackup_checkpoints  xtrabackup_logfile
ib_buffer_pool  ibdata1.meta   performance_schema  test  xtrabackup_binlog_info  xtrabackup_info
[root@PXC-Node3 ~]#

You can use the du command to compare the directory sizes of the full and incremental hot backups:

[root@PXC-Node3 ~]# du -sh /home/backup/temp/
1.6G    /home/backup/temp/    # size of the full hot backup directory
[root@PXC-Node3 ~]# du -sh /home/backup/increment/2020-01-26_17-32-41/
92M     /home/backup/increment/2020-01-26_17-32-41/    # size of the incremental hot backup directory
[root@PXC-Node3 ~]#

The second incremental backup can then be based not on the full backup but on the first incremental backup, as in this example:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 --incremental-basedir=/home/backup/increment/2020-01-26_17-32-41/ --incremental /home/backup/increment

If streaming compression and content encryption are needed for the incremental backup, just add the relevant parameters, as in this example:

[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --user=admin --password=Abc_123456 --incremental-basedir=/home/backup/increment/2020-01-26_17-32-41/ --incremental --stream=xbstream --encrypt=AES256 --encrypt-threads=10 --encrypt-chunk-size 512 --encrypt-key='1K!cNoq&RUfQsY&&LAczTjco' ./ > /home/backup/increment

Tips: here ./ means that all the contents of the incremental backup are written to the streaming compressed file, and the compressed file is stored under the /home/backup/increment directory.

Scheduled incremental hot backup with a Java program

Usually, incremental hot backups run automatically as a scheduled task, avoiding manual work at fixed times. The full hot backup section showed how to schedule a shell script with Linux's crontab; mainstream programming languages also generally have frameworks or libraries for scheduled tasks.

Take Java as an example: in the Java ecosystem, both Quartz and Spring can implement scheduled tasks, and both use Cron expression syntax. Note, though, that Java's Cron expressions are accurate to the second, which differs from Linux's Cron expressions.
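For example, "midnight every Monday" is written differently in the two syntaxes, because Spring's expression carries an extra seconds field at the front:

# Linux crontab (5 fields): minute hour day-of-month month day-of-week
0 0 * * 1       # every Monday at 00:00

# Spring @Scheduled cron (6 fields): second minute hour day-of-month month day-of-week
0 0 0 * * 1     # every Monday at 00:00:00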

Since Quartz is a little more complex, let's use Spring for simplicity. First create a Spring Boot project; the dependencies in pom.xml are as follows:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

Add the @EnableScheduling annotation to the bootstrap class to turn on the scheduling feature:

@EnableScheduling
@SpringBootApplication
public class IncrementBackupApplication {
    ...
}

Create a file on Linux that records which directory each incremental hot backup should be based on: the first incremental hot backup is based on the full hot backup's directory, and every later incremental hot backup is based on the previous incremental hot backup's directory:

[root@PXC-Node3 ~]# echo '/home/backup/temp/' > /home/backup/increment-backup.cnf

Then write a Java class that performs incremental hot backups on a regular basis:

package com.example.incrementbackup.task;

import lombok.extern.slf4j.Slf4j;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.io.*;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

/**
 * Incremental hot backup scheduled task
 *
 * @author 01
 * @date 2020-01-26
 */
@Slf4j
@Component
public class IncrementBackupTask {

    /**
     * Performs an incremental hot backup every minute.
     * (A real deployment would of course not use such a short interval.)
     */
    @Scheduled(cron = "0 */1 * * * *")
    public void backup() {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd_HH_mm_ss");
        String folderName = LocalDateTime.now().format(formatter);
        String configPath = "/home/backup/increment-backup.cnf";
        try (FileReader fileReader = new FileReader(configPath);
             BufferedReader bufferedReader = new BufferedReader(fileReader)) {
            // read the directory the previous backup was written to
            String basedir = bufferedReader.readLine();
            String cmd = getCmd(basedir, folderName);
            log.info("Starting incremental hot backup. Command executed: {}", cmd);
            // execute the incremental hot backup command
            Process process = Runtime.getRuntime().exec(cmd);
            // wait for the command to finish
            process.waitFor();
            try (FileWriter fileWriter = new FileWriter(configPath);
                 BufferedWriter bufferedWriter = new BufferedWriter(fileWriter)) {
                // update the basedir path used by the next incremental backup
                bufferedWriter.write("/home/backup/increment/" + folderName);
            }
            log.info("Incremental hot backup finished");
        } catch (IOException | InterruptedException e) {
            log.error("", e);
        }
    }

    /**
     * Assembles the innobackupex command parameters
     */
    private String getCmd(String basedir, String folderName) {
        String cmd = "innobackupex --defaults-file=/etc/my.cnf"
                + " --user=admin --password=Abc_123456"
                + " --incremental-basedir=%s --no-timestamp"
                + " --incremental /home/backup/increment/%s";
        return String.format(cmd, basedir, folderName);
    }
}

After writing the above code, package the project into a jar with maven, upload the jar to Linux, and run it with the java -jar command:

[root@PXC-Node3 ~]# java -jar increment-backup-0.0.1-SNAPSHOT.jar

During execution, the task logs the incremental backup command it runs and reports when the backup finishes.

After waiting for the backup to finish, you can see that the contents of the increment-backup.cnf file have also been updated:

[root@PXC-Node3 ~]# cat /home/backup/increment-backup.cnf
/home/backup/increment/2020-01-26_21_12_00
[root@PXC-Node3 ~]#

The resulting backup directory structure is also normal:

[root@PXC-Node3 ~]# ls /home/backup/increment/2020-01-26_21_12_00
backup-my.cnf   ibdata1.delta  mysql               sys   tpcc                    xtrabackup_checkpoints  xtrabackup_logfile
ib_buffer_pool  ibdata1.meta   performance_schema  test  xtrabackup_binlog_info  xtrabackup_info
[root@PXC-Node3 ~]#

So far, we have implemented scheduled incremental hot backups of the database in Java. The reason for showing a programming-language implementation is that real enterprise applications may have more complex or personalized requirements that plain shell scripts cannot satisfy, for example sending an e-mail or text message to the people concerned after a backup completes, or letting the execution interval be controlled from a UI. Such requirements call for custom development in a programming language.

Incremental cold restore

As the previous section shows, an incremental hot backup only backs up new data, so the backup directory it produces is much smaller than a full hot backup's. How, then, does XtraBackup restore incremental backup data into the database? It is actually simple: first merge the incremental hot backup data into the full hot backup data, then restore from the merged backup data.

Incremental hot backups can have many backup points, because except for the first one, every incremental hot backup is based on the previous incremental hot backup. When restoring, you can pick any backup point to restore to, but the transaction-log processing steps differ from those of a full cold restore.

The full cold restore demonstrated earlier included a step for processing the transaction log; incremental cold restore has the same step, with some differences. As mentioned above, an incremental hot backup can have multiple backup points, so when restoring to a given backup point you must process the transaction logs of that backup point and every backup point before it, otherwise the data will be inconsistent. As an example, suppose there are three backup points: a full hot backup, then incremental backup 1, then incremental backup 2.

For example, to restore to incremental backup 1, you first process the transaction log of the backup point before it, i.e. the full hot backup. Then you process the transaction log of incremental backup 1 and merge its data into the full hot backup. Only this ensures that the data merged into the full backup point is consistent, and finally you restore from the data in the full hot backup.

Likewise, to restore incremental backup 2, you first process the full hot backup, then incremental backup 1, then incremental backup 2. After the transaction logs of the three backup points have been processed in order from first to last, their data can be merged into the full backup, and finally the data in the full hot backup is restored. And so on for any further backup points.

Next, let's carry out the incremental cold restore. Here are the three backup point directories corresponding to the backup points above:

/home/backup/temp/                           # full hot backup
/home/backup/increment/2020-01-27_10-11-24/  # incremental backup 1
/home/backup/increment/2020-01-27_10-15-11/  # incremental backup 2

Because it is a cold restore, you have to shut down the MySQL service first:

[root@PXC-Node3 ~]# systemctl stop mysqld

In this example we restore the data of backup point "incremental backup 2". As described above, first process the transaction log of the full backup point with the following command:

[root@PXC-Node3 ~]# innobackupex --apply-log --redo-only /home/backup/temp/

--redo-only: specifies that uncommitted transactions are not rolled back, because a transaction uncommitted at this backup point may be committed in the transaction log of the next backup point. Rolling it back would prevent the next backup point from committing it properly.

Then process the transaction log for incremental backup 1 and merge the data from incremental backup 1 into the full backup point:

[root@PXC-Node3 ~]# innobackupex --apply-log --redo-only /home/backup/temp/ --incremental-dir=/home/backup/increment/2020-01-27_10-11-24/

--incremental-dir: specifies the incremental backup directory to be merged into the full backup

Then process the transaction log of incremental backup 2 and merge its data into the full backup point. Since we only restore up to backup point "incremental backup 2", there is no next backup point, so the --redo-only parameter is no longer needed:

[root@PXC-Node3 ~]# innobackupex --apply-log /home/backup/temp/ --incremental-dir=/home/backup/increment/2020-01-27_10-15-11/

As with a full cold restore, you also need to empty the data directory and the table partition data directories:

[root@PXC-Node3 ~]# rm -rf /var/lib/mysql/*
[root@PXC-Node3 ~]# rm -rf /mnt/p0/data/*
[root@PXC-Node3 ~]# rm -rf /mnt/p1/data/*

After completing the above steps, restore the backup files with the following command:

# note that the directory of the full backup point is specified here
[root@PXC-Node3 ~]# innobackupex --defaults-file=/etc/my.cnf --copy-back /home/backup/temp/

Then give the mysql user ownership of the restored directories and files:

[root@PXC-Node3 ~]# chown -R mysql:mysql /var/lib/mysql/*
[root@PXC-Node3 ~]# chown -R mysql:mysql /mnt/p0/data/*
[root@PXC-Node3 ~]# chown -R mysql:mysql /mnt/p1/data/*

The restore is now complete. Finally, start the MySQL service and verify that the data is normal:

[root@PXC-Node3 ~]# systemctl start mysqld
