How MySQL and xtrabackup implement backup and recovery


This article explains how to implement backup and recovery with MySQL's own tools and with xtrabackup. The text is not long, but after reading it you should come away with a working picture of both approaches.

MySQL backup and recovery, and xtrabackup backup

1.1 reasons for backup

Backup is the last line of defense for data security. For any data-loss scenario, a backup cannot necessarily recover 100% of the data (that depends on the backup cycle), but it can at least minimize the loss. Two important metrics measure backup and recovery: the recovery point objective (RPO) and the recovery time objective (RTO). The former focuses on how far back the data can be restored, the latter on how long the restore will take.

1.1.1 Purposes of backup

Disaster recovery: recover and restore corrupted data

Requirements change: due to a change in requirements, the data needs to be restored to the state before the change.

Testing: testing whether new features are available

1.1.2 issues to be considered in backup

How long can you tolerate data loss?

How long will it take to recover the data?

Is it necessary to provide continuous service during recovery?

Is the object to be recovered the whole instance, multiple tables, a single database, or a single table?

1.1.3 Types of backup

Hot backup:

These dynamic backups run while data is being read or modified, with little or no interruption to the ability to transmit or process data. During a hot backup the system remains accessible for both reads and writes.

Cold backup:

These backups are taken while users cannot access the data, so the data can neither be read nor modified. Such offline backups block any activity that uses the data, and therefore do not interfere with the performance of a running system. However, for some applications it is unacceptable to lock users out or completely block access to the data for a long period of time.

Warm backup:

These backups take place while the data can be read, but in most cases the data itself cannot be modified while the backup runs. The advantage of this intermediate type is that end users do not have to be completely locked out. The disadvantage is that the dataset cannot be modified during the backup, which may make it unsuitable for some applications, and blocking modifications during the backup can also cause performance problems.

1.2 how to back up

1.2.1 Cold backup

The simplest backup method is to shut down the MySQL server and then copy and save all the files under the data directory. When you need to restore, copy the directory back to the machine being restored. This method is certainly convenient, but it is of little use in a production environment: all machines are serving traffic, and even slaves often need to serve read-only queries, so shutting MySQL down for a backup is not realistic. The concept corresponding to cold backup is hot backup, a backup taken without affecting the service MySQL provides.

Cold backup requires stopping the business for the duration of the backup.

1.2.2 Snapshot backup

The first hot backup method to introduce is snapshot backup, which backs up the database through a snapshot feature supported by the file system or volume manager. The idea is to put all the database files on the same partition and then snapshot that partition. On Linux this is done with LVM (Logical Volume Manager). LVM creates snapshots using copy-on-write, producing a logical copy of an entire volume; this is similar to MVCC in the InnoDB storage engine, except that an LVM snapshot works at the file-system level, while MVCC works at the database level and only for InnoDB.

LVM keeps a snapshot reserved area: if data on the original volume changes, LVM guarantees that the affected block is copied to the reserved area before the change is written. In other words, all consistent old data as of the snapshot point is retained in the snapshot area. For a database with few updates, the snapshot can be very small.

For MySQL, to use snapshot backup you need to put the data files and log files on one logical volume and then snapshot that volume. Because snapshots can only be taken locally, if the local disk is corrupted the snapshot is corrupted too. Snapshot backup is therefore best suited to protecting against mistaken operations: it can quickly restore the database to the point in time at which the snapshot was taken, and combined with the binary log it can restore to a specified point in time. A minimal procedure is sketched below.
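
What follows is a minimal sketch of an LVM-based snapshot backup, assuming the data directory sits on a logical volume /dev/vg01/mysql and that the volume group has free space reserved for the snapshot; the volume names, sizes and paths are illustrative, not taken from the article.

# Hold a global read lock in one client session while the snapshot is created,
# so the data files on disk are consistent; the mysql client's built-in "system"
# command runs the shell command while the session (and its lock) stays open.
mysql -uroot -p123 <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql_snap /dev/vg01/mysql
UNLOCK TABLES;
EOF

# Mount the snapshot read-only, copy the files away, then drop the snapshot.
mkdir -p /mnt/mysql_snap
mount -o ro /dev/vg01/mysql_snap /mnt/mysql_snap
tar czf /backup/snap_$(date +%F).tar.gz -C /mnt/mysql_snap .
umount /mnt/mysql_snap
lvremove -f /dev/vg01/mysql_snap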

1.2.3 logical backup (text: SQL statement)

Cold backup and snapshot backup are rarely used in production because of their drawbacks; logical and physical backup tools are far more common for MySQL. This section focuses on logical backup. MySQL officially ships the mysqldump logical backup tool, which is good enough for many cases but suffers from slow, single-threaded backups. The community provides a better logical backup tool, mydumper, whose main advantage is multi-threaded backup and therefore faster backup speed; a usage sketch follows.
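
As a rough illustration of the multi-threaded approach, here is a sketch of a mydumper backup and the matching myloader restore; it assumes mydumper/myloader are installed, and the options, database name and paths shown are illustrative rather than taken from the article.

# Dump the world database with 4 worker threads, one compressed file per table
mydumper --user=root --password=123 --database=world \
         --outputdir=/backup/mydumper_$(date +%F) --threads=4 --compress

# Restore the same dump with the companion tool, also multi-threaded
myloader --user=root --password=123 --database=world \
         --directory=/backup/mydumper_2017-12-22 --threads=4 --overwrite-tables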

1.2.4 other common backup methods

Physical backup (binary copy of data file)

Full backup concept

Full data means all the data in the database instance (or all the data in one database).

Full backup is to back up all the data in the database.

mysqldump obtains consistent data as of a single point in time.

Incremental backup (refresh binary log)

Incremental data is the data updated between the last full backup and the next full backup.

For mysqldump, the binlog is the incremental data; a minimal sketch of combining the two follows.
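
A minimal sketch of that full-plus-binlog idea, assuming binary logging is enabled and the binlogs live under /application/mysql/logs/ as in the examples later in this article; file names and paths are illustrative.

# Nightly full backup; --flush-logs rotates the binlog so the increment starts at a clean boundary
mysqldump -A -R --triggers --master-data=2 --single-transaction --flush-logs \
    | gzip > /backup/full_$(date +%F).sql.gz

# During the day, the binlogs written after the full backup are the incremental data;
# flush again so the current binlog is closed, then copy the files to backup storage
mysql -e "flush logs;"
cp /application/mysql/logs/clsn-bin.0000* /backup/binlog/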

1.2.5 introduction of backup tools

1. mysqldump: the simple, easy-to-use logical backup tool that ships with MySQL.

2. mysqlbinlog: the native command for backing up binlogs.

3. xtrabackup: a high-performance physical backup tool developed by Percona.

1.3 introduction to mysqldump backup

The basic process of backup is as follows:

1. Issue FTWRL (FLUSH TABLES WITH READ LOCK) to block reads and writes. 2. Start a consistent snapshot read at this point (effective only for InnoDB tables). 3. Back up the non-InnoDB table data (*.frm, *.MYI, *.MYD, etc.). 4. Release the FTWRL lock once the non-InnoDB tables are backed up. 5. Back up the InnoDB tables one by one under the snapshot. 6. Backup complete.

For the whole process you can refer to a colleague's diagram (not reproduced here); note that it only considers backing up InnoDB tables. In fact, before UNLOCK TABLES completes, the non-InnoDB tables have already been backed up, and the T1, T2 and T3 that follow are all InnoDB tables. In addition, mysqldump in 5.6 uses a savepoint mechanism to release the MDL lock on each table as soon as that table has been dumped, to avoid holding a lock on any table for too long.

1.3.1 mysqldump backup process
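
The original process diagram is not reproduced here, but you can observe the same sequence yourself by switching on the general query log while a dump runs; a minimal sketch, assuming a test instance where briefly enabling the general log is acceptable (the log path and credentials are illustrative):

# Record every statement the server receives, run a dump, then inspect the log
mysql -uroot -p123 -e "set global general_log_file='/tmp/general.log'; set global general_log=ON;"
mysqldump -uroot -p123 -A -R --triggers --master-data=2 --single-transaction > /tmp/all.sql

# The locking, snapshot and savepoint statements issued by mysqldump show up here
grep -E "FLUSH TABLES|CONSISTENT SNAPSHOT|UNLOCK TABLES|SAVEPOINT" /tmp/general.log
mysql -uroot -p123 -e "set global general_log=OFF;"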

1.3.2 commonly used backup parameters

Parameter description

-A  back up all databases
-B  back up the specified database(s); adds the create-database and use-database statements
-R, --routines  back up stored procedures and functions
--triggers  back up triggers
--master-data={1|2}  record the binlog position at the moment of the backup; with 1 it is written as an executable CHANGE MASTER TO statement, with 2 the statement is commented out
--single-transaction  hot backup for the InnoDB engine
-F, --flush-logs  flush (rotate) the binlog
-x, --lock-all-tables  lock all tables in all databases, implemented with a global read lock held for the whole dump
-l, --lock-tables  lock all tables for reading
-d  dump the table structure only
-t  dump the data only
--compact  reduce noise in the output (for debugging)

A complete backup statement. The backup command for the InnoDB engine:
mysqldump -A -R --triggers --master-data=2 --single-transaction | gzip > /opt/all_$(date +%F).sql.gz
The backup command suitable for mixed engines (for example, MyISAM and InnoDB together):
mysqldump -A -R --triggers --master-data=2 | gzip > /opt/all_$(date +%F).sql.gz

1.3.3 The -A parameter

Back up all databases; the backup statement:

mysqldump -uroot -p123 -A > /backup/full.sql

1.3.4 The -B parameter

Back up all the tables under a database.

-B adds the "create database" and "use database" statements to the dump. You can list several database names after -B to back up multiple databases: -B db1 db2

mysqldump -uroot -p123 -B world > /backup/worldb.sql

Statements contained in the dump:

create database if not exists ... use db1 ... drop table if exists ... create table ... insert into ...

When backing up a database without -B, only the tables under that database are dumped and no create-database statement is included.

Backing up a single database without -B (generally used when backing up individual tables):
mysqldump -uroot -p123 world > /backup/world.sql
Back up a single table:
mysqldump -uroot -p123 world city > /backup/world_city.sql
Single-table backups give the finest granularity and the fastest restore of that data.
Back up multiple tables:
mysqldump db1 table1 table2 table3 > db1.sql
mysqldump db2 table2 table3 > db2.sql

Per-database backup, done with a for loop:

mysqldump -uroot -p'mysql123' -B mysql > ...
mysqldump -uroot -p'mysql123' -B mysql_utf8 > ...
...

Per-database backup script:

for name in `mysql -e "show databases;" | sed 1d`
do
    mysqldump -uroot -p'mysql123' -B $name > /backup/${name}.sql
done

1.3.5 The --master-data={1|2} parameter

Records the binlog position at the moment of the backup.

With 2 the CHANGE MASTER statement is written as a comment; with 1 it is not commented out and will be executed on import (useful for setting up master-slave replication).

[root@db02 logs]# mysqldump -B --master-data=2 clsn > /opt/t.sql
[root@db02 logs]# sed -n '22p' /opt/t.sql
CHANGE MASTER TO MASTER_LOG_FILE='clsn-bin.000005', MASTER_LOG_POS=344;

1.3.6 The --single-transaction parameter

Hot backup for the InnoDB engine.

Only the InnoDB engine can use this parameter; it opens a single transaction for the backup, implemented with the engine's consistent-snapshot (MVCC) technology.

With a transactional engine, a consistent backup can be obtained without locking tables.

This relies on the isolation property of ACID; in production, 99% of installations use the InnoDB transactional engine.

Although hot backup is supported, that does not mean you can back up at any time: avoid busy periods and generally schedule backups at night.

The backup commands for the innodb engine are as follows:

mysqldump -A -B -R --triggers --master-data=2 --single-transaction | gzip > /opt/all.sql.gz

1.3.7 The --flush-logs / -F parameter

Flush (rotate) the binlog.

Back up the database at 0: 00 every night

mysqldump -A -B -F > /opt/$(date +%F).sql
[root@db02 ~]# ll /application/mysql/logs/
-rw-rw---- 1 mysql mysql 168 Jun 21 12:06 clsn-bin.000001
-rw-rw---- 1 mysql mysql 168 Jun 21 12:06 clsn-bin.000002
-rw-rw---- 1 mysql mysql 210 Jun 21 12:07 clsn-bin.index
Hint: the binlog is flushed once for each database that is dumped.

1.3.8 compressed backup

Compress backup command:

mysqldump -B --master-data=2 clsn | gzip > /opt/t.sql.gz

Decompress:

zcat t.sql.gz > t1.sql
gzip -d t.sql.gz      # decompresses and removes the .gz package
gunzip all_2017-12-22.sql.gz

A complete backup statement

The backup command for the InnoDB engine:
mysqldump -A -R --triggers --master-data=2 --single-transaction | gzip > /opt/all.sql.gz
The command suitable for mixed engines (for example, MyISAM and InnoDB together):
mysqldump -A -R --triggers --master-data=2 | gzip > /opt/all_$(date +%F).sql.gz

1.3.9 restore practice with Mysqldump backup

Back up the innodb engine database clsn and compress:

mysqldump -B -R --triggers --master-data=2 clsn | gzip > /opt/all_$(date +%F).sql.gz

Delete the clsn database artificially:

[root@db02 opt]# mysql -e "drop database clsn;"
[root@db02 opt]# mysql -e "show databases;"

Restore the database:

Use gzip to decompress first:
gzip -d xxx.gz
shell> mysql
mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
mysql> source /backup/all_2017-12-22.sql

Verify the data:

[root@db02 opt]# mysql -e "use clsn;select * from test;"

1.4 [simulated] incremental recovery enterprise case

1.4.1 prerequisites:

1. Full backup (mysqldump) is available.

2. In addition to the full backup, all binlog incremental logs generated after the full backup are available.

1.4.2 Environmental preparation

(1) prepare the environment:

drop database clsn;
CREATE DATABASE clsn;
USE `clsn`;
CREATE TABLE `test` (
  `id` int(4) NOT NULL AUTO_INCREMENT,
  `name` char(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8;
INSERT INTO `test` VALUES (1,'clsn'),(2,'znix'),(3,'inca'),(4,'zuma'),(5,'kaka');

View the created data

mysql> select * from test;
+----+------+
| id | name |
+----+------+
|  1 | clsn |
|  2 | znix |
|  3 | inca |
|  4 | zuma |
|  5 | kaka |
+----+------+
5 rows in set (0.00 sec)

(2) simulated environment:

mkdir -p /data/backup
date -s "2017-12-22"

Full backup:

mysqldump -B --master-data=2 --single-transaction clsn | gzip > /data/backup/clsn_$(date +%F).sql.gz

Simulation increment:

Mysql-e "use clsn;insert into test values (6 select use clsn;insert into test values);" mysql-e "select * from clsn.test;"

(3) simulated erroneous deletion of data:

Date-s "11:40 on 2017-12-22" mysql-e "drop database clsn;show databases;"

Ten minutes later the problem was discovered: the database had been dropped.

1.4.3 restore data preparation

(1) use iptables firewall to block all application writes.

[root@clsn ~]# iptables -I INPUT -p tcp --dport 3306 ! -s 172.16.1.51 -j DROP
(2) find the binlog position recorded in the full backup clsn_2017-12-22.sql:
sed -n '22p' clsn_2017-12-22.sql
(3) extract the incremental changes from the binlog starting at that position:
mysqlbinlog -d clsn --start-position=339 clsn-bin.000008 -r bin.sql

Logs that need to be restored:

1. clsn_2017-12-22.sql
2. bin.sql
Check bin.sql for the accidental drop and remove that statement before replaying:
grep -i drop bin.sql
sed -i '/^drop.*/d' bin.sql

1.4.4 data recovery

Recover data

[root@db02 backup]# mysql < clsn_2017-12-22.sql
[root@db02 backup]# mysql clsn -e "select * from test;"
+----+------+
| id | name |
+----+------+
|  1 | clsn |
|  2 | znix |
|  3 | inca |
|  4 | zuma |
|  5 | kaka |
+----+------+
5 rows in set (0.00 sec)

Restore incremental data:

[root@db02 backup]# mysql clsn < bin.sql
[root@db02 backup]# mysql clsn -e "select * from test;"
+----+------+
| id | name |
+----+------+
|  1 | clsn |
|  2 | znix |
|  3 | inca |
|  4 | zuma |
|  5 | kaka |
|  6 | haha |
|  7 | hehe |
+----+------+
7 rows in set (0.00 sec)

Recovery complete.

Adjust the iptables to allow user access.

1.4.5 What if there is more than one binlog?

mysqlbinlog -d clsn --start-position=339 clsn-bin.000009 clsn-bin.000010 -r bin1.sql
mysql clsn < bin1.sql

1.6.1 Large data volume, small amount of change

(1) The cost of a full backup is high. mysqldump plus binlog implements full-plus-incremental backup, but the drawback is that the recovery time cost is higher than the backup time cost. (2) xtrabackup: you can take a full backup at long intervals and incrementals the rest of the time, though the space cost of full backups is high. If the data volume is in the range of roughly 30GB up to the TB level, xtrabackup is recommended.

1.6.2 Small data volume, large amount of change

Only the cost of time needs to be considered.

Full backups alone are enough, and either tool can be chosen; xtrabackup has the lower recovery cost.

1.6.3 Both the data volume and the amount of change are large

Both the time cost and the space cost should be considered.

When the data volume reaches the PB level or higher (for example, Facebook), mysqldump may again become the first choice because it takes up less space, but the technical cost is high and usually requires secondary development on top of mysqldump (this is the approach favored by companies with very large data volumes).

1.7 The xtrabackup backup software

Percona company's official website https://www.percona.com/

1.7.1 introduction to Xtrabackup

Xtrabackup is a free database hot backup tool open-sourced by Percona. It can back up InnoDB and XtraDB storage engine databases without blocking (MyISAM backups still need a table lock). mysqldump is a logical backup whose biggest drawback is slow backup and recovery; for databases larger than about 50GB, mysqldump is no longer suitable.

There are 4 executable files after Xtrabackup installation, among which two important backup tools are innobackupex and xtrabackup.

1) xtrabackup: backs up InnoDB tables only and does not interact with the MySQL server.
2) innobackupex: a Perl script that wraps xtrabackup and can back up InnoDB and MyISAM at the same time, but a global read lock is required while backing up MyISAM.
3) xbcrypt: encryption and decryption tool for backups.
4) xbstream: streaming packaging and transfer tool, similar to tar.
5) As a physical backup tool, at the same data volume it performs much better than a logical backup, especially when the data volume is large.

1.7.2 Xtrabackup benefits

1) Fast backup and reliable physical backup

2) the backup process will not interrupt the transaction in progress (no need to lock the table)

3) can save disk space and traffic based on compression and other functions

4) automatic backup check

5) Fast restore speed

6) backups can be streamed and transferred to another machine

7) backs up data without adding much load to the server

8) as a physical backup tool it performs much better than logical backups at the same data volume, from dozens of gigabytes up to the TB level, and it also has an advantage when restoring data

1.7.3 backup principle

Copy data files, copy data pages

Hot backup can be implemented for innodb tables.

(1) While the database still has modifications in progress, the data files are copied directly, so the backed-up data is not consistent with the current state of MySQL. (2) The redo and undo generated during the backup are captured as well. (3) For the restore, as long as the LSN of the backed-up data pages can be matched against the redo LSN, applying redo and then undo produces consistent data.

For MyISAM tables, the tables are locked automatically and the files are copied.

At the start of the backup, a background detection thread is started to watch the MySQL redo log in real time; as soon as new log writes appear they are recorded in the background log file xtrabackup_log. Then the InnoDB data files and the system tablespace (ibdataX) are copied. When that copy finishes, FLUSH TABLES WITH READ LOCK is executed, the .frm, .MYI, .MYD and similar files are copied, then UNLOCK TABLES is executed, and finally the xtrabackup_log capture is stopped.

1.7.4 installation of xtrabackup

Installation dependency

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
yum -y install perl perl-devel libaio libaio-devel perl-Time-HiRes perl-DBD-MySQL
Download and install the software:
wget https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.4.4/binary/redhat/6/x86_64/percona-xtrabackup-24-2.4.4-1.el6.x86_64.rpm
yum -y install percona-xtrabackup-24-2.4.4-1.el6.x86_64.rpm

1.8 xtrabackup practices

1.8.1 full backup and recovery

During the prepare phase, xtrabackup starts its embedded InnoDB instance, replays the xtrabackup log (xtrabackup_log), applies the committed transaction changes to the InnoDB data files / tablespaces, and rolls back uncommitted transactions (a process similar to InnoDB crash recovery).

Backup

Create a backup directory

mkdir -p /backup
The first full backup:
innobackupex --defaults-file=/etc/my.cnf --user=root --password=123 --socket=/application/mysql/tmp/mysql.sock --no-timestamp /backup/xfull

Preparation before recovery

Preparation before restoring the data (merging xtrabackup_log_file with the backed-up physical files)

innobackupex --apply-log --use-memory=32M /backup/xfull/
View the checkpoints after the merge; once backup_type has changed to full-prepared the backup is recoverable:
[root@db02 full]# cat xtrabackup_checkpoints
backup_type = full-prepared
from_lsn = 0
to_lsn = 4114824
last_lsn = 4114824
compact = 0
recover_binlog_info = 0

Destroy the database data file

[root@db02 full]# cd /application/mysql/data/
[root@db02 data]# ls
auto.cnf  clsn  db02.err  db02.pid  ib_logfile0  ib_logfile1  ibdata1  ibdata2  mysql  mysql-bin.000001  mysql-bin.000002  mysql-bin.index  oldboy  performance_schema  world
[root@db02 data]# \rm -rf ./*
[root@db02 data]# ls
[root@db02 data]# killall mysqld

Recovery method

Method 1: copy the backup file directly

cp -a /backup/xfull/* /application/mysql/data/
chown -R mysql.mysql /application/mysql/data

Method 2: restore using the innobackupex command (recommended)

[root@db02 mysql]# innobackupex --copy-back /backup/xfull
[root@db02 mysql]# chown -R mysql.mysql /application/mysql/
Note: whichever recovery method is used, the owner and group must be changed back so they match the MySQL program.
[root@db02 data]# cd /application/mysql/data/
[root@db02 data]# ls
clsn  ib_logfile0  ib_logfile1  ibdata1  ibdata2  ibtmp1  mysql  oldboy  performance_schema  world  xtrabackup_binlog_pos_innodb  xtrabackup_info
Start the database:
[root@db02 data]# /etc/init.d/mysqld start

1.8.2 incremental backup and recovery

The "incremental" processing in the process of innobackupex incremental backup is mainly that it is still a full copy (full backup) for myisam and other storage engines relative to innodb.

The process of "incremental" backup is mainly by copying "pages" with changes in innodb (these changed data pages mean that the LSN of "pages" is greater than the LSN given in xtrabackup_checkpoints). Incremental backup is based on complete, the first additional data must be based on the previous complete, and then each additional is based on the last addition, and finally achieve a consistent increase. The process of incremental backup is as follows, which is similar to the complete process, except in step 2.

Where do incremental backups increment?

Increments based on the previous backup.

The redo log consists of two files by default, with a fixed size; they are written in a round-robin fashion, are not kept permanently, and may be overwritten at any time.

Note: never make a backup when the business is busy.

What is backed up?

1. You can use binlog as an increment

2. xtrabackup's own incremental backup copies the data pages that have changed since the last backup, and also captures the redo and undo generated while the backup runs.

How to back up

1. Take the first full backup:
innobackupex --user=root --password=123 --no-timestamp /backup/xfull
Modify the original database: change the row for Xiaohong and commit.
2. Take an incremental backup:
innobackupex --user=root --password=123 --incremental --no-timestamp --incremental-basedir=/backup/xfull/ /backup/xinc1

How to recover

1. First apply the log to the full backup (--apply-log with --redo-only, so that no rollback is performed yet):

innobackupex --apply-log --redo-only /backup/xfull/

2. Merge increments into complete (consistent merge)

innobackupex --apply-log --incremental-dir=/backup/xinc1 /backup/xfull/
innobackupex --apply-log /backup/xfull

3. Restore after the completion of the merge

Method 1: copy the backup file directly

cp -a /backup/xfull/* /application/mysql/data/
chown -R mysql.mysql /application/mysql/data

Method 2: restore using the innobackupex command (recommended)

[root@db02 mysql]# innobackupex --copy-back /backup/xfull
[root@db02 mysql]# chown -R mysql.mysql /application/mysql/
Note: whichever recovery method is used, the owner and group must be changed back so they match the MySQL program.

1.8.3 Database backup strategy

Take a full backup once a week on Sunday; take a daily incremental backup from Monday to Saturday, rotating the cycle weekly.

xfull: apply --apply-log --redo-only, so that last-lsn equals the LSN at which Monday's increment starts.
xinc1: merge Monday's increment into the full backup with --apply-log --redo-only, so that last-lsn equals the LSN at which Tuesday's increment starts.
xinc2: merge Tuesday's increment into the full backup with --apply-log --redo-only, so that last-lsn equals the LSN at which Wednesday's increment starts.
xinc3: merge Wednesday's increment into the full backup with --apply-log --redo-only, so that last-lsn equals the LSN at which Thursday's increment starts.
xinc4: merge Thursday's increment into the full backup with --apply-log --redo-only, so that last-lsn equals the LSN at which Friday's increment starts.
xinc5: merge Friday's increment into the full backup with --apply-log --redo-only, so that last-lsn equals the LSN at which Saturday's increment starts.
xinc6: merge Saturday's increment into the full backup with --apply-log; the backup is then ready for recovery.
A scripted sketch of this weekly cycle follows.
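
Below is a minimal scripted sketch of this cycle, assuming the /backup/xfull and /backup/xinc1 ... /backup/xinc6 layout used in this section and that the script runs from cron once a day; the credentials and the decision to clear last week's files before the new full backup are illustrative.

#!/bin/bash
# Weekly xtrabackup cycle: full backup on Sunday, incrementals Monday through Saturday.
USER=root; PASS=123; BASE=/backup

day=$(date +%u)     # 1 = Monday ... 7 = Sunday
if [ "$day" -eq 7 ]; then
    rm -rf $BASE/xfull $BASE/xinc*        # start a new weekly cycle
    innobackupex --user=$USER --password=$PASS --no-timestamp $BASE/xfull
else
    if [ "$day" -eq 1 ]; then
        prev=$BASE/xfull                  # Monday's increment is based on the full backup
    else
        prev=$BASE/xinc$((day - 1))       # later increments are based on the previous day's
    fi
    innobackupex --user=$USER --password=$PASS --incremental \
        --no-timestamp --incremental-basedir=$prev $BASE/xinc$day
fi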

1.8.4 case analysis of real production

Background: the core system of a logistics company's website; the data volume is 220GB and the daily change volume is 100MB-200MB.

Backup scheme: xtrabackup complete + increment

Backup strategy (crontab):

1. Take the full backup at 00:00 on Saturday:

00 00 * * 6 zjs_full.sh

2. Monday to Friday and Sunday take an incremental backup, each based on the previous day's backup:

00 00 * * 0-5 zjs_inc.sh

Failure scenario:

On Wednesday, at two o'clock in the afternoon, the developer mistakenly deleted a table zjs_base, about 10G.

Project responsibilities:

1) specify a recovery plan and take advantage of existing backups

2) restore erroneous deletion of data

3) formulate the specification of operation and maintenance and development process.

Recovery process:

a) Prepare last Saturday's full backup. b) Merge the Sunday, Monday and Tuesday increments into it. c) Restore that data into a test database; its state is now Wednesday at 01:00. d) The state that needs to be recovered is around 14:00. e) Replay the binlog from 01:00 up to the point just before the table was dropped. f) Export the dropped table zjs_base, restore it into the production database, and verify data availability and integrity. g) Start the application and reconnect it to the database.

Summary: the mistakenly deleted table was restored after 30 minutes. The service was stopped for a total of 40 minutes.

1.8.5 Fault recovery summary

Recovery approach: 1. First make sure all applications are disconnected, to keep the data safe. 2. Check whether a backup usable for recovery exists. 3. Design a simple, fast and safe recovery plan, and prepare fallbacks for unexpected problems. Concrete recovery process: 1. Prepare last Saturday's full backup with --apply-log --redo-only. 2. Merge the increments: Sunday, Monday and Tuesday with --apply-log --redo-only, then Wednesday's with --apply-log. 3. Restore the merged data into a test database; the data state is Wednesday 01:00, while the state to be recovered is around 14:00. 4. Replay the binlog from 01:00 up to the event position just before zjs_base was dropped. 5. Export the dropped table zjs_base, restore it into the production database, and verify availability and integrity of the data. 6. Start the application and reconnect it to the database. How long will the recovery window take? An estimated 3 hours, depending on recovery, verification and unexpected issues. How long is the business stopped? 6 hours? More? Less?

1.8.6 [simulated] production accident recovery

Data creation phase

1. Create the directory required for backup

mkdir full inc1 inc2

2. Full backup on Sunday

innobackupex --user=root --password=123 --no-timestamp /backup/xbackup/full/

3. Simulated data change

use oldboy;
create table test (id int, name char(20), age int);
insert into test values (8,'xxx',18);
insert into test values (9,'xxx',19);
commit;

4. Incremental backup on Monday

innobackupex --user=root --password=123 --incremental --no-timestamp --incremental-basedir=/backup/xbackup/full/ /backup/xbackup/inc1

5. Simulated data change

use oldboy;
insert into test values (8,'xxx',18);
insert into test values (9,'xxx',19);

6. Incremental backup on Tuesday

innobackupex --user=root --password=123 --incremental --no-timestamp --incremental-basedir=/backup/xbackup/inc1 /backup/xbackup/inc2

Then insert a new row operation

use oldboy;

insert into test values (10,'xxx',19);

insert into test values (11,'xxx',10);

commit;

Simulated misoperation accident

Simulated scenario: the test table is dropped by mistake at 14:00 on Tuesday.

use oldboy;
drop table test;

Prepare to recover data

1. Prepare xtrabackup backup, merge backup

innobackupex --apply-log --redo-only /backup/xbackup/full
innobackupex --apply-log --redo-only --incremental-dir=/backup/xbackup/inc1 /backup/xbackup/full
innobackupex --apply-log --incremental-dir=/backup/xbackup/inc2 /backup/xbackup/full
innobackupex --apply-log /backup/xbackup/full

2. Confirm the starting point of binlog and prepare to intercept binlog.

cd /backup/xbackup/inc2/
cat xtrabackup_binlog_info
mysql-bin.000001    1121

3. Intercept the binlog up to just before the drop operation

mysqlbinlog --start-position=1121 /tmp/mysql-bin.000003
Find the event and position number just before the drop, then intercept the log up to that point. If that position is 1437:
mysqlbinlog --start-position=1121 --stop-position=1437 /tmp/mysql-bin.000003 > /tmp/incbinlog.sql
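
One quick way to locate that stop position is to decode the binlog and look at the event immediately before the DROP; a small sketch using the same file names as above (the grep patterns are illustrative):

# The "# at NNN" / "end_log_pos NNN" of the event just before the DROP statement
# is the value to pass as --stop-position.
mysqlbinlog --start-position=1121 /tmp/mysql-bin.000003 \
    | grep -i -B 8 "drop table" | grep -E "^# at |end_log_pos"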

4. Close the database and back up the binary log

/etc/init.d/mysqld stop
cd /application/mysql/data/
cp mysql-bin.000001 /tmp

5. Delete all MySQL data

cd /application/mysql/data/
rm -rf *

Recover data

1. Restore the fully backed up data to the data directory

innobackupex --copy-back /backup/xbackup/full/
chown -R mysql.mysql /application/mysql/data/
/etc/init.d/mysqld start

2. Replay the binlog records

mysql> set sql_log_bin=0;
mysql> source /tmp/incbinlog.sql

1.8.7 xtrabackup export

(1) Exporting a table happens during the prepare phase of the backup, so once the full backup is complete you can export tables by adding the --export option to the prepare step:

innobackupex --apply-log --export /path/to/backup

This command creates a file ending in .exp for the tablespace of each InnoDB table; these .exp files can be used to import the tables on other servers.

(2) "Import" Table to import an innodb table from another mysql CVM, you need to create a table on the current CVM with the same structure as the original table, and then import the table:

mysql> CREATE TABLE mytable (...) ENGINE=InnoDB;

Then delete the tablespace for this table:

mysql> ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;

Next, copy the mytable.ibd and mytable.exp files for the table from the server where they were exported into the data directory of the current server, and then "import" them (remember to fix the file permissions):

mysql> ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;

Example:

innobackupex --user=root --password=123 --no-timestamp /backup/xbackup/full/
Go to the backed-up database directory:
[root@db02 full]# ls
db.opt  PENALTIES.frm  PENALTIES.ibd  PLAYERS.frm  PLAYERS.ibd
[root@db02 full]# pwd
/backup/xbackup/full/
Export the tables:
[root@db02 full]# innobackupex --apply-log --export /backup/xbackup/full/
[root@db02 full]# ls
db.opt  PENALTIES.cfg  PENALTIES.exp  PENALTIES.frm  PENALTIES.ibd  PLAYERS.cfg  PLAYERS.exp  PLAYERS.frm  PLAYERS.ibd
Create a table with the same structure:
CREATE TABLE `PLAYERS` (
  `PLAYERNO` int(11) NOT NULL,
  `NAME` char(15) NOT NULL,
  `INITIALS` char(3) NOT NULL,
  `BIRTH_DATE` date DEFAULT NULL,
  `SEX` char(1) NOT NULL,
  `JOINED` smallint(6) NOT NULL,
  `STREET` varchar(30) NOT NULL,
  `HOUSENO` char(4) DEFAULT NULL,
  `POSTCODE` char(6) DEFAULT NULL,
  `TOWN` varchar(30) NOT NULL,
  `PHONENO` char(13) DEFAULT NULL,
  `LEAGUENO` char(4) DEFAULT NULL,
  PRIMARY KEY (`PLAYERNO`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Copy the exported files into the target database directory:
[root@db02 full]# cp PLAYERS.ibd PLAYERS.exp /application/mysql/data/backup/
cp: overwrite `/application/mysql/data/backup/PLAYERS.ibd'? y
Restore the data:
mysql> ALTER TABLE backup.PLAYERS DISCARD TABLESPACE;
Query OK, 0 rows affected (0.00 sec)

1.8.8 innobackupex parameter description

Parameter description

-- compress this option represents a backup of compressed innodb data files.

-- compress-threads this option represents the number of worker threads compressed in parallel.

-- compress-chunk-size this option represents the size of each compressed thread worker buffer, in bytes. The default is 64K.

--encrypt this option encrypts the backup of InnoDB data files using the algorithm given as ENCRYPTION_ALGORITHM. Currently the supported algorithms are AES128, AES192 and AES256.

-- encrypt-threads this option represents the number of worker threads encrypted in parallel.

-- encrypt-chunk-size this option represents the size of each encrypted thread worker buffer, in bytes. The default is 64K.

--encrypt-key this option passes an encryption key of the appropriate length directly on the command line; it is not recommended, because the key ends up recorded in the command line history.

--encryption-key-file this option reads the key from a file, which must be a simple binary or text file; an encryption key can be generated with: openssl rand -base64 24

-- include this option indicates that the name of the table [db.tb] is matched using a regular expression, requiring it to be specified to match the full name of the table to back up, that is, databasename.tablename.

-- user this option indicates a backup account.

-- password this option represents the backup password.

-- port this option indicates the port where the database is backed up.

-- host this option indicates the address of the backup database.

--databases this option accepts database names; to specify multiple databases, separate them with spaces, such as "xtra_test dba_test". You can also specify a single table of a database, such as "mydatabase.mytable". For InnoDB engine tables this option has no effect: all InnoDB tables are still backed up. The option can also take a file as its argument, with one object to back up per line.

-- tables-file this option indicates that the file containing the list of tables is specified in the format database.table, which is passed directly to-- tables-file.

-- socket this option indicates the location of the mysql.sock so that the backup process can log in to mysql.

-- no-timestamp this option can indicate that instead of creating a timestamp directory to store backups, specify to the backup folder you want.

--ibbackup this option specifies which xtrabackup binary to use; IBBACKUP-BINARY is the command used to run Percona XtraBackup. It is useful when the xtrabackup binary is not in your search path or working directory. If the option is not specified, innobackupex determines the binary to use automatically.

-- slave-info this option means to use when backing up slave, printing out the name of master and binlog pos, and also writing this information to the xtrabackup_slave_info file with the command of change master. You can start a slave library based on this backup.

--safe-slave-backup this option is used to ensure a consistent replication state: it stops the SQL thread and starts the backup only when Slave_open_temp_tables in SHOW STATUS is 0. If no temporary tables are open, the backup starts immediately; otherwise it waits for the SQL thread to close the open temporary tables. If Slave_open_temp_tables is still not 0 after --safe-slave-backup-timeout (default 300 seconds), the backup fails; the slave SQL thread is restarted when the backup completes.

-- kill-long-queries-timeout this option represents the number of seconds to wait between the start of FLUSH TABLES WITH READ LOCK execution and the time kill drops the queries that block it. The default value is 0 and no queries are kill. Using this option, xtrabackup requires Process and super permissions.

-- kill-long-query-type this option indicates the type of kill. The default is all. Optional select.

-- ftwrl-wait-threshold this option indicates that a long query is detected in seconds and represents the threshold of the long query.

--ftwrl-wait-query-type this option indicates which kind of queries are allowed to complete before the global lock is taken. The default is ALL; UPDATE is also possible.

-- galera-info this option means that a file xtrabackup_galera_info file containing the status of the local node at the time the backup was created is generated, and this option applies only to backup PXC.

-- stream this option indicates the format of streaming backup. After backup is completed, it will be in the specified format to STDOUT. Currently, only tar and xbstream are supported.

-- defaults-file this option specifies which file to read the MySQL configuration from, and must be placed at the location of the first option on the command line.

-- defaults-extra-file this option specifies which additional file to read the MySQL configuration from before the standard defaults-file, which must be the location of the first option on the command line. A profile that is typically used to store the user name and password of the backup user.

--defaults-group this option represents the group read from the configuration file, used when backing up with innobackupex on a host running multiple instances.

--no-lock this option turns off the FTWRL table lock. Use it only if all tables are InnoDB and you do not care about the binlog position of the backup. If any DDL statements are being executed, or non-InnoDB tables (including tables under the mysql schema) are being updated, this option should not be used, because the backup data would be inconsistent. If the concern is that acquiring the lock may block, consider --safe-slave-backup, which stops the replication thread instead.

--tmpdir this option takes effect when --stream is specified: it sets where temporary files are stored; the transaction log is first written to a temporary file before being streamed or copied to the remote server. With --stream=tar the xtrabackup_logfile may be placed temporarily in the /tmp directory; if the backup writes a very large xtrabackup_logfile (5GB+), it may fill up /tmp, and --tmpdir can be used to point it at another directory to avoid this problem.

--history this option records the backup history of Percona Server in the PERCONA_SCHEMA.xtrabackup_history table.

-- incremental this option means to create an incremental backup and you need to specify-- incremental-basedir.

-- incremental-basedir this option means that a string parameter is accepted to specify the directory containing full backup as the base directory for incremental backup, used in conjunction with-- incremental.

-- incremental-dir this option represents the directory for incremental backups.

-- incremental-force-scan this option means that data pages in all incremental backups are forced to be scanned when an incremental backup is created.

-- incremental-lsn this option indicates the LSN of the specified incremental backup, used with the-- incremental option.

--incremental-history-name this option gives the name of the history entry stored in PERCONA_SCHEMA.xtrabackup_history on which to base the incremental backup. Percona XtraBackup searches the history table for the most recent successful backup with that name and uses its innodb_to_lsn value as the starting LSN of the incremental backup. It is mutually exclusive with --incremental-history-uuid. If no valid LSN is found, xtrabackup returns an error.

-- incremental-history-uuid this option represents the UUID stored in percona_schema.xtrabackup_history based on specific history of incremental backups.

-- close-files this option closes a file handle that is no longer accessed. When xtrabackup opens a tablespace, it usually does not close the file handle in order to handle DDL operations correctly. If the number of tablespaces is large, this is a way to close file handles that are no longer accessed. Using this option is risky and there is the possibility of inconsistent backups.

-- compact this option means to create a compact backup without a secondary index.

--throttle this option limits the number of I/O operations per second; it is effective only during the backup phase and has no effect during --apply-log or --copy-back.
