
Mysql backup of Relational Database (5)


A brief review of the binary log:

Binary logs are usually used as an important resource for backup, so before backing up, let's review the contents of binary logs discussed in the previous topic.

1. Binary log content

Any operation that causes data on the MySQL server to change is recorded.

The replication function depends on this log.

The slave server accomplishes master-slave replication by reading the master server's binary log and saving the events into its relay log before executing them.

On the slave server, binary logging can usually be turned off to improve performance.

2. Naming of the binary log files:

By default, binary log files such as mysql-bin.000001, mysql-bin.000002 are created in the data directory.

There is also a mysql-bin.index file that records the list of binary log files managed by MySQL.

If you need to delete binary logs, do not delete the files directly; that confuses MySQL's log management (use PURGE instead, see below).

3. MySQL commands for viewing the binary logs:

SHOW MASTER STATUS; -- view the binary log file currently in use

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000007 |    22094 |              |                  |
+------------------+----------+--------------+------------------+

FLUSH LOGS; -- manually rotate the binary log

mysql> FLUSH LOGS;
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000008 |      107 |              |                  |
+------------------+----------+--------------+------------------+
# after the rotation, MySQL creates a new log file, mysql-bin.000008

SHOW BINARY LOGS; -- display the binary log files that have been used

mysql> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |       126 |
| mysql-bin.000002 |      2576 |
| mysql-bin.000003 |           |
| mysql-bin.000004 |       126 |
| mysql-bin.000005 |       126 |
| mysql-bin.000006 |       126 |
| mysql-bin.000007 |     22137 |
| mysql-bin.000008 |       107 |
+------------------+-----------+

SHOW BINLOG EVENTS; -- view the binary log events in tabular form

SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count]

mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000001'\G
*************************** 1. row ***************************
   Log_name: mysql-bin.000001
        Pos: 4
 Event_type: Format_desc
  Server_id: 1
End_log_pos: 107
       Info: Server ver: 5.5.33-log, Binlog ver: 4

4. mysqlbinlog: the binary log file reading tool

Usage: mysqlbinlog [options] log-files
Common options: --start-datetime, --stop-datetime, --start-position, --stop-position

Extract log records from a specified position:

# mysqlbinlog --start-position 15642 --stop-position 15643 /mydata/data/mysql-bin.000001
-- the extracted result is as follows:
# at 15642
#160612 16:56:52 server id 1  end_log_pos 15760  Query  thread_id=6  exec_time=0  error_code=0
use `hellodb`/*!*/;

Explanation of the second line of the extracted result above:
1) time: 160612 16:56:52
2) server ID: server id 1 -- marks which server generated the event; mainly relevant in a dual-master model, where both servers act as each other's master and slave, to ensure binary log events are not replicated in a loop
3) record type: Query
4) thread number: thread_id=6
5) difference between the statement's timestamp and the time it was written to the binary log: exec_time=0
6) event content
7) event position: # at 15642
8) error code: error_code=0
9) event end position: end_log_pos, which is where the next event begins
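mysqlbinlog can also slice the log by time instead of by position; a minimal sketch (the datetime values here are illustrative, not taken from the example above):

# mysqlbinlog --start-datetime='2016-06-12 16:00:00' --stop-datetime='2016-06-12 17:00:00' /mydata/data/mysql-bin.000001 > /tmp/binlog_by_time.sql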

5. Binary log format

Controlled by binlog_format = {statement | row | mixed}

1) statement: statement-based logging; the statements that change the data are recorded

# the disadvantage is that if the inserted value is produced by a function evaluated at execution time, replaying the statement at a different point in time may give a different result.

For example:

mysql> INSERT INTO tb1 VALUES (CURRENT_DATE());

2) row: based on row data

The disadvantage is that sometimes the amount of data is too large.

3) mixed: mixed mode; MySQL decides when to use statement mode and when to use row mode
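To check or change the format at runtime, something like the following works (a minimal sketch; SET GLOBAL requires the SUPER privilege and only affects sessions opened afterwards):

mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_format';
mysql> SET GLOBAL binlog_format = 'ROW';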

6. Summary of binary-log-related parameters:

1) log_bin = {ON | OFF}: can also be set to a file path; controls globally whether binary logging is enabled and where the logs are stored
2) log_bin_trust_function_creators: whether creators of stored functions are trusted not to write unsafe statements into the binary log
3) sql_log_bin = {ON | OFF}: whether binary logging is enabled at the session level; if turned off, operations within the current session are not recorded
4) sync_binlog: whether transaction operations are synchronized to the binary log immediately
5) binlog_format = {statement | row | mixed}: binary log format, discussed separately above
6) max_binlog_cache_size: binary log buffer space, used only to buffer transactional statements
7) max_binlog_stmt_cache_size: statement buffer, space shared by non-transactional and transactional statements
8) max_binlog_size: upper size limit of a binary log file; when it is exceeded, the log is rotated
9) deleting binary logs: PURGE {BINARY | MASTER} LOGS {TO 'log_name' | BEFORE datetime_expr}

Delete binary log example:

mysql> PURGE BINARY LOGS TO 'mysql-bin.000001';
mysql> PURGE BINARY LOGS BEFORE '2016-06-12 00:00:01';

Tips: never put binary logs and data files on the same device and in the same directory.
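A minimal my.cnf sketch that follows this tip and sets a few of the parameters summarized above (the path and the values are illustrative assumptions, not from the original):

[mysqld]
log-bin = /backlogs/binlog/mysql-bin    # binary logs on a device separate from the data files
sync_binlog = 1                         # sync each transaction to the binary log immediately
max_binlog_size = 100M                  # rotate when a log file reaches this size
binlog_format = mixed
expire_logs_days = 7                    # automatically purge binary logs older than 7 days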

Binary log backup and recovery:

Why do backups:

1. Disaster recovery

2. Auditing: what did the database look like at a certain point in the past?

3. Testing

What is the purpose of the backup?

1. Used for recovery, such as erroneous deletion of data, database corruption.

2. After the backup finishes, recovery tests need to be done periodically to ensure the integrity of the backed-up data.

Backup type:

1. Depending on whether the server is online at the time of backup

1) cold backup: the server is offline and the read and write operations cannot be performed.

2) warm backup: a global shared (read) lock is imposed; data can be read but not written

3) Hot backup (hot backup): the database is online, and the reading and writing are still going on.

2. Classified by the data set covered by the backup

1) full backup (full backup)

2) partial backup (partial backup)

3. Classified by the interface used to make the backup

1) physical backup (physical backup): copy the data files directly, then package and archive them

Features:

No additional tools are needed, just plain archiving commands, but portability across platforms can be poor. When the amount of data reaches dozens of gigabytes or more, physical backup is a good choice.
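A minimal sketch of such a physical (cold) backup with plain archiving commands, assuming the data directory is /mydata/data as elsewhere in this article (stop the server first so the copied files are consistent):

# service mysqld stop
# tar czf /backups/mysql_data_$(date +%F).tar.gz /mydata/data
# service mysqld start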

2) logical backup (logical backup): export the data and save it as an SQL script

Features:

Can be edited using a text editor

Easy to import, just read the sql statement directly

The recovery time of logical backup is slow and takes up a lot of space.

The accuracy of floating point numbers cannot be guaranteed.

The index needs to be rebuilt after restoring the database.

4. Classified by whether the entire data set or only changed data is backed up

1) full backup (full backup)

2) incremental backup (incremental backup)

Backs up only the data changed since the last backup (full or incremental)

Saves space.

3) differential backup (differential backup): backs up the data changed since the last full backup

5. For backup strategy, the following factors need to be considered:

Backup mode

Backup practice

Backup cost

Lock time

Length of time

Performance overhead

Recovery cost

Recovery time

The amount of data loss that can be tolerated

6. Backup content

1) data in the database

2) configuration file

3) Code in mysql: stored procedures, stored functions, triggers

4) OS-related configuration files, backup policy scripts in crontab

5) if it is a master-slave replication scene: information related to replication

6) binary log files need to be backed up regularly. Once problems with binary files are found, you need to make a full backup of the data immediately.

7. Common backup tools

1) mysqldump: logical backup tool

InnoDB: hot backup, warm backup

MyISAM, Aria: warm backup

Single-threaded, so backup and recovery are slow

2) mydumper: a multithreaded alternative to mysqldump

3) lvm-snapshot:

Close to a hot backup: a global lock is requested first, then the snapshot is created, and the global lock is released as soon as the snapshot exists

Use cp, tar and other tools for physical backup

Backup and recovery speed is fast

But it is difficult to implement incremental backups, and acquiring the global lock may require waiting for a while, especially on a busy server

4) back up the data through query statements:

mysql> SELECT * FROM tb1 INTO OUTFILE '/path/to/somefile';    -- export the query result to a text file
mysql> LOAD DATA INFILE '/path/from/somefile' INTO TABLE tb1; -- import the text file back into the database

This method does not back up the table definitions, only the data in the tables; as a logical backup it is faster than mysqldump because table structure information is not backed up.

5) Innobase: commercial backup tool, ibbackup

InnoDB hot backup, incremental backup

MyISAM: warm backup; does not support incremental backup, only full backup

It belongs to physical backup and is fast.

6) Xtrabackup: an open source backup tool provided by Percona

InnoDB hot backup, incremental backup

MyISAM: warm backup, does not support incremental backup

7) mysqlhotcopy: close to cold backup, basically not used.

Basic use of the mysqldump tool

1. mysqldump [OPTIONS] database [tables...]

When restoring a dump made in this form, the target database must already exist, so it needs to be created manually first.

--all-databases: back up all databases
--databases db1 db2 ...: back up the specified databases; with this option there is no need to create the databases manually during recovery
--lock-all-tables: request locks on all tables before backing up; a warm backup for MyISAM, InnoDB and Aria
--lock-tables: lock only the tables being backed up; not recommended, because tables modified elsewhere during the backup will not be consistent with the backed-up ones afterwards
--single-transaction: enables hot backup for the InnoDB storage engine by starting one large transaction; based on MVCC it guarantees a consistent view of the tables within that transaction, so --lock-all-tables is not needed

Backing up code:
--events: back up event scheduler code
--routines: back up stored procedures and stored functions
--triggers: back up triggers

Rotating the log:
--flush-logs: rotate the log after the locks have been requested and before the backup starts

Recording the synchronization position (used when restoring content written after the backup point in time; in a master-slave architecture it records the master server's binary log position, which is equivalent to marking a point in time):
--master-data=[0|1|2]
  0: not recorded
  1: recorded as a CHANGE MASTER statement
  2: recorded as a commented-out CHANGE MASTER statement
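Putting these options together, a minimal sketch of an InnoDB hot backup of one database (the host, credentials and file name are illustrative assumptions):

# mysqldump --databases hellodb --single-transaction --routines --triggers --events --flush-logs --master-data=2 -uroot -p > /tmp/hellodb_full.sql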

2. The brief procedure for backing up with mysqldump is as follows:

1) request locks: --lock-all-tables, or use --single-transaction for an InnoDB hot backup

2) rotate the log: --flush-logs

3) select the databases to back up: --databases

4) record the binary log file and position: --master-data=

To request the global read lock manually:
mysql> FLUSH TABLES WITH READ LOCK;

3. Restore:

The recovery process does not need to be written to the binary log

Tips: during recovery, turn off binary logging and disconnect other user connections

4. Backup strategy based on mysqldump

Backup: mysqldump + binary log files

Make a full backup on Sunday, rotating the log while backing up

Monday to Saturday: back up the binary logs

Restore:

Full backup + the events in every binary log file from the backup point up to now
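A minimal sketch of that restore sequence (the file names are illustrative assumptions; session binary logging is turned off so the restore itself is not logged):

mysql> SET SESSION sql_log_bin=0;
mysql> SOURCE /backups/full_sunday.sql;
-- extract the events recorded after the full backup into one script, then replay it
# mysqlbinlog /mydata/data/mysql-bin.000011 /mydata/data/mysql-bin.000012 > /tmp/mon_to_sat.sql
mysql> SOURCE /tmp/mon_to_sat.sql;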

5. Backup and recovery use cases

1) make a full backup of the MySQL databases and restore them

Database and data before backup

mysql> SHOW DATABASES;   -- list all databases on the current database server
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| test               |
+--------------------+

Enable the MySQL binary logging function by editing the configuration file and restarting the mysql service:
# vim /etc/my.cnf
[server]
log-bin=/mydata/data/mysql-bin
# service mysqld restart

For security reasons it is best not to operate with the administrator account during backups, so create a dedicated user for mysqldump backups:
mysql> GRANT SELECT,SHOW DATABASES,LOCK TABLES,RELOAD,EVENT,SUPER ON *.* TO 'sqldump'@'172.16.100.%' IDENTIFIED BY 'sqldump';

Then use mysqldump for backup

# mysqldump --events --master-data=2 --all-databases --lock-all-tables --flush-logs -usqldump -h172.16.100.7 -psqldump > /tmp/all_dump_bak.sql

The privileges granted to the sqldump user above correspond to the options as follows:
SHOW DATABASES --> --all-databases
LOCK TABLES    --> --lock-all-tables
RELOAD         --> --flush-logs
EVENT          --> --events
SUPER          --> --master-data (mainly needed for SHOW MASTER STATUS)

Insert a row into the hellodb.students table, delete the hellodb database, and then restore:
mysql> INSERT hellodb.students (Name,Age,Gender) VALUE ('Samlee GZ',28,'M');
mysql> DROP DATABASE hellodb;

Recovery: at this point binary logging needs to be disabled at the session level so that the restored content is not logged:
mysql> SET SESSION sql_log_bin='OFF';
mysql> SOURCE /tmp/all_dump_bak.sql;

At this point the database is restored to the state before the 'Samlee GZ' row was inserted; the rest is then recovered through the binary log, up to the point just before the database was deleted. Because the --master-data option was set, the following line can be found in the backup file:
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000010', MASTER_LOG_POS=107;
This line marks the position from which the new binary log file starts. By looking at the binary log mysql-bin.000010, you can see the data that still needs to be recovered:
# mysqlbinlog --start-position 107 --stop-position 340 /mydata/data/mysql-bin.000010 > /tmp/binlog107_340.sql
mysql> SOURCE /tmp/binlog107_340.sql;
mysql> SELECT * FROM hellodb.students;
-- in the query result we can see that the last row is the data we inserted ourselves:
| 26 | Samlee GZ | 28 | M | NULL | NULL |

2) use a shell script to schedule automatic backups with mysqldump:

The script reads as follows:

#!/bin/bash
### Using mysqldump to back up all the databases
function backup {
    prefix=$1
    outputdir=$2
    [ -d $outputdir ] || (echo "No outputdir, create one!" && mkdir -p $outputdir)
    now=`/bin/date +'%Y_%b_%d_%H%M'`
    mysqldump='/usr/local/mysql/bin/mysqldump'
    $mysqldump --events --master-data=2 --all-databases --single-transaction --flush-logs -usqldump -h172.16.100.7 -psqldump > $outputdir/${prefix}_${now}.sql
}
function main {
    case $# in
    2)
        backup $1 $2
        ;;
    *)
        echo 'Usage: ./mysqldump_backup.sh prefix outputdir'
        ;;
    esac
}
main $*

Set up the periodic task schedule:

# crontab -e
* * * * * /usr/local/mysql/bin/mysqldump_backup.sh back /tmp/mysqlback

# ll /tmp/mysqlback/    -- a backup is produced every minute
-rw-r--r-- 1 root root 0 Jun 14 11:13 back_2016_Jun_14_1113.sql
-rw-r--r-- 1 root root 0 Jun 14 11:14 back_2016_Jun_14_1114.sql
-rw-r--r-- 1 root root 0 Jun 14 11:15 back_2016_Jun_14_1115.sql
-rw-r--r-- 1 root root 0 Jun 14 11:16 back_2016_Jun_14_1116.sql
-rw-r--r-- 1 root root 0 Jun 14 11:17 back_2016_Jun_14_1117.sql
-rw-r--r-- 1 root root 0 Jun 14 11:18 back_2016_Jun_14_1118.sql
-rw-r--r-- 1 root root 0 Jun 14 11:19 back_2016_Jun_14_1119.sql
-rw-r--r-- 1 root root 0 Jun 14 11:20 back_2016_Jun_14_1120.sql

Lvm-snapshot: backup based on LVM snapshot

About snapshots:

1. The transaction log and the data file must be on the same volume

2. A newly created snapshot volume does not contain any data; all of the data comes from the original volume.

3. Once data on the original volume is modified, the original data is first copied to the snapshot volume; access to the snapshot then comes partly from the snapshot volume and partly from the original volume.

4. While a snapshot is in use, if the amount of modified data exceeds the snapshot volume's capacity, the snapshot volume becomes invalid.

5. The snapshot volume itself is not a backup; it only provides a consistent point-in-time view for access.

Snapshot-based backup is close to a hot backup:

1. Request the global lock of the MySQL before creating the snapshot volume; release the lock after the snapshot is created

2. With the InnoDB engine, even after FLUSH TABLES some changes remain only in the transaction log and have not yet reached the data files. Therefore, both the transaction logs and the data files are required for recovery.

However, after the lock is released, the contents of the transaction log are synchronized into the data files, so the backup is not exactly the state at the moment the lock was released: some transactions that were still open at backup time were completed later, but in the backup data they appear uncommitted and are rolled back during recovery. Those changes therefore have to be caught up with the help of the binary logs.

Considerations for snapshot-based backups:

1. The transaction log and the data file must be in the same volume

2. Request MySQL's global lock before creating the snapshot volume; release the lock after the snapshot has been created

3. After the global lock has been acquired, rotate the log, and record the binary log file and position (manually)

The brief steps for backup and recovery are as follows:

Backup

1. Request a global lock and rotate the log

mysql> FLUSH TABLES WITH READ LOCK;
mysql> FLUSH LOGS;

2. Record the binary log file and position (manually)

# mysql -e 'SHOW MASTER STATUS' > /path/to/original_volume

3. Create a snapshot volume

# lvcreate -L SIZE -s -n SNAP_NAME -p r /path/to/some_lv

4. Release the global lock

5. Mount snapshot volumes and back up

6. After the backup is complete, delete the snapshot volume

Restore:

1. Keep the binary log.

Extract all events after backup to a sql script

2. Restore the data, fix the file owner, group and permissions, and start mysql

3. Do point-in-time restore

4. In a production environment, a full backup is required immediately after a large restore.

Restore the mysql instance using a snapshot volume backup:

Environment: a myvg volume group has been created, and the mydata logical volume is used to store the MySQL data, mounted at /mydata/data

Backup:

1. Create an account dedicated to backup and grant permissions to FLUSH LOGS and LOCK TABLES

mysql> GRANT RELOAD,LOCK TABLES,SUPER ON *.* TO 'lvm'@'172.16.100.%' IDENTIFIED BY 'lvm';
mysql> FLUSH PRIVILEGES;

2. Request a global lock to lock the database, and rotate the log

mysql> FLUSH TABLES WITH READ LOCK;
mysql> FLUSH LOGS;

3. Record backup point

# mysql -ulvm -h172.16.100.7 -plvm -e 'SHOW MASTER STATUS' > /tmp/backp_point.txt
# cat /tmp/backp_point.txt
File              Position  Binlog_Do_DB  Binlog_Ignore_DB
mysql-bin.000010  499

4. Create and mount the snapshot volume, and create a directory to hold the backup files

# lvcreate -L 100M -s -n mydata-snap -p r /dev/myvg/mydata
# mount -t ext4 -o ro /dev/myvg/mydata-snap /mnt/
# mkdir /backups

5. Release lock

# mysql -ulvm -h172.16.100.7 -plvm -e 'UNLOCK TABLES'

## simulate a write operation after the lock has been released
mysql> UNLOCK TABLES;
mysql> CREATE DATABASE samleedb;

6. Copy the files in the snapshot to the backup file storage directory

# cp -ar /mnt/data/ /backups/data-2016-06-14

7. After the backup is complete, delete the snapshot volume to reduce disk I/O

# umount /mnt/
# lvremove /dev/myvg/mydata-snap

Failure simulation recovery: the database storage directory is damaged or the entire server crashes, and the data storage directory is completely deleted

1. Simulate data corruption failure:

[root@mysql ~]# service mysqld stop
Shutting down MySQL. [ OK ]
[root@mysql ~]# rm -rf /mydata/data/*
[root@mysql ~]# service mysqld start
Starting MySQL....The server quit without updating PID file (/mydata/data/mysql.samlee.com.pid) [FAILED]
-- mysql cannot be started at this point

2. Copy the backed-up data files back to the original directory and start mysql

# cp -arp /backups/data-2016-06-14/* /mydata/data/
# service mysqld start
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| samlee             |
| test               |
+--------------------+
-- the samleedb database is not there yet, because it was created after the backup; it has to be restored from the binary log position recorded earlier.

3. Check the previously recorded backup point and restore forward from there

# cat /tmp/backp_point.txt
File              Position  Binlog_Do_DB  Binlog_Ignore_DB
mysql-bin.000010  499
# mysqlbinlog /mydata/data/mysql-bin.000010 --start-position 499 > /tmp/2016_06_14.sql
mysql> SOURCE /tmp/2016_06_14.sql;
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| samlee             |
| test               |
| samleedb           |
+--------------------+

Using Xtrabackup for MySQL backup

Installation:

1. Brief introduction

Xtrabackup is a MySQL database backup tool provided by Percona. According to the official documentation, it is the only open-source tool able to perform hot backups of InnoDB and XtraDB databases. Features:

1) the backup process is fast and reliable

2) the backup process will not interrupt the transaction in progress

3) can save disk space and traffic based on compression and other functions

4) automatic backup verification

5) Fast restore speed

2. Installation and obtaining the installation packages

The latest version is available from the following URL:

https://www.percona.com/downloads/XtraBackup/

Installation

# yum -y install percona-toolkit-2.2.16-1.noarch.rpm
# yum -y install libev-4.15-1.el6.rf.x86_64.rpm
# yum -y install percona-xtrabackup-2.3.2-1.el6.x86_64.rpm

Backup implementation:

1. Full backup:

"if you want to use a user with minimum privileges for backup, you can create such a user based on the following command:"

Usage: innobackupex-- user=DBUSER-- password=DBUSERPASS / path/to/BACKUP-DIR/--user: you need to create a user with minimum permissions: mysql > GRANT RELOAD,LOCK TABLES,REPLICATION CLIENT ON *. * TO 'xtrauser'@'localhost' IDENTIFIED BY' xtrauser';mysql > REVOKE ALL PRIVILEGES,GRANT OPTION FROM 'xtrauser';mysql > GRANT RELOAD,LOCK TABLES,REPLICATION CLIENT ON *. * TO' xtrauser'@'localhost';mysql > FLUSH PRIVILEGES

/ path/to/BACKUP_DIR: the backed-up data storage directory, plus metadata containing some xtrabackup

When backing up, innobackupex invokes xtrabackup to back up all InnoDB tables, copies all files related to the table structure definitions (.frm), the files of MyISAM, MERGE, CSV and ARCHIVE tables, as well as files related to triggers and database configuration information. These files are saved into a directory named after the backup time.

Example: backup

# innobackupex --user=xtrauser --password=xtrauser /tmp/xtrabackup/
160615 09:47:34 Executing UNLOCK TABLES
160615 09:47:34 All tables unlocked
160615 09:47:34 Backup created in directory '/tmp/xtrabackup/2016-06-15_09-47-27'
MySQL binlog position: filename 'mysql-bin.000011', position '655'
160615 09:47:34 [00] Writing backup-my.cnf
160615 09:47:34 [00]        ...done
160615 09:47:34 [00] Writing xtrabackup_info
160615 09:47:34 [00]        ...done
xtrabackup: Transaction log of lsn (1604789) to (1604789) was copied.
160615 09:47:35 completed OK!

When you see the last line, the backup is complete.

While backing up, innobackupex also creates the following files in the backup directory:

1. xtrabackup_checkpoints: the backup type (such as full or incremental), the backup state (such as whether it is already prepared) and the LSN (log sequence number) range information

# cat xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 1604789
last_lsn = 1604789
compact = 0
recover_binlog_info = 0

Each InnoDB page (usually 16k in size) contains a log sequence number, LSN. LSN is the system version number of the entire database system, and the LSN associated with each page can show how the page has changed recently.

In mysql, the block in which the data is stored will have a sequential ID, and if a piece of data is modified, it will be given a new ID. According to these ID, you can mark the old and new degree of the data. Xtrabackup uses these ID for backups and incremental backups.

2. xtrabackup_binlog_info: the binary log file the MySQL server was using at the moment of the backup, and the position within it

# cat xtrabackup_binlog_info
mysql-bin.000011        655

3. xtrabackup_info: contains information about the xtrabackup tool and the backed-up database

# cat xtrabackup_info
uuid = 21c7cde7-329b-11e6-b888-000c2923351b
name =
tool_name = innobackupex
tool_command = --user=xtrauser --password=... /tmp/xtrabackup/
tool_version = 2.3.2
ibbackup_version = 2.3.2
server_version = 5.5.33-log
start_time = 2016-06-15 09:47:31
end_time = 2016-06-15 09:47:34
lock_time = 0
binlog_pos = filename 'mysql-bin.000011', position '655'
innodb_from_lsn = 0
innodb_to_lsn = 1604789
partial = N
incremental = N
format = file
compact = N
compressed = N
encrypted = N

4. backup-my.cnf: the configuration options used by the backup command

# cat backup-my.cnf
# This MySQL options file was generated by innobackupex.

# The MySQL server
[mysqld]
innodb_checksum_algorithm=innodb
innodb_log_checksum_algorithm=innodb
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_files_in_group=2
innodb_log_file_size=5242880
innodb_fast_checksum=false
innodb_page_size=16384
innodb_log_block_size=512
innodb_undo_directory=.
innodb_undo_tablespaces=0

5. xtrabackup_binlog_pos_innodb: the binary log file and its current position, as seen by InnoDB or XtraDB tables

When using innobackupex for backup, you can also use the --no-timestamp option to prevent the command from automatically creating a directory named after the time; in that case innobackupex stores the backup data directly in the BACKUP-DIR directory you specify.
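A minimal sketch using that option (the same illustrative user and a hypothetical directory name):

# innobackupex --user=xtrauser --password=xtrauser --no-timestamp /tmp/xtrabackup/full_backup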

Prepare (prepare) a full backup

In general, after the backup is complete, the data cannot be used for restore operations because the backed up data may contain transactions that have not yet been committed or transactions that have been committed but have not been synchronized to the data file.

Therefore, the data file is still in an inconsistent state at this time. The main role of "preparation" is to make the data file consistent by rolling back uncommitted transactions and synchronizing committed transactions to the data file.

The --apply-log option of the innobackupex command is used for this. The preparation step has to be performed before the restore, and it can be done in advance, right after the backup finishes.

Usage: innobackupex --apply-log /path/to/BACKUP-DIR

Example:
# innobackupex --apply-log /tmp/xtrabackup/2016-06-15_09-47-27/
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 1605142
160615 10:26:54 completed OK!

When you see these last lines, the "preparation" has completed. innobackupex can also take the --use-memory option to specify how much memory it may use, 100MB by default; if enough memory is available, giving the prepare process more memory speeds it up.
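A minimal sketch of a prepare run that grants the process more memory (the amount is an illustrative value):

# innobackupex --apply-log --use-memory=1G /tmp/xtrabackup/2016-06-15_09-47-27/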

Restore data from a full backup

Note: MySQL does not need to be running for the restore; mysqld should be stopped

The-- copy-back option of the innobackupex command is used to perform the restore operation, which performs the restore process by copying all data-related files to the mysql server DATADIR directory. Innobackupex uses backup-my.cnf to get information about the DATADIR directory.

Usage: innobackupex --copy-back /path/to/BACKUP-DIR

# innobackupex --copy-back /tmp/xtrabackup/2016-06-15_09-47-27/

If executed correctly, the last lines of the output are usually as follows:
160615 10:46:34 [01]        ...done
160615 10:46:34 completed OK!

When the data is restored to the DATADIR directory, you also need to make sure that the owner and group of all data files are the correct users, such as mysql, otherwise, you need to modify the owner and group of the data file before starting mysqld.

# chown -R mysql:mysql /mydata/data/
# service mysqld start

Using innobackupex for incremental backups

Each InnoDB page contains a LSN message, and whenever the relevant data changes, the LSN of the related page will automatically grow. This is the basis on which InnoDB tables can be backed up incrementally, that is, innobackupex is achieved by backing up pages that have changed since the last full backup.

1. Backup process:

To achieve the first incremental backup, you can use the following command:

Usage: innobackupex --incremental /backup --incremental-basedir=BASEDIR

BASEDIR refers to the directory where the full backup is located. After this command is executed, innobackupex creates a new time-named directory under /backup to hold the incremental backup data. When performing a further incremental backup after an incremental backup, --incremental-basedir should point to the directory of the previous incremental backup.

It is important to note that incremental backups can only be applied to InnoDB or XtraDB tables, and for MyISAM tables, incremental backups are actually performed as full backups.

Incremental backup and recovery example:

First, do a full backup, and then do two incremental backups based on the previous full backup, and create two databases between them.

Full backup:

# innobackupex --user=xtrauser --password=xtrauser --no-timestamp /tmp/xtrabackup/full_backup

-- create a test database
mysql> CREATE DATABASE samlee1;

First incremental backup:

# innobackupex --user=xtrauser --password=xtrauser --incremental /tmp/xtrabackup/ --incremental-basedir=/tmp/xtrabackup/full_backup/

mysql> CREATE DATABASE samlee2;

Second incremental backup:

# innobackupex --user=xtrauser --password=xtrauser --incremental /tmp/xtrabackup/ --incremental-basedir=/tmp/xtrabackup/2016-06-15_13-26-08/

2. Preparation process

There are some differences between preparing incremental backups and preparing a full backup, notably the following:

1) transactions that have been committed need to be "replayed" on each backup (including full and individual incremental backups). After replay, all backup data will be merged into the full backup.

2) "rollback" uncommitted transactions based on all backups. As a result, the operation becomes: cannot be rolled back, because it is possible that the first backup was not committed and was successfully committed in increments

The method of use is as follows:

# innobackupex --apply-log --redo-only BASE-DIR

Then execute:

# innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1

And then the second incremental backup:

# innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-2

When such preparations are completed, all incremental backups are merged into a full backup.

Where BASE-DIR refers to the directory where the full backup is located, INCREMENTAL-DIR-1 refers to the directory of the first incremental backup, INCREMENTAL-DIR-2 refers to the directory of the second incremental backup, and so on, that is, if there are multiple incremental backups, do the same each time.

Example:

# innobackupex --apply-log --redo-only /tmp/xtrabackup/full_backup
# innobackupex --apply-log --redo-only /tmp/xtrabackup/full_backup/ --incremental-dir=/tmp/xtrabackup/2016-06-15_13-26-08/
# innobackupex --apply-log --redo-only /tmp/xtrabackup/full_backup/ --incremental-dir=/tmp/xtrabackup/2016-06-15_13-28-42/

3. Recovery process: the same as for a full backup; simply copy-back the directory of the full backup, since at this point all the increments have already been merged into it.

# service mysqld stop
# rm -rf /mydata/data/*
# innobackupex --copy-back /tmp/xtrabackup/full_backup/
# chown -R mysql:mysql /mydata/data
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| samlee             |
| samlee1            |
| samlee2            |
| test               |
+--------------------+
-- we can see that both of the databases created earlier have been restored successfully.

Xtrabackup's "streaming" and "backup compression" functions

Xtrabackup supports a "streaming" function for the backup data, that is, the backup can be sent through STDOUT to an archiving program such as tar, instead of being saved directly into a backup directory as by default. To use this feature, simply add the --stream option. For example:

Usage: innobackupex --stream=tar /backup | gzip > /backup/`date +%F_%H-%M-%S`.tar.gz

Example:

# innobackupex --user=xtrauser --password=xtrauser --stream=tar /tmp/xtrabackup/ | gzip > /tmp/xtrabackup/`date +%F_%H-%M-%S`.tar.gz

You can even back up data to another server using commands like the following:

# innobackupex --stream=tar /backup | ssh user@www.samlee.com "cat - > /backups/`date +%F_%H-%M-%S`.tar"

In addition, when performing a local backup you can use the --parallel option to copy multiple files in parallel; the option specifies the number of copy threads to start. Of course, to benefit from this feature the innodb_file_per_table option must be enabled, or the shared tablespace must be stored in multiple ibdata files via the innodb_data_file_path option; a database whose data lives in a single file cannot take advantage of it. Its simple usage is as follows:

# innobackupex --parallel=4 /path/to/backup

At the same time, the data files backed up by innobackupex can also be stored on a remote host, which can be achieved with the --remote-host option:

# innobackupex --remote-host=root@www.samlee.com /path/IN/REMOTE/HOST/to/backup

Import or export a single table

By default, InnoDB tables cannot be migrated between MySQL servers by copying the table files directly, even when the innodb_file_per_table option is used. This can be done with the Xtrabackup tool, but the MySQL server the table is "exported" from must have the innodb_file_per_table option enabled (strictly speaking, it must have been enabled before the table to be "exported" was created), and the server the table is "imported" into must enable both the innodb_file_per_table and innodb_expand_import options.

Before creating the database, write innodb_file_per_table=1 under the server section in the configuration file
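A minimal sketch of that configuration entry (the section name follows the earlier my.cnf example in this article):

[server]
innodb_file_per_table = 1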

1) Export table

Exporting a table happens during the "prepare" phase of the backup, so once the full backup is complete, you can export a table during the prepare process with the --export option:

Usage: innobackupex --apply-log --export /path/to/backup

Example:

# innobackupex --user=xtrauser --password=xtrauser /tmp/xtrabackup/
# innobackupex --apply-log --export /tmp/xtrabackup/2016-06-15_14-45-06/

This command creates a file ending in .exp for the tablespace of every InnoDB table, and these .exp files can be imported on other servers:
# ls /tmp/xtrabackup/2016-06-15_14-45-06/hellodb/*.exp
classes.exp  coc.exp  courses.exp  scores.exp  students.exp  teachers.exp  toc.exp

2) Import table

Use SHOW CREATE TABLE mytable; to view the original table-creation statement

To import an innodb table from another server on a mysql server, you need to create a table on the current server that is consistent with the structure of the original table before you can import the table:

Example: take the students table as an example:

mysql> SHOW CREATE TABLE hellodb.students\G
*************************** 1. row ***************************
       Table: students
Create Table: CREATE TABLE `students` (
  `StuID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `Name` varchar(50) NOT NULL,
  `Age` tinyint(3) unsigned NOT NULL,
  `Gender` enum('F','M') NOT NULL,
  `ClassID` tinyint(3) unsigned DEFAULT NULL,
  `TeacherID` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`StuID`)
) ENGINE=InnoDB AUTO_INCREMENT=26 DEFAULT CHARSET=utf8
1 row in set (0.02 sec)

Create this table in the samlee database:
mysql> CREATE TABLE `students` (
    -> `StuID` int(10) unsigned NOT NULL AUTO_INCREMENT,
    -> `Name` varchar(50) NOT NULL,
    -> `Age` tinyint(3) unsigned NOT NULL,
    -> `Gender` enum('F','M') NOT NULL,
    -> `ClassID` tinyint(3) unsigned DEFAULT NULL,
    -> `TeacherID` int(10) unsigned DEFAULT NULL,
    -> PRIMARY KEY (`StuID`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=26 DEFAULT CHARSET=utf8;

Then discard the tablespace of this table:
mysql> ALTER TABLE samlee.students DISCARD TABLESPACE;

Next, copy the students.ibd and students.exp files of the students table from the server the table was "exported" from into the data directory of the current server, and then "import" it with the following commands:
## note the file permissions
# cp /tmp/xtrabackup/2016-06-15_14-45-06/hellodb/students.{ibd,exp} /mydata/data/samlee/
# chown mysql.mysql /mydata/data/samlee/students.*
mysql> ALTER TABLE samlee.students IMPORT TABLESPACE;

Backup Note:

1. Put the data and the backups on different disk devices; off-site backup storage is ideal

2. The backed-up data should be restored and tested periodically

3. A full backup should be made immediately after each disaster recovery.

4. It is necessary to customize the backup strategy for different sizes or levels of data.

5. Binary logs should be on different disks from data files, and binary log files should be backed up periodically.

You should follow these steps to restore from a backup:

1. Stop the MySQL server

2. Record the server configuration and file permissions

3. Move data from the backup to the MySQL data directory; how it is executed depends on the tool

4. Change configuration and file permissions

5. Restart the server in restricted access mode; the --skip-networking option of mysqld disables the network functionality

Method: edit the my.cnf configuration file and add the following items:

Skip-networking

Socket=/tmp/mysql-recovery.sock

6. Load the logical backup (if any); then check and replay the binary log (see the sketch after this list)

7. Check the data that has been restored

8. Restart the server in full access mode

Comment out the options added to my.cnf earlier, and restart the server
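A minimal sketch of step 6 under the restricted-access configuration above (the backup and binary log file names are illustrative assumptions, not from the original):

# mysql --socket=/tmp/mysql-recovery.sock < /backups/full_backup.sql
# mysqlbinlog /backups/binlogs/mysql-bin.000011 | mysql --socket=/tmp/mysql-recovery.sock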
