Several MySQL backup and restore methods every DBA must know

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Blogger QQ:819594300

Blog address: http://zpf666.blog.51cto.com/

If you have any questions, feel free to contact the blogger, who will be glad to help. Thank you for your support!

1. mysqldump backup combined with binlog recovery

Note: MySQL backups generally combine full backups with log backups, for example a full backup once a day and a binary log backup every hour. After a MySQL failure, the full backup plus the log backups can restore the data to any point in time up to the last binary log backup.

1. Binlog introduction

1) The binlog records all insert, update, and delete operations performed on the database, together with the execution times of those operations.

The binlog feature is disabled by default.

To inspect a binlog, use: # mysqlbinlog -v mysql-bin.000001

The purposes of binlog: 1. master-slave replication; 2. restoring the database.

Enable the binary log feature: binary logging is enabled by editing the log-bin option in my.cnf, in the form log-bin[=DIR/[filename]]. Note: every time you restart the mysql service or run mysql> flush logs;, a new binary log file is generated, so the number of log files keeps growing. In addition to those files, a file called filename.index is generated; it stores the list of all binary log files and is known as the index of the binary logs.
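A minimal my.cnf fragment for this (values are illustrative, not from the original article):

```
[mysqld]
log-bin=mysql-bin   # produces mysql-bin.000001, mysql-bin.000002, ... and mysql-bin.index
server-id=1         # an explicit server id is expected when binary logging is on
```

After editing, restart the mysqld service (or run flush logs) so a new binary log file is opened.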

2) View the generated binary logs. Note: the purpose of viewing binlog content is to restore data.

Description: because binlog is a binary file, it cannot be opened and viewed directly with ordinary file-viewing commands. MySQL provides two ways to view it.

① Before introducing them, let's insert, delete, and update some data in the database; otherwise the log would be rather empty.

② Open a new log file (flush logs)

③ View the binary logs on MySQL Server

View events in the specified binary log:

This command also contains other options for flexible viewing:
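The statements behind these screenshots would typically be the following (a sketch; the log file name and positions are illustrative):

```
mysql> SHOW BINARY LOGS;                                          -- list all binlog files
mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000002';                  -- events in one log
mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000002' FROM 219 LIMIT 5; -- flexible viewing
```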

Summary: the methods above let you view the binary log files on the server and the events they contain, but to see the actual file contents and restore data you need the mysqlbinlog tool.

Syntax format: mysqlbinlog [options] log_file.

The output varies slightly depending on the format of the log file and the options used by the mysqlbinlog tool.

The available options for mysqlbinlog can be found in the man manual.

Description: whether the binary log file is local or on a remote server, and whether it is in row, statement, or mixed format, once parsed by the mysqlbinlog tool it can be applied directly to MySQL Server for point-in-time, position-based, or database-based recovery.

Let's demonstrate using the binlog to recover deleted data (the record with id=2).

Note: in a real production environment, block user access while restoring the database to avoid new data being inserted, and in a master-slave environment, stop replication first.

① Look through the binlog file to find the delete from bdqn.test where id=2 event

# cd /usr/local/mysql/data/

# mysqlbinlog -v mysql-bin.000002

The display results are as follows:

If you can't see the picture clearly, you can see the copied log below:

# at 219
#170316 21:52:28 server id 1 end_log_pos 287 CRC32 0xff83a85b Query thread_id=2 exec_time=0 error_code=0
SET TIMESTAMP=1489672348/*!*/;
SET @@session.pseudo_thread_id=2/*!*/;
SET @@session.foreign_key_checks=1,@@session.sql_auto_is_null=0, @@session.unique_checks=1,@@session.autocommit=1/*!*/;
SET @@session.sql_mode=1075838976/*!*/;
SET @@session.auto_increment_increment=1,@@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=33/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 287
#170316 21:52:28 server id 1 end_log_pos 337 CRC32 0x343e7343 Table_map: `bdqn`.`test` mapped to number 108
# at 337
#170316 21:52:28 server id 1 end_log_pos 382 CRC32 0xa3d1ce0d Delete_rows: table id 108 flags: STMT_END_F
BINLOG '
NJjKWBMBAAAAMgAAAFEBAAAAAGwAAAAAAAEABGJkcW4ABHRlc3QAAgMPAjwAAkNzPjQ=
NJjKWCABAAAALQAAAH4BAAAAAGwAAAAAAAEAAgAC//wCAAAABGxpc2kNztGj
'/*!*/;
### DELETE FROM `bdqn`.`test`
### WHERE
### @1=2
### @2='lisi'
# at 382
#170316 21:52:28 server id 1 end_log_pos 413 CRC32 0x257e7073 Xid = 10
COMMIT/*!*/;

Description: from the output above you can see that the DELETE event starts at position 287 and the transaction ends at position 413.

② Recovery process: use the binlog to restore the database up to the delete position 287, skip the offending event, and then replay all subsequent operations, as follows.

Since no full backup was taken beforehand, all the binlog files are needed for recovery; in a production environment this makes restoring and exporting the relevant binlog files time-consuming.
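Based on the positions identified above (the DELETE spans 287 to 413 in mysql-bin.000002), the recovery could be sketched as follows; this is a sketch using the example's credentials, not a command from the original screenshots:

```shell
# (any earlier binlog files, e.g. mysql-bin.000001, would be replayed in full first)
# Replay everything up to the start of the offending transaction ...
mysqlbinlog --stop-position=287 /usr/local/mysql/data/mysql-bin.000002 | mysql -uroot -p123456
# ... then skip the DELETE and replay the rest of the log
mysqlbinlog --start-position=413 /usr/local/mysql/data/mysql-bin.000002 | mysql -uroot -p123456
```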

③ Delete the bdqn database (turn off the binlog feature before deleting bdqn and restoring the data)

④ Use the binlog to recover the data

⑤ After the recovery completes, check whether the data in the table is complete

2. Mysqldump introduction

Function: mysqldump is a backup and data transfer tool that comes with mysql.

Features: it generates only SQL statements (i.e. SQL commands) encapsulated in a file, not the raw data files.

Mysqldump is a logical backup, not a physical backup: it backs up SQL statements, not data files.

Mysqldump is suitable for small databases, typically a few gigabytes in size. When the data volume is large, mysqldump is not recommended.

Export objects: a single table, multiple tables, a single database, multiple databases, or all databases.

Format:

# mysqldump [options] dbname [table1] [table2] ... > /backup_path/backup_file_name

// Export one or more tables from the specified database

# mysqldump [options] --databases dbname1 [dbname2] ... > /backup_path/backup_file_name

// Export one or more specified databases

# mysqldump [options] --all-databases > /backup_path/backup_file_name

// Export all databases

# mysqldump -uroot -p123456 --flush-logs bdqn > /opt/bdqn.sql

// Export database bdqn; the "--flush-logs" option opens a new binlog after the full backup completes.

# mysql -uroot -p123456 bdqn < /opt/bdqn.sql

// Import database bdqn from the backup file

The following experiment shows how to recover data with a mysqldump full backup plus binlog:

1) Enable the binlog feature and restart the service
2) Create a backup directory
3) Create the test data
4) Run the full backup (note: a full backup does not back up the binlog files)
5) Back up all binlog files created before the mysqldump full backup (note: in a real production environment there may well be more than one binlog file)
6) Since the binlogs predating the full backup have now been backed up, delete them (i.e. delete all binlogs older than the newly created one)
7) Simulate a mistaken operation: delete data, and also insert new data
8) Back up the binlog files created since the mysqldump backup
9) Use the mysqldump full backup plus the binlogs to restore the data

① Restore the full backup taken with mysqldump (i.e. back to the state at the time of the full backup)

② Analyze the newly opened binlog file (here mysql-bin.000002) for the start and end positions of the mistaken event; that range simply has to be skipped

If you can't see the picture clearly, here is the copied log:

# at 219
#170318 21:14:42 server id 1 end_log_pos 291 CRC32 0xddbf8eff Query thread_id=5 exec_time=0 error_code=0
SET TIMESTAMP=1489842882/*!*/;
SET @@session.pseudo_thread_id=5/*!*/;
SET @@session.foreign_key_checks=1,@@session.sql_auto_is_null=0, @@session.unique_checks=1,@@session.autocommit=1/*!*/;
SET @@session.sql_mode=1075838976/*!*/;
SET @@session.auto_increment_increment=1,@@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=33/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 291
#170318 21:14:42 server id 1 end_log_pos 339 CRC32 0x4a9ec8f2 Table_map: `bdqn`.`it` mapped to number 108
# at 339
#170318 21:14:42 server id 1 end_log_pos 388 CRC32 0x2e8a3da8 Delete_rows: table id 108 flags: STMT_END_F
BINLOG '
wjLNWBMBAAAAMAAAAFMBAAAAAGwAAAAAAAEABGJkcW4AAml0AAIDDwI8AALyyJ5K
wjLNWCABAAAAMQAAAIQBAAAAAGwAAAAAAAEAAgAC//wBAAAACHpoYW5nc2FuqD2KLg==
'/*!*/;
### DELETE FROM `bdqn`.`it`
### WHERE
### @1=1
### @2='zhangsan'
# at 388
#170318 21:14:42 server id 1 end_log_pos 419 CRC32 0xa1c06a4f Xid = 43
COMMIT/*!*/;

③ Apply the incremental binlog backups taken after the full backup to recover the changes made after the full restore

10) Check the recovery result

Summary: the output shows the data restored to its normal state. In a real production environment, MySQL backups are periodic, repeated operations, so they are usually scripted and executed periodically by a crond scheduled task.

Run the backup scripts periodically with a crontab scheduled task

1) Design the mysqldump backup plan:

Full backup every Sunday at 1 a.m.; incremental backup every 4 hours in the early morning, Monday through Saturday.

Set up the crontab task to run the backup scripts every day:

2) Write mysqlfullbackup.sh (the MySQL full backup script)

If you can't see the picture clearly, here is the copied script:

#!/bin/bash
#Name: mysqlFullBackup.sh
# Define the MySQL directory
mysqlDir=/usr/local/mysql
# Define the user name and password used for the backup
user=root
userpwd=123456
dbname=bdqn
# Define the backup directory
databackupdir=/opt/mysqlbackup
[ ! -d $databackupdir ] && mkdir $databackupdir
# Define the mail body file
emailfile=$databackupdir/email.txt
# Define the mail address
email=root@localhost.localdomain
# Define the backup log file
logfile=$databackupdir/mysqlbackup.log
DATE=`date -I`
echo "" > $emailfile
echo $(date +"%Y-%m-%d %H:%M:%S") >> $emailfile
cd $databackupdir
# Define the backup file names
dumpfile=mysql_$DATE.sql
gzdumpfile=mysql_$DATE.sql.tar.gz
# Back up the database with mysqldump; set the options to suit your situation
$mysqlDir/bin/mysqldump -u$user -p$userpwd --flush-logs -x $dbname > $dumpfile
# Compress the backup file
if [ $? -eq 0 ]; then
    tar zcvf $gzdumpfile $dumpfile >> $emailfile 2>&1
    echo "BackupFileName:$gzdumpfile" >> $emailfile
    echo "DataBase Backup Success!" >> $emailfile
    rm -rf $dumpfile
else
    echo "DataBase Backup Fail!" >> $emailfile
fi
# Write the log file
echo "--------------------" >> $logfile
cat $emailfile >> $logfile
# Send the mail notification
cat $emailfile | mail -s "MySQL Backup" $email

3) Write mysqldailybackup.sh (the MySQL incremental backup script)

If you can't see the picture clearly, here is the copied script:

#!/bin/bash
#Name: mysqlDailyBackup.sh
# Define the MySQL directory and data directory
mysqldir=/usr/local/mysql
datadir=$mysqldir/data
# Define the user name and password used for the backup
user=root
userpwd=123456
# Define the backup directory; daily binlogs are backed up to $databackupdir/daily
databackupdir=/opt/mysqlbackup
dailybackupdir=$databackupdir/daily
# Define the mail body file
emailfile=$databackupdir/email.txt
# Define the mail address
email=root@localhost.localdomain
# Define the log file
logfile=$databackupdir/mysqlbackup.log
echo "" > $emailfile
echo $(date +"%Y-%m-%d %H:%M:%S") >> $emailfile
# Flush the logs so the database switches to a new binary log file
$mysqldir/bin/mysqladmin -u$user -p$userpwd flush-logs
cd $datadir
# Get the list of binary logs
filelist=`cat mysql-bin.index`
icounter=0
for file in $filelist
do
    icounter=`expr $icounter + 1`
done
nextnum=0
ifile=0
for file in $filelist
do
    binlogname=`basename $file`
    nextnum=`expr $nextnum + 1`
    # Skip the last binary log (the one currently used by the database)
    if [ $nextnum -eq $icounter ]; then
        echo "Skip lastest!" > /dev/null
    else
        dest=$dailybackupdir/$binlogname
        # Skip binary log files that have already been backed up
        if [ -e $dest ]; then
            echo "Skip exist $binlogname!" > /dev/null
        else
            # Copy the log file to the backup directory
            cp $binlogname $dailybackupdir
            if [ $? -eq 0 ]; then
                ifile=`expr $ifile + 1`
                echo "$binlogname backup success!" >> $emailfile
            fi
        fi
    fi
done
if [ $ifile -eq 0 ]; then
    echo "No Binlog Backup!" >> $emailfile
else
    echo "Backup $ifile File(s)." >> $emailfile
    echo "Backup MySQL Binlog OK!" >> $emailfile
fi
# Send the mail notification
cat $emailfile | mail -s "MySQL Backup" $email
# Write the log file
echo "--------------------" >> $logfile
cat $emailfile >> $logfile
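The index-walking loop in the script simply selects every binlog except the one the server is still writing to. A condensed, testable sketch of that same logic (the function name is mine, not from the script):

```shell
# Print every binlog listed in a mysql-bin.index file except the last
# (the file the server is still writing to) -- same effect as the loop above.
list_backupable_binlogs() {
    index_file=$1
    # head -n -1 drops the final line, i.e. the active binlog
    head -n -1 "$index_file" | while read -r f; do
        basename "$f"
    done
}
```

Usage: list_backupable_binlogs /usr/local/mysql/data/mysql-bin.index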

Send mail test:

Install libmysqlclient.so.18

Test again:
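One way to express the backup plan above as crontab entries (the script paths are assumptions; adjust them to where the scripts actually live):

```
# full backup: Sunday 01:00
0 1 * * 0 /bin/bash /opt/mysqlbackup/mysqlfullbackup.sh
# incremental backup: every 4 hours, Monday through Saturday
0 */4 * * 1-6 /bin/bash /opt/mysqlbackup/mysqldailybackup.sh
```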

2. Using xtrabackup for MySQL database backup

As described earlier, mysqldump performs logical backups, whose biggest disadvantage is slow backup and recovery. For databases under 50 GB this speed is acceptable, but for very large databases mysqldump is not suitable.

At this time, you need an easy-to-use and efficient tool, and xtrabackup is one of them, known as the free version of InnoDB HotBackup.

The Xtrabackup implementation is a physical backup and is a physical hot backup.

Currently there are two mainstream tools for physical hot backup: ibbackup and xtrabackup. ibbackup is commercial software that requires a license and is very expensive, while xtrabackup is more powerful than ibbackup and is open source. That is why we introduce xtrabackup here.

Xtrabackup provides two command-line tools:

Xtrabackup: dedicated to backing up data from InnoDB and XtraDB engines

Innobackupex: this is a perl script that invokes the xtrabackup command during execution so that you can back up both InnoDB and MyISAM engine objects.

Xtrabackup is a mysql database backup tool provided by percona with the following features:

(1) the backup process is fast and reliable

(2) the backup process will not interrupt the transaction in progress.

(3) it can save disk space and traffic based on compression and other functions.

(4) automatic backup verification

(5) the restore speed is fast.

Official link: http://www.percona.com/software/percona-xtrabackup. You can download the source code and compile it, download a suitable RPM package, install with yum, or download the binary tarball.

Install xtrabackup

1) download xtrabackup

# wget https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.4.4/binary/tarball/percona-xtrabackup-2.4.4-Linux-x86_64.tar.gz

2) decompression

3) enter the decompression directory

4) copy all programs under bin to / usr/bin
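Steps 2) to 4) above, whose screenshots are missing, correspond to commands along these lines (the tarball name is taken from the download step):

```shell
# 2) decompress
tar zxf percona-xtrabackup-2.4.4-Linux-x86_64.tar.gz
# 3) enter the decompressed directory
cd percona-xtrabackup-2.4.4-Linux-x86_64
# 4) copy all programs under bin to /usr/bin
cp bin/* /usr/bin/
```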

Description: there are two main tools in Xtrabackup:

Xtrabackup: a tool for hot backup of data in InnoDB and XtraDB tables. It supports online hot backup (InnoDB tables can be backed up without locking), but it cannot handle MyISAM tables.

Innobackupex: a perl script that encapsulates xtrabackup and can handle both Innodb and Myisam, but requires a read lock when dealing with Myisam.

Because read locks are required to operate on Myisam, which blocks writes to online services, and Innodb does not have such restrictions, the greater the proportion of Innodb table types in the database, the more advantageous it is.

5) install relevant plug-ins

6) download percona-toolkit and install

# wget https://www.percona.com/downloads/percona-toolkit/2.2.19/RPM/percona-toolkit-2.2.19-1.noarch.rpm

At this point, the installation of xtrabackup is complete, and the backup can be started.

Solution: xtrabackup full backup + binlog incremental backup

1) enable binlog function and restart mysqld service

2) create a directory for backup (full: full storage directory; inc: incremental backup directory)

3) create experimental database, table, and add experimental data

4) start full backup
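A sketch of the full backup command, using the credentials and the full/ directory from the steps above:

```shell
innobackupex --user=root --password=123456 /opt/mysqlbackup/full/
# on success, innobackupex creates a timestamped subdirectory under /opt/mysqlbackup/full/
```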

5) We can take a look at the backed-up files

Note: 1) when backing up with innobackupex, you can use the --no-timestamp option to keep the command from automatically creating a directory named after the timestamp; innobackupex will then store the backup data directly in the given BACKUP-DIR.

2) you can also add the --databases option to specify the databases to back up. This option only restricts MyISAM tables; InnoDB data is always complete (all InnoDB data in all databases is backed up, not just the specified database, and the same applies during recovery).

Description of each file in it:

Let's take a look at these documents:

Note: xtrabackup_binlog_pos_innodb and xtrabackup_binary are not present here because this version is newer; those two files were removed in newer versions and exist only in older ones.

6) at this point, the full backup is successful, and then we open a new binlog log file and insert several pieces of data into a mysql library.

7) simulate misoperation, delete one piece of data and insert two new pieces of data at the same time

8) start incremental backup of binlog log files
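Step 8 might look like this (the binlog file name is illustrative):

```shell
# close the current binlog so the misoperation sits in a finished file
mysql -uroot -p123456 -e 'FLUSH LOGS;'
# copy the binlog(s) written since the full backup into the incremental directory
cp /usr/local/mysql/data/mysql-bin.000002 /opt/mysqlbackup/inc/
```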

9) start restoring the database

① Simulate database corruption

Here I simply delete the files in the data directory to simulate corruption.

② Then restore the full backup; the first step is to "prepare" it.

Note 1: in general, after the backup is complete, the data cannot be used for the restore operation because the backed up data may contain transactions that have not yet been committed or transactions that have been committed but have not been synchronized to the data file. Therefore, the data file is still in an inconsistent state at this time. The main function of "preparation" is to make the data file consistent by rolling back the uncommitted transaction and synchronizing the committed transaction to the data file.

Note 2: at the end of the prepare process, the InnoDB table data has been rolled forward to the end of the entire backup, rather than to the point where the xtrabackup started.

The --apply-log option of the innobackupex command implements this, as in the following command:

--apply-log applies the log to the data files; once it completes, the backup data can be restored to the database:

Note 3: during the "prepare" step, innobackupex accepts a --use-memory option to specify how much memory it may use, which is 100MB by default. If enough memory is available, allocating more to the prepare process speeds it up.
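Putting notes 1 to 3 together, the prepare step might be run as follows (the timestamped directory name is a placeholder for your actual backup directory):

```shell
innobackupex --apply-log --use-memory=1G /opt/mysqlbackup/full/<TIMESTAMP-DIR>/
```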

③ Now restore the fully backed-up database

Note 1: the --copy-back option of the innobackupex command performs the restore operation; it copies all data-related files to the MySQL server's DATADIR directory. innobackupex reads backup-my.cnf to obtain the DATADIR location.

④ Change the owner and group of the data directory to mysql:mysql, and restart the mysqld service.
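Steps ③ and ④ could be sketched as follows (the timestamped directory name is a placeholder; the data directory matches this article's install):

```shell
innobackupex --copy-back /opt/mysqlbackup/full/<TIMESTAMP-DIR>/
chown -R mysql:mysql /usr/local/mysql/data
service mysqld restart   # or: systemctl restart mysqld
```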

⑤ validates restored data

⑥ Restore the incremental backup. Before doing so, to avoid generating a large volume of binary logs during the restore, binary logging can be temporarily disabled and re-enabled afterwards.

⑦ If there was a mistaken operation, remove the offending event from the binlog backup file before restoring the incremental backup.

⑧ Now restore the incremental backup

⑨ Re-enable the binary log and verify the restored data

Attached: Xtrabackup's "streaming" and "backup compression" functions

Function: Xtrabackup supports "streaming" the backup data: the backup can be piped through STDOUT to the tar program for archiving, instead of being saved directly to a backup directory by default.

To use this feature, just add the --stream option. For example:

If you can't see the screenshot clearly, here is the command:

# innobackupex --user=root --password="123456" --stream=tar /opt/mysqlbackup/full/ | gzip > /opt/mysqlbackup/full/full_`date +%F_%H%M%S`.tar.gz

(In real production environments the streaming and compression features are used almost universally, since they save a great deal of space.)
