
MySQL disaster recovery and backup (part two)


Source: xiaorenwutest.blog.51cto.com


In the previous installment we learned how to restore a database by combining the mysqldump tool with the binlog. But there is one drawback: mysqldump works well when the amount of data is small, so what do we do when the data set is large? Few DBAs in a company will use mysqldump to recover a large database.

Another question: now that we have a backup tool, how do we actually run the backups? Do we have to back up manually again and again? What if the backup has to run at night? Should we go back to the office at night, take a backup, and then go home to rest? That is not very realistic.

Today we take these two problems as our theme, discuss them, and give solutions. Let's move on to today's case.

1) periodic backup scheme:

For example, the company requires the DBA to back up the entire database at 1:00 a.m. on Sunday,

and to make an incremental backup every 4 hours from Monday to Saturday.

We can use cron (crond) to schedule these backup jobs.

# crontab -e

# backup schedule for Sunday:

0 1 * * 0 /root/mysqlfullbackup.sh > /dev/null 2>&1

# incremental backup every 4 hours from Monday to Saturday

0 */4 * * 1-6 /root/mysqldailybackup.sh > /dev/null 2>&1
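For cron to actually run these jobs, the two scripts also need to be executable, for example (paths as used in the crontab entries above):

# chmod +x /root/mysqlfullbackup.sh /root/mysqldailybackup.sh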

Now for the content of the scripts themselves:

(1) First, the full database backup {that is, the Sunday backup}.

As mentioned above, this script is named mysqlfullbackup.sh:

#!/bin/bash
# define the database installation directory
mysqlDir=/usr/local/mysql
# define the user and password
user=root
userpwd=123456
dbname=test_db
# define the backup directory
databackupdir=/opt/mysqlbackup
# create the backup directory if it does not exist
[ ! -d $databackupdir ] && mkdir -p $databackupdir
# define the mail file
emailfile=$databackupdir/email.txt
# the recipient address is local
email=root@localhost.localdomain
# define the backup log file
logfile=$databackupdir/mysqlbackup.log
# define the date
DATE=$(date -I)
echo "" > $emailfile
echo $(date +"%Y-%m-%d %H:%M:%S") >> $emailfile
cd $databackupdir
# define the backup file names
dumpfile=mysql_$DATE.sql
gzdumpfile=mysql_$DATE.sql.tar.gz
# back up the database with the mysqldump tool
$mysqlDir/bin/mysqldump -u$user -p$userpwd --flush-logs -x $dbname > $dumpfile
# compress the backup file
if [ $? -eq 0 ]; then
    tar zcf $gzdumpfile $dumpfile >> $emailfile 2>&1
    echo "BackupFileName:$gzdumpfile" >> $emailfile
    echo "DataBase Backup Success!" >> $emailfile
    rm -f $dumpfile
else
    echo "DataBackup Fail!" >> $emailfile
fi
# write to the log file
echo "--------------------" >> $logfile
cat $emailfile >> $logfile
# send the email notification
cat $emailfile | mail -s "Mysql Backup" $email

(2) The second script, mysqldailybackup.sh {the backup that runs every 4 hours from Monday to Saturday}; as written, its body is the same as the full backup script:

#!/bin/bash
# define the database installation directory
mysqlDir=/usr/local/mysql
# define the user and password
user=root
userpwd=123456
dbname=test_db
# define the backup directory
databackupdir=/opt/mysqlbackup
# create the backup directory if it does not exist
[ ! -d $databackupdir ] && mkdir -p $databackupdir
# define the mail file
emailfile=$databackupdir/email.txt
# the recipient address is local
email=root@localhost.localdomain
# define the backup log file
logfile=$databackupdir/mysqlbackup.log
# define the date
DATE=$(date -I)
echo "" > $emailfile
echo $(date +"%Y-%m-%d %H:%M:%S") >> $emailfile
cd $databackupdir
# define the backup file names
dumpfile=mysql_$DATE.sql
gzdumpfile=mysql_$DATE.sql.tar.gz
# back up the database with the mysqldump tool
$mysqlDir/bin/mysqldump -u$user -p$userpwd --flush-logs -x $dbname > $dumpfile
# compress the backup file
if [ $? -eq 0 ]; then
    tar zcf $gzdumpfile $dumpfile >> $emailfile 2>&1
    echo "BackupFileName:$gzdumpfile" >> $emailfile
    echo "DataBase Backup Success!" >> $emailfile
    rm -f $dumpfile
else
    echo "DataBackup Fail!" >> $emailfile
fi
# write to the log file
echo "--------------------" >> $logfile
cat $emailfile >> $logfile
# send the email notification
cat $emailfile | mail -s "Mysql Backup" $email
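Note that, as written above, mysqldailybackup.sh simply performs another full dump. If you want the Monday-to-Saturday job to be a true binlog-based incremental backup, a minimal sketch could look like the following; this is only an illustration and assumes the binary log files are named mysql_bin.* under /usr/local/mysql/data, as in the examples later in this article:

#!/bin/bash
# minimal sketch of a binlog-based incremental backup (assumed paths and log names)
mysqlDir=/usr/local/mysql
user=root
userpwd=123456
incbackupdir=/opt/mysqlbackup/inc
[ ! -d $incbackupdir ] && mkdir -p $incbackupdir
# rotate to a new binary log so the closed ones can be copied safely
$mysqlDir/bin/mysqladmin -u$user -p$userpwd flush-logs
datadir=$mysqlDir/data
# the last entry in the index file is the log currently being written; skip it
lastlog=$(basename $(tail -n 1 $datadir/mysql_bin.index))
for f in $(cat $datadir/mysql_bin.index); do
    base=$(basename $f)
    [ "$base" = "$lastlog" ] && continue
    # copy each closed binary log that has not been copied yet
    [ -f $incbackupdir/$base ] || cp $datadir/$base $incbackupdir/
done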

Summary: with the two scripts above, the database backups run periodically and automatically, without any manual operation.

Advantages: 1) It reduces the workload of the operations staff and DBAs.

2) Fewer backup errors.

3) High efficiency and speed.

2) We have just covered combining periodic backups with task scheduling for the database; now let's discuss today's second question: {the backup tool}.

Xtrabackup performs physical backups, and more specifically physical hot backups.

At present there are two mainstream tools for physical hot backup: ibbackup and xtrabackup. ibbackup is commercial software that requires a license and is very expensive, while xtrabackup is even more powerful and is open source. So here we introduce the use of xtrabackup.

Xtrabackup provides two command-line tools:

xtrabackup: dedicated to backing up data in InnoDB and XtraDB tables.

innobackupex: a Perl script that invokes the xtrabackup command during execution, so that it can back up both InnoDB and MyISAM objects.

Note, to avoid confusion: xtrabackup backs up InnoDB and XtraDB only;

innobackupex backs up both InnoDB and MyISAM.

Advantages:

Xtrabackup is a MySQL backup tool provided by Percona with the following features:

(1) the backup process is fast and reliable;

(2) the backup process does not interrupt transactions in progress;

(3) it can save disk space and network traffic thanks to compression and other features;

(4) automatic backup verification;

(5) restores are fast.

If you are interested, you can download it from the official website:

http://www.percona.com/software/percona-xtrabackup. You can download the source code and compile it, download the appropriate RPM package and install it with yum, or download the binary tarball.

1) first extract the downloaded software package:

# tar zxf percona-xtrabackup-2.4.4-Linux-x86_64.tar.gz

2) enter the decompression directory

# cd percona-xtrabackup-2.4.4-Linux-x86_64/

3) copy all programs under bin/ to /usr/bin

[root@localhost percona-xtrabackup-2.4.4-Linux-x86_64]# cp bin/* /usr/bin/

There are two main tools in Xtrabackup:

xtrabackup: a tool for hot backup of data in InnoDB and XtraDB tables. It supports online hot backup: InnoDB tables can be backed up without locking, but this tool cannot handle MyISAM tables.

innobackupex: a Perl script that wraps xtrabackup and can handle both InnoDB and MyISAM, but it requires a read lock when dealing with MyISAM tables.

Because MyISAM tables require a read lock, which blocks writes from online services, while InnoDB has no such restriction, the larger the proportion of InnoDB tables in the database, the greater the advantage.

4) install the required dependencies (Perl modules)

# yum install perl-DBI perl-DBD-MySQL perl-Time-HiRes perl-IO-Socket-SSL perl-TermReadKey.x86_64 perl-Digest-MD5 -y

5) download percona-toolkit and install

# wget https://www.percona.com/downloads/percona-toolkit/2.2.19/RPM/percona-toolkit-2.2.19-1.noarch.rpm

# yum -y localinstall percona-toolkit-2.2.19-1.noarch.rpm

That covers the preparatory work of installing the tools we need; now let's get to the main topic:

Let's go

Option 1: xtrabackup full backup + binlog incremental backup

1. Backup

Create a backup directory

# mkdir -p /opt/mysqlbackup/{full,inc}

full: the directory for full backups; inc: the directory for incremental backups

1) full backup

Basic syntax: # innobackupex --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/

Execute the following command to make a full backup:

# innobackupex --user=root --password=123456 /opt/mysqlbackup/full

Note: --defaults-file=/etc/my.cnf specifies the MySQL configuration file my.cnf; if specified, it must be the first parameter.

/path/to/BACKUP-DIR/ specifies the destination directory where the backup is stored; the backup process creates a subdirectory named after the backup time to hold the backup files.

When the command finishes with a message like "completed OK!", the backup succeeded.

The backed-up files: during the backup, a directory named after the current date and time is created under the backup directory to hold the backup files:

Everyone knows what the database files themselves are; let's explain the other files one by one.

(1) xtrabackup_checkpoints - the backup type (full or incremental), the backup state (for example, whether it has been prepared) and the LSN (log sequence number) range.

Each InnoDB page (usually 16 KB) contains a log sequence number (LSN). The LSN is the version number of the entire database system, and the LSN associated with each page indicates how recently that page has changed.

(2) xtrabackup_binlog_info - the binary log file currently in use by the MySQL server and its position up to the moment of the backup.

(3) xtrabackup_binlog_pos_innodb - the binary log file and its position as seen by the InnoDB or XtraDB tables.

(4) xtrabackup_binary - the xtrabackup executable used for the backup.

(5) backup-my.cnf - the configuration options used by the backup command.
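For example, you can inspect the first of these files to see what was recorded for the backup (the time-named directory below is a placeholder for the one created on your system):

# cat /opt/mysqlbackup/full/<timestamp-dir>/xtrabackup_checkpoints

It lists fields such as backup_type, from_lsn, to_lsn and last_lsn.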

When backing up with innobackupex you can also use the --no-timestamp option to prevent the command from automatically creating a time-named directory; in that case innobackupex stores the backup data directly in the BACKUP-DIR directory you specify.
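For instance (the target directory name here is just an example and must not already exist):

# innobackupex --user=root --password=123456 --no-timestamp /opt/mysqlbackup/full/full_$(date +%F)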

Note: description of the relevant options:

--user specifies the user name used to connect to the database, --password specifies the password, and --defaults-file specifies the database configuration file, from which innobackupex obtains information such as datadir. --databases specifies the database(s) to back up; this restriction only applies to MyISAM tables, while the InnoDB data is always complete (InnoDB data from all databases is backed up, not just the specified database, and the same applies during recovery). /opt/mysqlbackup/full is the location where the backup files are stored.

Right now the database is rather empty and does not contain much data, so let's insert some data and then make our incremental backup.

After the insert completes, let's verify that the data was inserted successfully:

2) Incremental backup of the binary logs:

First check the binary log position recorded by the last backup.
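One way to do this is to read the xtrabackup_binlog_info file saved with the full backup, which records the binary log file name and position at the time of the backup (again, the time-named directory is a placeholder):

# cat /opt/mysqlbackup/full/<timestamp-dir>/xtrabackup_binlog_info

The position shown there (154 in this example) is then used as --start-position below.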

# mysqlbinlog --start-position=154 /usr/local/mysql/data/mysql_bin.000001 > /opt/mysqlbackup/inc/`date +%F`.sql

Let's see if the incremental backup is successful.

You can see that both the incremental and the full backups are stored under date-based names.

2. Restore: next, let's restore them to see whether the backup tool works and whether it is as magical as claimed; read on.

Simulate database corruption: {here we simply wipe out the data directory}

# rm -rf /usr/local/mysql/data/*

This shows that the MySQL data directory has been deleted; now let's verify whether the earlier backup can really restore the deleted content.

A quick aside: in general, a backup cannot be used to restore data as-is, because the backed-up data may contain transactions that have not yet been committed, or transactions that have been committed but not yet synchronized to the data files. Restoring it directly could therefore leave the data inconsistent, so the backup must first be prepared.

The --apply-log option of the innobackupex command is used to perform this preparation step.
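For example (the time-named backup directory is a placeholder for the one created on your system):

# innobackupex --apply-log /opt/mysqlbackup/full/<timestamp-dir>/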

So let's take a look at whether the data has really been restored.

Why has nothing been recovered yet?

Because --apply-log only prepares the full backup; it does not actually copy any data back. To restore the data into the data directory, run innobackupex again with the --copy-back option.
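For example (run against the same prepared backup directory; --copy-back reads datadir from the MySQL configuration file and requires the data directory to be empty):

# innobackupex --copy-back /opt/mysqlbackup/full/<timestamp-dir>/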

Let's check again to see if the data comes back:

From the results we can see that the database files have been restored; but there is still a defect: the restored files are owned by root rather than the mysql user, so we also need to change the owner and group back to mysql.

Restart the mysql service
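Concretely, the ownership fix and restart might look like this (the service name depends on how MySQL was installed; mysqld is assumed here):

# chown -R mysql:mysql /usr/local/mysql/data

# service mysqld restart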

If you enter the database, you can see that the previous full backup has been successfully restored, but the tables and contents created later have not been restored, so you need to restore the incremental backup.

Tip: to avoid generating a large amount of binary log during the restore, you can temporarily disable binary logging and enable it again after the restore succeeds.

mysql> set sql_log_bin=0; -- disable binary logging

To restore the incremental backup, replay the SQL file exported earlier with mysqlbinlog:

mysql> source /opt/mysqlbackup/inc/2017-06-28.sql

Re-enable binary logging when you are finished.
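That is, the counterpart of the earlier statement:

mysql> set sql_log_bin=1;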

Let's check again whether the previous tables and data have come back.

The incremental backup has restored the remaining data as well; the whole backup and restore is a success!
