2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article introduces how to back up and restore large amounts of data in MySQL, with simple, practical steps you can follow along with.
Backing up and restoring a large amount of data is always difficult. Once a MySQL data set exceeds 10 GB, exporting with mysqldump becomes slow. Xtrabackup is recommended here; it is much faster than mysqldump.
This article also refers to: http://blog.chinaunix.net/uid-20682026-id-3319204.html
Xtrabackup backup recovery test
I. Introduction to Xtrabackup
1. What is Xtrabackup
Xtrabackup is a data backup tool for InnoDB that supports online hot backup (the backup does not block reads or writes). It is a good substitute for the commercial backup tool InnoDB Hotbackup.
Xtrabackup has two main tools: xtrabackup and innobackupex
1. xtrabackup can only back up InnoDB and XtraDB tables, not MyISAM tables.
2. innobackupex is a Perl wrapper around xtrabackup, modeled on InnoDB Hotbackup's innoback script. Its main purpose is to make it convenient to back up InnoDB and MyISAM tables at the same time, although it must take a read lock while handling MyISAM tables. It also adds some options; for example, --slave-info records the replication coordinates after a backup, which makes it easy to use the backup to rebuild a slave.
2. What can Xtrabackup do:
Online (hot) backup of the InnoDB and XtraDB tables of the entire library
Make an incremental backup (InnoDB only) based on the last full backup made by xtrabackup
Generate a backup as a stream, which can be saved directly to a remote machine (useful when the local hard disk is out of space)
The tools provided by the MySQL database itself do not support true incremental backups, and binary log recovery is a point-in-time (point-in-time) recovery rather than an incremental backup.
The Xtrabackup tool supports incremental backups of the InnoDB storage engine, which works as follows:
(1) First complete a full backup and record the LSN (Log Sequence Number) of the checkpoint at that moment.
(2) During an incremental backup, compare the LSN of each page in the tablespace with the LSN recorded by the last backup; if it is greater, back up that page and record the LSN of the current checkpoint.
Concretely: find and record the last checkpoint LSN in the InnoDB log file, then start copying the InnoDB log from that position into xtrabackup_logfile; at the same time, start copying all the data files (.ibd); do not stop copying the log until all data files have been copied.
Because every data modification is recorded in the log, even if a data file is modified during the backup, consistency can still be restored by replaying xtrabackup_logfile during recovery.
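To make the LSN bookkeeping concrete, here is a minimal sketch (the function name and the /tmp path are illustrative, not part of xtrabackup) that extracts the recorded checkpoint LSN from a backup directory's xtrabackup_checkpoints file, the same file shown later in this article:

```shell
# Sketch: read the to_lsn recorded in a backup's xtrabackup_checkpoints file.
read_to_lsn() {
    # lines look like: "to_lsn = 23853959"
    awk -F' = ' '$1 == "to_lsn" { print $2 }' "$1/xtrabackup_checkpoints"
}

# Demonstration with a fabricated checkpoints file (path is illustrative):
mkdir -p /tmp/xb_demo
cat > /tmp/xb_demo/xtrabackup_checkpoints <<'EOF'
backup_type = full-prepared
from_lsn = 0
to_lsn = 23853959
last_lsn = 23853959
EOF
read_to_lsn /tmp/xb_demo   # prints 23853959
```

An incremental backup only needs this one number from the previous backup: every page whose LSN is greater than it gets copied.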
Because innobackupex supports both InnoDB and MyISAM, this article focuses on how to use innobackupex.
II. Install Xtrabackup
1. Download address
http://www.percona.com/downloads/XtraBackup/
2. Installation
Choose the version that fits your needs; I chose the rpm package. If the following error is reported:
The code is as follows:
[root@localhost xtrabackup]# rpm -ivh percona-xtrabackup-2.2.4-5004.el6.x86_64.rpm
warning: percona-xtrabackup-2.2.4-5004.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
error: Failed dependencies:
    perl(Time::HiRes) is needed by percona-xtrabackup-2.2.4-5004.el6.x86_64
Solution:
The code is as follows:
[root@localhost xtrabackup]# yum -y install perl perl-devel libaio libaio-devel perl-Time-HiRes perl-DBD-MySQL   # install dependency packages
[root@localhost xtrabackup]# rpm -ivh percona-xtrabackup-2.2.4-5004.el6.x86_64.rpm   # reinstall
warning: percona-xtrabackup-2.2.4-5004.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
Preparing...                ########################################### [100%]
   1:percona-xtrabackup     ########################################### [100%]
Note:
Xtrabackup 2.2.4 does not support MySQL 5.1.73, so if you want to use the newer xtrabackup, use a higher version of MySQL; otherwise the following error is reported:
innobackupex: Error: Unsupported server version: '5.1.73'
It is convenient to download the precompiled binary tarball. If you unpack it and use it directly, you need to do the following:
tar zxvf xtrabackup-1.6.tar.gz
cp xtrabackup-1.6/bin/innobackupex /usr/bin/innobackupex
cp xtrabackup-1.6/bin/xtrabackup /usr/bin/xtrabackup
cp xtrabackup-1.6/bin/xtrabackup_51 /usr/bin/xtrabackup_51
Note that the 51 in xtrabackup_51 refers to MySQL version 5.1; copy the binary that matches your MySQL version.
3. Modify my.cnf
Check whether there is a datadir entry under [mysqld]; if not, add it:
The code is as follows:
[mysqld]
datadir=/var/lib/mysql   # this is the MySQL data directory; it differs by installation (mine is /usr/local/mysql/data)
If not, the following problems may occur when restoring data:
The code is as follows:
xtrabackup: Error: Please set parameter 'datadir'
innobackupex: fatal error: no 'mysqld' group in MySQL options
innobackupex: fatal error: OR no 'datadir' option in group 'mysqld' in MySQL options
If the following error message is reported when entering innobackupex-related commands:
Can't locate Time/HiRes.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/local/bin/pt-query-digest line 3187.
BEGIN failed--compilation aborted at /usr/local/bin/pt-query-digest line 3187.
Solution:
.pm files are Perl modules; you only need to install the following package:
yum install -y perl-Time-HiRes
III. Backup and restore of all databases
1. Backup all databases
The code is as follows:
# innobackupex --defaults-file=/etc/my.cnf --user=root /home/tank/backup
If your MySQL account has a password, you must add --password='password'.
If the following error is reported:
The code is as follows:
InnoDB: Error: log file ./ib_logfile0 is of different size 5242880 bytes
InnoDB: than specified in the .cnf file 50331648 bytes!
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2679.
Solution: add the following under [mysqld] in my.cnf:
The code is as follows:
[mysqld]
innodb_log_file_size = 5M
2. Single database backup
The code is as follows:
# innobackupex --defaults-file=/etc/my.cnf --user=root --database=backup_test /home/tank/backup/
Verified: backing up a single library works fine. The drawback is that the data directory must be empty when restoring. In other words, during the restore you first rename the existing data directory to a temporary directory; after the restore, you copy the other libraries from the original data directory (now the temporary directory) back into the data directory.
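That directory shuffle can be sketched as follows. Everything here is illustrative, not part of innobackupex: restore_shuffle is a hypothetical helper, and the innobackupex step is left as a comment.

```shell
# Sketch of the single-database restore shuffle described above.
# restore_shuffle is a hypothetical helper; paths are illustrative.
restore_shuffle() {
    datadir=$1      # the MySQL data directory
    restored_db=$2  # the library that the backup will restore
    old="${datadir}_old"

    mv "$datadir" "$old"   # set the existing data directory aside
    mkdir -p "$datadir"    # the restore target must be empty
    # ... here you would run: innobackupex --copy-back <prepared backup> ...
    # then copy every other library back from the old data directory:
    for d in "$old"/*/; do
        name=$(basename "$d")
        [ "$name" = "$restored_db" ] && continue
        cp -a "$d" "$datadir/$name"
    done
}
```

Run this kind of procedure only with mysqld stopped, and fix ownership afterwards with chown -R mysql.mysql as in the full-restore steps below.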
3. Backup and compress the database
The code is as follows:
# innobackupex --defaults-file=/etc/my.cnf --user=root --stream=tar /home/tank/backup | gzip > /home/tank/backup/`date +%F_%H-%M-%S`.tar.gz
4. Full data restore
The code is as follows:
# /etc/init.d/mysqld stop                 # stop mysql
# mv /var/lib/mysql /var/lib/mysql_bak    # set the data directory aside
# mkdir -p /var/lib/mysql                 # rebuild the data directory
# the --apply-log step prepares the backup so the mysql service can be started on it
# innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log /home/tank/backup/2014-09-18_16-35-12
# the --copy-back step copies data, indexes and logs from the backup directory back to the location specified in my.cnf
# innobackupex --defaults-file=/etc/my.cnf --user=root --copy-back /home/tank/backup/2014-09-18_16-35-12
# chown -R mysql.mysql /var/lib/mysql     # fix file ownership
# /etc/init.d/mysqld start                # start mysql
IV. Incremental backup and restore
1. Create test databases and tables
The code is as follows:
create database backup_test;   -- create a database
CREATE TABLE `backup` (        -- create a table
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL DEFAULT '',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `del` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
2. Incremental backup
The code is as follows:
mysql> INSERT INTO backup (name) VALUES ('tank'), ('tank1');   -- insert data
# innobackupex --defaults-file=/etc/my.cnf --user=root --incremental-basedir=/home/tank/backup/2014-09-18_16-35-12 --incremental /home/tank/backup/
3. Make another incremental backup on top of the first incremental
The code is as follows:
mysql> INSERT INTO backup (name) VALUES ('tank2'), ('tank3'), ('tank4'), ('tank5'), ('tank6');   -- insert data
# 2014-09-18_18-05-20 is the directory generated by the previous incremental backup
# innobackupex --defaults-file=/etc/my.cnf --user=root --incremental-basedir=/home/tank/backup/2014-09-18_18-05-20 --incremental /home/tank/backup/
View the incremental backup record file:
The code is as follows:
[root@localhost 2014-09-18_16-35-12]# cat xtrabackup_checkpoints   # file in the full backup directory
backup_type = full-prepared
from_lsn = 0          # the full backup starts from 0
to_lsn = 23853959
last_lsn = 23853959
compact = 0
[root@localhost 2014-09-18_18-05-20]# cat xtrabackup_checkpoints   # file in the first incremental backup directory
backup_type = incremental
from_lsn = 23853959
to_lsn = 23854112
last_lsn = 23854112
compact = 0
[root@localhost 2014-09-18_18-11-43]# cat xtrabackup_checkpoints   # file in the second incremental backup directory
backup_type = incremental
from_lsn = 23854112
to_lsn = 23854712
last_lsn = 23854712
compact = 0
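As these files show, each incremental's from_lsn must equal the previous backup's to_lsn. A small sketch (the function names and /tmp paths are illustrative) that checks an incremental chain for contiguity:

```shell
# Sketch: verify that an incremental chain is contiguous by comparing each
# backup's from_lsn with the previous backup's to_lsn.
lsn_field() {
    awk -F' = ' -v k="$2" '$1 == k { print $2 }' "$1/xtrabackup_checkpoints"
}

check_chain() {
    # arguments: full backup dir, then the incremental dirs in order
    prev_to=$(lsn_field "$1" to_lsn); shift
    for dir in "$@"; do
        from=$(lsn_field "$dir" from_lsn)
        [ "$from" = "$prev_to" ] || { echo "broken at $dir"; return 1; }
        prev_to=$(lsn_field "$dir" to_lsn)
    done
    echo "chain ok"
}

# Demonstration with fabricated checkpoints files using the article's LSNs:
mkdir -p /tmp/xb_chain/full /tmp/xb_chain/inc1 /tmp/xb_chain/inc2
printf 'from_lsn = 0\nto_lsn = 23853959\n' > /tmp/xb_chain/full/xtrabackup_checkpoints
printf 'from_lsn = 23853959\nto_lsn = 23854112\n' > /tmp/xb_chain/inc1/xtrabackup_checkpoints
printf 'from_lsn = 23854112\nto_lsn = 23854712\n' > /tmp/xb_chain/inc2/xtrabackup_checkpoints
check_chain /tmp/xb_chain/full /tmp/xb_chain/inc1 /tmp/xb_chain/inc2   # prints "chain ok"
```

A check like this is worth running before applying incrementals: applying them out of order, or with one missing, breaks the chain.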
After the incremental backups are done, drop the backup_test database (drop database backup_test;) so that you can compare the data after the restore.
4. Incremental restore
The code is as follows:
# innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log --redo-only /home/tank/backup/2014-09-18_16-35-12
# innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log --redo-only /home/tank/backup/2014-09-18_16-35-12 --incremental-dir=/home/tank/backup/2014-09-18_18-05-20
# innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log --redo-only /home/tank/backup/2014-09-18_16-35-12 --incremental-dir=/home/tank/backup/2014-09-18_18-11-43
There are three directories here:
1) /home/tank/backup/2014-09-18_16-35-12, the full backup directory
2) /home/tank/backup/2014-09-18_18-05-20, the directory generated by the first incremental backup
3) /home/tank/backup/2014-09-18_18-11-43, the directory generated by the second incremental backup
At this point the incremental restore is not finished; the most important step remains: perform a full restore. Stop the database, remove /var/lib/mysql, and run the full restore steps (apply-log, then copy-back).
The principle of incremental restore is to merge the data in the incremental directories into the full backup directory, and then restore from that merged full backup.
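The whole sequence can be sketched as a command generator. This is only a sketch: print_restore_plan is a hypothetical helper, the directories are this article's examples, and the commands are echoed rather than executed; the final two commands repeat the full-restore prepare and copy-back from the earlier section.

```shell
# Sketch: print the innobackupex command sequence for an incremental restore.
# print_restore_plan is a hypothetical helper; commands are echoed, not run.
print_restore_plan() {
    base=$1; shift
    opts="--defaults-file=/etc/my.cnf --user=root"
    # 1. Prepare the full backup, replaying the redo log only.
    echo "innobackupex $opts --apply-log --redo-only $base"
    # 2. Merge each incremental into the full backup, in order.
    for inc in "$@"; do
        echo "innobackupex $opts --apply-log --redo-only $base --incremental-dir=$inc"
    done
    # 3. Finish the prepare and copy the merged backup back to the datadir.
    echo "innobackupex $opts --apply-log $base"
    echo "innobackupex $opts --copy-back $base"
}

print_restore_plan /home/tank/backup/2014-09-18_16-35-12 \
    /home/tank/backup/2014-09-18_18-05-20 \
    /home/tank/backup/2014-09-18_18-11-43
```

Printing the plan first and then running it line by line makes it harder to skip an incremental or apply them in the wrong order.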
In general, innobackupex is fast and supports both InnoDB and MyISAM, but it is not very convenient to use: having to restore all databases, both for single-database restores and for incremental restores, is unreasonable.
You can write a backup script here (backup.sh):
#!/bin/sh
echo "start backup." `date`
log=zztx01_`date +%y%m%d%H%M`.log
str=zztx01_`date +%y%m%d%H%M`.tar.gz
innobackupex --user=root --password=123456 --defaults-file=/etc/my.cnf --database=zztx --stream=tar /data/back_data/ 2>/data/back_data/$log | gzip 1>/data/back_data/$str
echo "backup completed..." `date`
Description:
--database=zztx backs up only the zztx database; without this parameter the whole server is backed up.
2>/data/back_data/$log writes the diagnostic output to the log file
1>/data/back_data/$str writes the packed, compressed backup to this file
Description of common innobackupex parameters
--defaults-file
Same as the xtrabackup --defaults-file parameter
--apply-log
A wrapper around the xtrabackup --prepare parameter
--copy-back
Copies the backed-up data files to the MySQL server's datadir during a restore
--remote-host=HOSTNAME
Stores the backup data on a remote server via ssh
--stream=[tar]
Backup output format; tar streams are produced with tar4ibd, which ships in the XtraBackup binary package. If --stream=tar is specified at backup time, the directory containing tar4ibd must be in $PATH (tar4ibd is also used for unpacking).
When backing up with --stream=tar, xtrabackup_logfile may be placed temporarily in /tmp; if writes are heavy during the backup, this file can grow very large (5 GB+) and fill /tmp. Use the --tmpdir parameter to point it at another directory.
--tmpdir=DIRECTORY
When --remote-host or --stream is specified, the directory where the transaction log is temporarily stored; it defaults to the tmpdir specified in the MySQL configuration file
--redo-only (used with --apply-log)
Only replays the redo log, skipping the rollback of uncommitted transactions. This is necessary when applying incremental backups.
--use-memory=#
Used during the prepare phase to control how much memory the embedded InnoDB instance uses
--throttle=IOS
Same as the xtrabackup --throttle parameter
--sleep=
Used by ibbackup; specifies how many milliseconds the process pauses after each 1 MB of data copied, again to minimize the impact on normal business during the backup. See the ibbackup manual for details.
--compress[=LEVEL]
Compresses the backup data; supported by ibbackup, but not yet implemented for xtrabackup
--include=REGEXP
A wrapper around the xtrabackup --tables parameter, also supported by ibbackup. Specifies which library tables to include; for example, --include="test.*" backs up all tables in the test library. For a full backup, omit this parameter. To back up only the test1 and test2 tables under the test library, write --include="test.test1|test.test2". Wildcards are also possible, such as --include="test.test*".
--databases=LIST
Lists the databases to back up. If this parameter is not specified, all databases containing MyISAM or InnoDB tables are backed up
--uncompress
Decompresses the backup data files; supported by ibbackup, but not yet implemented for xtrabackup
--slave-info
When backing up a slave, adding --slave-info generates an extra file, xtrabackup_slave_info, in the backup directory, which saves the master log file and offset. The file content looks like: CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS=0
--socket=SOCKET
Specifies the location of mysql.sock so that the backup process can log in to mysql.
That concludes the study of how to back up and restore big data in MySQL. I hope it resolves your doubts; combining theory with practice is the best way to learn, so go and try it!