The following explains how to perform hot backup and restore of MySQL with the xtrabackup tools. I hope the content proves practically useful, which is the main purpose of this article. Without further ado, let's get started.
3.1 Introduction to xtrabackup hot backup and recovery
Like cold backup, hot backup copies the physical data files directly, but it does so without taking the database offline, which makes it suitable for important core businesses that must run 24 hours a day without interruption. For the MySQL community edition, the official hot backup tool InnoDB Hot Backup is a paid product: it can only be trialed for 30 days, after which the commercial version must be purchased for permanent use.
Percona has released the xtrabackup hot backup tool, which, like the paid tool, supports online hot backup (the backup does not block reads and writes of data) and is a good alternative to the commercial InnoDB Hot Backup.
Xtrabackup is an open-source hot backup tool that can back up and restore MySQL databases very quickly. The Xtrabackup package contains two tools:
xtrabackup backs up data in InnoDB and XtraDB tables. It cannot back up other table types or the table definitions.
innobackupex is a Perl script that wraps xtrabackup and adds the ability to back up MyISAM tables. Because innobackupex is more feature-complete, it is the tool generally used for backups; note, however, that it needs a read lock while handling MyISAM tables. It also adds some useful options, for example --slave-info, which records the replication information a slave needs after backup and recovery, making it easy to rebuild a slave from the backup.
What can Xtrabackup do:
Online (hot) backup of the entire database's InnoDB and XtraDB tables
Incremental backups (InnoDB only) based on the last full backup made by xtrabackup
Generate the backup as a stream, which can be saved directly to a remote machine (useful when the local disk is short on space); see the sketch below
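For example, a full backup can be streamed straight to another machine over SSH without first being written to the local disk. This is only a minimal sketch: the remote host, target directory and password below are placeholders, not values from this article.
# Stream the whole backup in xbstream format and unpack it on a remote host
# (root@remote1 and /backup/remote_full are hypothetical; create the remote directory first)
innobackupex --defaults-file=/etc/my.cnf --user=root --password=xxxx \
  --stream=xbstream /tmp | ssh root@remote1 "xbstream -x -C /backup/remote_full"
# when --stream is used, the /tmp argument only serves as the local temporary directory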
The tools shipped with MySQL itself do not support true incremental backups; binary log recovery is point-in-time recovery, not an incremental backup. The xtrabackup tool does support incremental backups of the InnoDB storage engine. It works as follows:
(1) First complete a full backup and record the LSN (Log Sequence Number) of the checkpoint at that time.
(2) During an incremental backup, compare the LSN of each page in the tablespace with the LSN of the last backup; if it is greater, back up that page and record the LSN of the current checkpoint.
During a backup, xtrabackup first finds and records the last checkpoint ("last checkpoint LSN") in the log file, then starts copying the InnoDB log file from that position into xtrabackup_logfile; it then starts copying all the data files (.ibd), and it does not stop copying the log file until all data files have been copied.
Because all data modifications are recorded in the log file, even if a data file is modified during the backup, data consistency can still be restored by replaying xtrabackup_logfile during recovery.
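As a minimal sketch of this LSN-based mechanism (the password is a placeholder and the directory names simply follow the layout used later in this article), an incremental backup is taken against an existing full backup with --incremental and --incremental-basedir:
# Full backup first; it records the checkpoint LSN in xtrabackup_checkpoints
innobackupex --defaults-file=/etc/my.cnf --user=root --password=xxxx /backup/full_data/
# Incremental backup: only pages whose LSN is newer than the base backup's checkpoint are copied
innobackupex --defaults-file=/etc/my.cnf --user=root --password=xxxx \
  --incremental /backup/zl_data/ --incremental-basedir=/backup/full_data/BASE-DIR
Here BASE-DIR stands for the timestamped directory created by the full backup.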
3.2 Principle of Xtrabackup backup
XtraBackup is based on InnoDB's crash-recovery capability. It copies the InnoDB data files; because the tables are not locked, the copied data is inconsistent, and crash recovery is applied during the recovery phase to make the data consistent again.
InnoDB maintains a redo log, also called the transaction log, which contains every change made to InnoDB data. When InnoDB starts, it first checks the data files against the transaction log and performs two steps: it applies committed transactions recorded in the log to the data files, and it rolls back uncommitted transactions.
When XtraBackup performs a backup, it copies the InnoDB data page by page without locking the tables. At the same time, another XtraBackup thread monitors the transaction log, and whenever the log changes it copies the changed log pages away. Why the rush to copy them? Because the transaction log files have a fixed size; when they are full, writing wraps around to the beginning, so new data may overwrite old data.
During the prepare phase, XtraBackup uses the copied transaction log to run crash recovery against the backed-up InnoDB data files.
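A minimal sketch of this prepare phase (the directory name is a placeholder): --apply-log is the innobackupex wrapper around xtrabackup's --prepare and replays the copied transaction log so the backed-up files become consistent before they are copied back.
# Replay xtrabackup_logfile against the copied data files (crash recovery)
innobackupex --apply-log /backup/full_data/BASE-DIR
# A successful run ends with "completed OK!"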
3.3 Implementation details
XtraBackup opens the InnoDB data files in read-write mode and then copies them; in practice it does not modify the files. This means the user running XtraBackup must have read and write access to the InnoDB data files. Read-write mode is used because XtraBackup opens the files through its built-in InnoDB library, and that library opens files in rw mode.
XtraBackup copies large amounts of data from the file system, so it uses posix_fadvise() wherever possible to tell the OS not to cache the data it reads, which improves performance. Since the data will not be reused, caching it is pointless, and the OS is not smart enough to know that: caching several gigabytes of data would put heavy pressure on the OS's virtual memory, other processes such as mysqld would likely be swapped out, and the whole system would suffer.
While backing up InnoDB pages, XtraBackup reads and writes 1 MB of data at a time, i.e. 1 MB / 16 KB = 64 pages; this is not configurable. After reading each 1 MB chunk, XtraBackup walks through it page by page and uses InnoDB's buf_page_is_corrupted() function to check whether each page is intact. If a page is corrupted, it rereads the page up to 10 times; if it still fails, the backup aborts with an error. When copying the transaction log, it reads and writes 512 KB at a time, which is also not configurable.
3.4 Installation of the xtrabackup tools
1. Software download address: http://www.percona.com/downloads/
2. Download the binary package directly and install it. Do not compile from source; a source build has to be paired with the MySQL source package.
wget http://www.percona.com/downloads/XtraBackup/XtraBackup-2.1.9/binary/Linux/x86_64/percona-xtrabackup-2.1.9-744-Linux-x86_64.tar.gz
tar zxvf percona-xtrabackup-2.1.9-744-Linux-x86_64.tar.gz
[root@localhost bin]# cd /opt/percona-xtrabackup-2.1.9-Linux-x86_64/
[root@localhost percona-xtrabackup-2.1.9-Linux-x86_64]# ll bin/
total 114072
-rwxr-xr-x 1 root root 168821 May 2 2014 innobackupex
lrwxrwxrwx 1 root root 12 Apr 4 10:11 innobackupex-1.5.1 -> innobackupex
-rwxr-xr-x 1 root root 2226551 May 2 2014 xbcrypt
-rwxr-xr-x 1 root root 2300304 May 2 2014 xbstream
-rwxr-xr-x 1 root root 13178050 May 2 2014 xtrabackup
-rwxr-xr-x 1 root root 16598470 May 2 2014 xtrabackup_55
-rwxr-xr-x 1 root root 82321834 May 2 2014 xtrabackup_56
cp innobackupex-1.5.1 /usr/bin/innobackupex
cp xtrabackup_55 /usr/bin/xtrabackup
cp xtrabackup /usr/bin
Install the required Perl dependencies:
yum install perl-DBI -y
yum install perl-DBD-MySQL -y
yum install perl-Time-HiRes -y
yum install perl-IO-Socket-SSL -y
yum install perl-TermReadKey.x86_64 -y
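A quick sanity check that the copied binaries and the Perl dependencies are in place (not part of the original article, just a suggested verification):
# Confirm the binaries copied to /usr/bin are found on the PATH
which innobackupex xtrabackup
# innobackupex is a Perl script; printing its help also proves the Perl modules load
innobackupex --help | head -n 5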
3.5 Description of innobackupex parameters:
--defaults-file=/etc/my.cnf: specify the location of the my.cnf configuration file
--user=root: specify the user name used to connect to the database
--apply-log: a wrapper around xtrabackup's --prepare option
--copy-back: during recovery, copy the backup files back into the datadir of the MySQL server
--remote-host: store the backup data on a remote server via ssh
--stream: output the backup data to standard output in the specified stream format
--tmpdir: when --remote-host or --stream is specified, the transaction log needs to be stored temporarily on the local disk; by default this uses the MySQL server's configuration
--use-memory: used together with ibbackup; similar to xtrabackup's --use-memory parameter
--throttle=IOS: the same as xtrabackup's --throttle parameter
--sleep: for ibbackup; specifies how many seconds the process pauses after each 1 MB of data backed up, to reduce the impact on normal business. See the ibbackup manual for details.
--compress: compress the backed-up data. Only ibbackup supports this; xtrabackup has not implemented it yet.
--include=REGEXP: a wrapper around the corresponding xtrabackup parameter; also supported by ibbackup
--databases=LIST: list the databases to back up. If this parameter is not specified, all databases containing MyISAM and InnoDB tables will be backed up.
--password="*": the password of the MySQL user. The --password parameter can be omitted when the MySQL root password is empty.
--port=3306: specify the port MySQL listens on
--slave-info: record the replication information a slave needs after backup and recovery; with this information it is easy to rebuild a slave from the backup.
/mysql_backup: the backup destination directory
--socket=SOCKET: the location of the MySQL server's socket file
2> /mysql_backup/innobackupex.log: record the backup output to a log file
--databases="database name": back up a single database with this parameter; if it is omitted, all databases are backed up by default.
innobackupex --defaults-file=/data/mysqldata/3307/my.cnf --user=root --password=zywerj@11erwe --databases="cooolyisydb" /backup/full_data/
3.6 Example operation:
[root@localhost backup]# innobackupex --defaults-file=/data/mysqldata/3307/my.cnf --user=root --password=zywerj@11erwe --databases="cooolyisydb" /backup/full_data/
Can't locate Time/HiRes.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/bin/innobackupex line 23.
BEGIN failed--compilation aborted at /usr/bin/innobackupex line 23.
Solution to this error:
yum -y install perl-Time-HiRes
Reference: http://blog.csdn.net/lwei_998/article/details/40896321
1. Single-database backup and recovery process:
Create the directories where the backup data will be stored:
mkdir -p /backup/{full_data,dk_data,zl_data}
Note: full_data is the full-backup directory,
dk_data is the single-database backup directory,
zl_data is the incremental backup directory.
[root@localhost]# time innobackupex --defaults-file=/etc/my.cnf --user=root --password=Zy@634#R0ot --socket=/tmp/mysql.sock --port=3306 --databases="mahtu" --slave-info /backup/full_data/
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
170404 13:28:55 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/my.cnf;mysql_read_default_group=xtrabackup;port=3306;mysql_socket=/tmp/mysql.sock' as 'root' (using password: YES).
170404 13:28:55 innobackupex: Connected to MySQL server
170404 13:28:55 innobackupex: Executing a version check against the server...
170404 13:28:58 innobackupex: Done.
IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".
innobackupex: Using mysql server version 5.5.48
innobackupex: Created backup directory /backup/full_data/2017-04-04_13-28-58
170404 13:28:58 innobackupex: Starting ibbackup with command: xtrabackup_55 --defaults-file="/etc/my.cnf" --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/backup/full_data/2017-04-04_13-28-58 --tmpdir=/tmp
innobackupex: Waiting for ibbackup (pid=26398) to suspend
innobackupex: Suspend file '/backup/full_data/2017-04-04_13-28-58/xtrabackup_suspended_2'
xtrabackup_55 version 2.1.9 for Percona Server 5.5.35 Linux (x86_64) (revision id: 744)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /data/mysql/
xtrabackup: open files limit requested 65535, set to 65535
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = /usr/local/mysql/var
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = /usr/local/mysql/var
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 268435456
> log scanned up to (619371504)
[01] Copying /usr/local/mysql/var/ibdata1 to /backup/full_data/2017-04-04_13-28-58/ibdata1
> log scanned up to (619371504)
...
> log scanned up to (619371544)
[01] ...done
> log scanned up to (619371544)
xtrabackup: Creating suspend file '/backup/full_data/2017-04-04_13-28-58/xtrabackup_suspended_2' with pid '26398'
170404 13:29:13 innobackupex: Continuing after ibbackup has suspended
170404 13:29:13 innobackupex: Starting to lock all tables...
> log scanned up to (619371544)
170404 13:29:13 innobackupex: All tables locked and flushed to disk
170404 13:29:13 innobackupex: Starting to backup non-InnoDB tables and files
innobackupex: in subdirectories of '/data/mysql/'
innobackupex: Backing up files '/data/mysql//mahtu/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (747 files)
> log scanned up to (619371544)
...
> log scanned up to (619371544)
170404 13:29:19 innobackupex: Finished backing up non-InnoDB tables and files
170404 13:29:19 innobackupex: Waiting for log copying to finish
xtrabackup: The latest check point (for incremental): '619371544'
xtrabackup: Stopping log copying thread.
. > log scanned up to (619371544)
xtrabackup: Creating suspend file '/backup/full_data/2017-04-04_13-28-58/xtrabackup_log_copied' with pid '26398'
xtrabackup: Transaction log of lsn (619371135) to (619371544) was copied.
170404 13:29:20 innobackupex: All tables unlocked
innobackupex: Backup created in directory '/backup/full_data/2017-04-04_13-28-58'
innobackupex: MySQL slave binlog position: master host '192.168.10.20', filename 'mysql-bin.000001', position 35593612
170404 13:29:20 innobackupex: Connection to database server closed
170404 13:29:20 innobackupex: completed OK!
real 0m25.738s
user 0m2.765s
sys 0m3.927s
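After the run prints "completed OK!", it can be worth inspecting the metadata files written into the backup directory; xtrabackup_checkpoints records the backup type and LSN range, and xtrabackup_slave_info holds the replication coordinates recorded by --slave-info. This is a suggested check, not shown in the original output:
# Backup type and LSN range (backup_type, from_lsn, to_lsn, ...)
cat /backup/full_data/2017-04-04_13-28-58/xtrabackup_checkpoints
# Replication coordinates captured because --slave-info was used
cat /backup/full_data/2017-04-04_13-28-58/xtrabackup_slave_info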
Error encountered when restoring the backup data:
[root@localhost 2017-04-04_16-04-46]# innobackupex --defaults-file=/etc/my.cnf --copy-back /backup/dk_data/2017-04-04_16-04-46/
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
IMPORTANT: Please check that the copy-back run completes successfully.
At the end of a successful copy-back run innobackupex
prints "completed OK!".
innobackupex: Error: Original data directory '/data/mysql/' is not empty! at /usr/bin/innobackupex line 2113.
[root@localhost mysql]# ps -ef | grep "/data/mysql/" | grep -v grep
root 28100 1 0 14:01 pts/1 00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/data/mysql/ --pid-file=/data/mysql//localhost.localdomain.pid
mysql 28598 28100 29 14:01 pts/1 01:21:31 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/data/mysql/ --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data/mysql//localhost.localdomain.err --open-files-limit=65535 --pid-file=/data/mysql//localhost.localdomain.pid --socket=/tmp/mysql.sock --port=3306
The reason is that when xtrabackup restores a MySQL backup, the target MySQL environment must be completely fresh, with no existing data.
The data directory here is /data/mysql. Before restoring, all files in that directory must be removed, and the file /usr/local/mysql/var/ibdata1 must also be moved to another location; only then can the backed-up database be restored normally.
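Putting the whole restore together, a typical sequence looks like the sketch below. The shutdown/startup commands are assumptions based on the mysqld_safe installation shown in the ps output above; the backup directory is the one used in this example.
# 1. Stop the running mysqld (adjust to how MySQL is managed on this host)
/usr/local/mysql/bin/mysqladmin -uroot -p shutdown
# 2. Empty the data directory (keep the old copy) and move the old ibdata1 aside
mv /data/mysql /data/mysql_old && mkdir /data/mysql
mv /usr/local/mysql/var/ibdata1 /usr/local/mysql/var/ibdata1.bak
# 3. Prepare the backup, then copy it back into the now-empty datadir
innobackupex --apply-log /backup/dk_data/2017-04-04_16-04-46/
innobackupex --defaults-file=/etc/my.cnf --copy-back /backup/dk_data/2017-04-04_16-04-46/
# 4. Fix ownership and start MySQL again
chown -R mysql:mysql /data/mysql
/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &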
The following restores the backed-up database cooolyisydb:
[root@localhost var]# time innobackupex --defaults-file=/etc/my.cnf --copy-back /backup/dk_data/2017-04-04_16-04-46/
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
IMPORTANT: Please check that the copy-back run completes successfully.
At the end of a successful copy-back run innobackupex
prints "completed OK!".
innobackupex: Starting to copy files in '/backup/dk_data/2017-04-04_16-04-46'
innobackupex: back to original data directory '/data/mysql/'
innobackupex: Copying '/backup/dk_data/2017-04-04_16-04-46/xtrabackup_slave_info' to '/data/mysql/xtrabackup_slave_info'
innobackupex: Creating directory '/data/mysql/cooolyisydb'
...
innobackupex: Starting to copy InnoDB log files
innobackupex: in '/backup/dk_data/2017-04-04_16-04-46'
innobackupex: back to original InnoDB log directory '/usr/local/mysql/var'
innobackupex: Finished copying back files.
170404 17:03:35 innobackupex: completed OK!
real 5m25.809s
user 0m0.303s
sys 0m13.316s
[root@localhost ~]# time mysqldump -uroot -p -B mahtu > test_mahtu.sql
Enter password:
real 0m36.932s
user 0m10.967s
sys 0m1.337s
It is concluded that innobackupex backup is faster than mysqldump backup.
3.7 Compressed backup
Xbstream Packaging:
[root@localhost bin]# innobackupex --defaults-file=/etc/my.cnf --user=root --password=Zy@66#erot --stream=xbstream --databases="cooolyisydb" /backup/full_data/ > /backup/full_data/backup.xbstream
--compress: add this parameter to compress the data inside the stream:
[root@localhost full_data]# innobackupex --defaults-file=/etc/my.cnf --user=root --password=Zy@66##erot --stream=xbstream --compress --databases="cooolyisydb" /backup/full_data/ > /backup/full_data/backup.xbstream
Tar compression method:
innobackupex --defaults-file=/data/3307/my.cnf --user=system --password=test3306 --databases="oldboy xtra_test" --stream=tar /backup/data/yasuo/ | gzip > /backup/data/yasuo/test.dir_$(date +%F).tar.gz
[root@localhost full_data]# innobackupex --defaults-file=/etc/my.cnf --user=root --password=Zy@66##erot --stream=tar --databases="cooolyisydb" /backup/full_data/ | gzip > /backup/full_data/cooolyisydb.$(date +%F).tar.gz
3.8 How to decompress the backup:
xbstream decompression method:
[root@db02 yasuo]# xbstream -x
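The command above is truncated in the original. As a minimal sketch (file names follow the examples in section 3.7; the restore directories are placeholders), the streamed backups can be unpacked like this:
# Unpack an xbstream backup into an empty directory
mkdir -p /backup/restore_xbstream
xbstream -x -C /backup/restore_xbstream < /backup/full_data/backup.xbstream
# Unpack a tar+gzip stream; tar needs -i for archives produced by innobackupex --stream=tar
mkdir -p /backup/restore_tar
tar -izxvf /backup/full_data/cooolyisydb.$(date +%F).tar.gz -C /backup/restore_tar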