This article describes several methods for backing up a MySQL database. It is shared for your reference; the details are as follows:
First, use mysqldump for backup
1. Complete backup of all databases
mysqldump -u root -p --all-databases > E:/all.sql
Before MySQL 8, stored procedures and events were stored in the mysql.proc and mysql.event tables.
Starting with MySQL 8, their definitions live in the data dictionary, and those tables are not dumped.
To include stored procedures and events, use the following statement:
mysqldump -u root -p --all-databases --routines --events > E:/all.sql
2. Point-in-time recovery
To allow point-in-time recovery, specify --single-transaction and --master-data.
--single-transaction sets the transaction isolation level to REPEATABLE READ and issues START TRANSACTION before backing up, which provides a consistent backup.
--master-data writes the server's binary log coordinates to the dump file.
mysqldump -u root -p --all-databases --routines --events --single-transaction --master-data > E:/all.sql
--master-data=2 records the binlog file and position of the current server during the export and writes the line to the dump file as a comment.
--master-data=1 records the binlog file and position of the current server during the export and writes the line to the dump file without commenting it out.
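For reference, these coordinates appear near the top of the dump as a CHANGE MASTER TO statement; with --master-data=2 the line is commented out, roughly like the following (the file name and position here are invented examples):
-- CHANGE MASTER TO MASTER_LOG_FILE='binlog.000012', MASTER_LOG_POS=154;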
3. Record the master's binary log position when exporting from a replica
mysqldump -u root -p --all-databases --routines --events --single-transaction --dump-slave > E:/all.sql
--dump-slave=2 records the master's binlog file and position during the export and writes the line to the dump file as a comment.
--dump-slave=1 records the master's binlog file and position during the export and writes the line to the dump file without commenting it out.
4. Export specific databases and tables
mysqldump -u root -p --databases db_name > E:/bak.sql
mysqldump -u root -p --databases db_name --tables table_name > E:/bak.sql
5. Ignore tables
mysqldump -u root -p --databases db_name --ignore-table=db_name.table_name > E:/bak.sql
6. Export specific rows
mysqldump -u root -p --databases db_name --tables table_name --where="condition" > E:/bak.sql
Or use LIMIT to restrict the result set:
mysqldump -u root -p --databases db_name --tables table_name --where="condition LIMIT n" > E:/bak.sql
7. Export from a remote server
mysqldump -u root -p -h host_ip --all-databases --routines --events --triggers > E:/all.sql
8. Back up data for merging into another server
mysqldump -u root -p --databases db_name --skip-add-drop-table --replace > E:/bak.sql
--skip-add-drop-table: DROP TABLE statements are not written to the dump file.
--replace: data is exported with REPLACE INTO statements instead of INSERT statements.
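As a rough illustration, the row data in such a dump is written as REPLACE statements, so importing it into a server that already has rows with the same primary keys overwrites them instead of raising duplicate-key errors (the table name and values below are invented):
REPLACE INTO `t1` VALUES (1,'a'),(2,'b');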
Second, use mysqlpump for backup
1. Speed up the backup by processing in parallel, specifying the number of threads
mysqlpump --default-parallelism=8 > E:/all.sql
2. You can also specify the number of threads per database
mysqlpump -u root -p --parallel-schemas=4:db_name --default-parallelism=2 > E:/all.sql
3. Exclude or include databases
mysqlpump -u root -p --include-databases=%t > E:/bak.sql
This backs up all databases whose names end in t. Multiple databases are separated by commas, and database names can use the % or _ wildcards.
There are also similar options: --include-events, --include-routines, --include-tables, --include-triggers, --include-users, and so on.
mysqlpump -u root -p --exclude-databases=a% > E:/bak.sql
This excludes databases whose names start with a from the backup. Multiple databases are separated by commas, and database names can use the % or _ wildcards.
There are also similar options: --exclude-events, --exclude-routines, --exclude-tables, --exclude-triggers, --exclude-users, and so on.
4. Back up users
mysqlpump -u root -p --exclude-databases=% --users > E:/user.sql
You can exclude specific users with --exclude-users:
mysqlpump --exclude-databases=% --exclude-users=root --users > E:/user.sql
5. Compress backup
Use --compress-output=lz4 or --compress-output=zlib:
mysqlpump -u root -p --compress-output=lz4 > E:/all.lz4
mysqlpump -u root -p --compress-output=zlib > E:/all.zlib
Decompress with the following commands:
lz4_decompress E:/all.lz4 all.sql
zlib_decompress E:/all.zlib all.sql
Third, use mydumper for backup
Mydumper needs to be installed separately. Official website: https://github.com/maxbube/mydumper/releases
1. Full backup
mydumper -u root --password=password --outputdir=export_path
2. Back up separate tables
mydumper -u root --password=password -B db_name -T table_name --triggers --events --routines --outputdir=export_path
3. Use regular expressions to back up specific databases
mydumper -u root --password=password --regex '^(?!(mysql|test))' --outputdir=export_path
This excludes the mysql and test databases from the backup.
4. Back up large tables
mydumper -u root --password=password -B db_name -T table_name --triggers --events --routines --rows=100000 -t 8 --trx-consistency-only --outputdir=export_path
--rows splits the table into chunks of the given number of rows
--trx-consistency-only minimizes locking (for InnoDB tables)
-t specifies the number of threads
5. Compress backup
mydumper -u root --password=password -B db_name -T table_name -t 8 --trx-consistency-only --compress --outputdir=export_path
6. Back up data only
Skip the schema and back up only the data with the --no-schemas option:
mydumper -u root --password=password -B db_name -T table_name -t 8 --no-schemas --compress --trx-consistency-only --outputdir=export_path
Fourth, use ordinary files for backup
You can back up by copying the files in the data directory directly: shut down MySQL, copy the files, and then start MySQL again.
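A minimal sketch of such a cold backup on a Linux server might look like the following; the service name and paths are assumptions and will differ between installations:
systemctl stop mysqld                               # stop the server so the files are consistent
cp -a /var/lib/mysql /backup/mysql-data-$(date +%F) # copy the entire data directory
systemctl start mysqld                              # bring the server back up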
Fifth, use xtrabackup for backup
https://www.percona.com/downloads/XtraBackup/LATEST/
1. Full backup
xtrabackup --defaults-file=/etc/my.cnf --host=host_ip --user=username --password=password --port=port --backup --parallel=3 --target-dir=backup_dir
--defaults-file: the database configuration file
--backup: perform the backup operation
--parallel: the number of concurrent threads used during the backup
--target-dir: the directory for the backup files
2. Incremental backup
xtrabackup --defaults-file=/etc/my.cnf \
--host=host_ip \
--user=username \
--password=password \
--port=3306 \
--backup \
--parallel=3 \
--target-dir=incremental_backup_dir \
--incremental-basedir=full_backup_dir
An incremental backup is based on a full backup; --incremental-basedir points to the full backup directory.
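Later incrementals can be chained by pointing --incremental-basedir at the previous incremental backup instead of the full one; a minimal sketch, with example directory names:
xtrabackup --defaults-file=/etc/my.cnf --user=username --password=password --backup --target-dir=/backup/inc2 --incremental-basedir=/backup/inc1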
That is all the content of this article on how to back up a MySQL database. Thank you for reading, and I hope it helps!