MySQL Backup and Restore, Part One

2025-01-16 Update From: SLTechnology News&Howtos

This article discusses MySQL's backup and recovery mechanisms and how to maintain data tables, covering the two main table types: MyISAM and InnoDB. The MySQL version discussed is 5.0.22.

At present, the free backup tools that MySQL supports are mysqldump and mysqlhotcopy. You can also back up with SQL syntax (BACKUP TABLE or SELECT INTO OUTFILE), back up the binary log (binlog), or directly copy the data files and related configuration files. Each MyISAM table is stored as its own set of files, so it is relatively easy to back up, and all of the methods above work. InnoDB stores all tables in the same data file, ibdata1 (which may actually be multiple files, or separate tablespace files), which makes backup relatively difficult. The free options are to copy the data files, back up the binlog, or use mysqldump.

1. mysqldump

1.1 Backup

mysqldump uses a SQL-level backup mechanism: it exports data tables as SQL script files. It is well suited to upgrades between different MySQL versions, and it is also the most commonly used backup method.

Now let's talk about some of the main parameters of mysqldump:

--compatible=name

Tells mysqldump which database, or which older version of the MySQL server, the exported data should be compatible with. Possible values include ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_tables_options, no_field_options, and so on. Multiple values can be given, separated by commas. Of course, full compatibility is not guaranteed, only a best effort.

--complete-insert, -c

The exported data uses complete INSERT statements that include the field names. This can improve insertion efficiency, but it may be affected by the max_allowed_packet parameter and cause the insert to fail. You therefore need to use this parameter with caution; at least, I don't recommend it.

--default-character-set=charset

Specifies which character set to use when exporting data. If a data table does not use the default latin1 character set, this option must be specified when exporting; otherwise the data will be garbled when it is imported again.

--disable-keys

Tells mysqldump to add /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements before and after the INSERT statements. This greatly speeds up the inserts, because the indexes are rebuilt only after all the data has been inserted. This option applies only to MyISAM tables.

--extended-insert=true|false

By default, mysqldump runs in --extended-insert mode (multiple rows per INSERT statement); if you don't want that, use this option and set its value to false.

--hex-blob

Export binary string fields in hexadecimal format. You must use this option if you have binary data. The affected field types are BINARY, VARBINARY, and BLOB.

--lock-all-tables, -x

Before the export starts, submit a request to lock all tables in all databases, to ensure data consistency. This is a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.

--lock-tables

Similar to --lock-all-tables, except that it locks the tables of the database currently being exported, rather than locking all tables in all databases at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.

--no-create-info, -t

Export only the data, without adding CREATE TABLE statements.

--no-data, -d

Export no data, only the database table structure.

--opt

This is just a shortcut option, equivalent to adding --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export data quickly and produces a dump that can be imported back quickly. This option is on by default and can be disabled with --skip-opt. Note that if you run mysqldump without the --quick or --opt option, the entire result set is held in memory, which can cause problems when exporting a large database.

--quick, -q

This option is useful when exporting large tables. It forces mysqldump to fetch rows from the server and write them out one at a time, instead of retrieving the whole result set and buffering it in memory first.

--routines, -R

Export stored procedures and user-defined functions.

--single-transaction

This option issues a BEGIN statement before exporting the data. BEGIN does not block any application and ensures a consistent state of the database at the time of the export. It applies only to transactional tables, such as InnoDB and BDB.

This option and the --lock-tables option are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction.

When exporting a large table, combine it with the --quick option.

--triggers

Export triggers as well. This option is enabled by default; disable it with --skip-triggers.

For details of the other parameters, please refer to the manual. I usually use the following command to back up MyISAM tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false --triggers -R --hex-blob -x db_name > db_name.sql

Use the following command to back up InnoDB tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false --triggers -R --hex-blob --single-transaction db_name > db_name.sql

In addition, if you want an online backup, you can also use the --master-data parameter, as follows:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --master-data=1 --single-transaction --flush-logs db_name > db_name.sql

This only requests a table lock at the very beginning, then flushes the binlog, and adds a CHANGE MASTER statement to the exported file to record the binlog position of the current backup. It is convenient when the dump is to be restored onto a slave.
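With --master-data=1, the dump begins with a statement similar to the following (the file name and position shown here are illustrative; the real values are recorded at the moment of the backup):

```sql
CHANGE MASTER TO MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=98;
```

When the dump is replayed on a slave, this tells it where in the master's binlog to resume replication.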

1.2 Restore

A file backed up with mysqldump is a SQL script that can be imported directly. There are two ways to import the data.

Use the mysql client directly

For example:

/usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql

Use the SOURCE syntax

Strictly speaking, this is not standard SQL syntax but a feature provided by the mysql client. For example:

SOURCE /tmp/db_name.sql

You need to specify the absolute path to the file, and it must be a file that the mysqld running user (such as nobody) has permission to read.

2. mysqlhotcopy

2.1 Backup

mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to back up a database quickly. It is the fastest way to back up a database or a single table, but it can only run on the machine where the database files (including the table definition files, data files, and index files) reside. mysqlhotcopy can back up only MyISAM tables, and it runs only on Unix-like systems and NetWare.

mysqlhotcopy supports copying multiple databases at once, and also supports regular expressions. Here are a few examples:

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name /tmp (copy the database directory db_name to /tmp)
root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name_1 ... db_name_n /tmp
root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name./regex/ /tmp

For more details on how to use it, see the manual, or call the following command to see mysqlhotcopy's help:

perldoc /usr/local/mysql/bin/mysqlhotcopy

Note that to use mysqlhotcopy you must have the SELECT and RELOAD (to execute FLUSH TABLES) privileges, and you must be able to read the datadir/db_name directory.
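A sketch of granting just those privileges, assuming the backup account yejr used in the examples above (MySQL 5.0-era GRANT syntax):

```sql
-- RELOAD is a global privilege, so it must be granted ON *.*
GRANT SELECT, RELOAD ON *.* TO 'yejr'@'localhost' IDENTIFIED BY 'yejr';
```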

2.2 Restore

mysqlhotcopy backs up the whole database directory, so to restore you can copy it directly back into the datadir specified by mysqld (here, /usr/local/mysql/data/). Pay attention to permissions, as in the following example:

root# cp -rf db_name /usr/local/mysql/data/
root# chown -R nobody:nobody /usr/local/mysql/data/db_name (change the owner of the db_name directory to the mysqld running user)

3. SQL syntax backup

3.1 Backup

The BACKUP TABLE syntax actually works much like mysqlhotcopy: it locks the table and then copies the data files. It can do an online backup, but the result is not ideal, so it is not recommended. It copies only the table structure file and the data file, not the index file, so recovery is slow.

Example:

BACKUP TABLE tbl_name TO '/tmp/db_name/'

Note that you must have the FILE privilege to execute this SQL, the directory /tmp/db_name/ must be writable by the mysqld user, and the exported files must not overwrite existing files, to avoid security problems.

SELECT INTO OUTFILE exports the data into a plain text file, and you can customize the field delimiters to make later processing of the data easier.

Example:

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name

Note that you must have the FILE privilege to execute this SQL, the file /tmp/db_name/tbl_name.txt must be writable by the mysqld user, and the exported file must not overwrite an existing file, to avoid security problems.

3.2 Restore

For files backed up with the BACKUP TABLE method, you can run the RESTORE TABLE statement to restore the data table.

Example:

RESTORE TABLE tbl_name FROM '/tmp/db_name/'

The permission requirements are similar to those described above.

For files backed up with the SELECT INTO OUTFILE method, you can run the LOAD DATA INFILE statement to restore the data table.

Example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name

The permission requirements are similar to those above. Before importing the data, the table must already exist. If you are worried about duplicate data, you can add the REPLACE keyword to replace existing records, or the IGNORE keyword to skip them.
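For example, to overwrite any rows whose primary or unique key already exists, using the same table and path as above:

```sql
LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' REPLACE INTO TABLE tbl_name;
```

Substituting IGNORE for REPLACE keeps the existing rows and silently skips the duplicates instead.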

4. Enable binary logging (binlog)

The binlog approach is relatively flexible, labor-saving, and can support incremental backups.

Enabling binlog requires restarting mysqld. First stop mysqld, then open my.cnf and add the following lines:

server-id = 1
log-bin = binlog
log-bin-index = binlog.index

Then start mysqld. The files binlog.000001 and binlog.index will be generated as it runs: the former is where mysqld records all update operations on the data, and the latter is the index of all binlog files and must not be deleted casually. For more information about binlog, please see the manual.

When you need a backup, first execute the following SQL statement so that mysqld stops writing to the current binlog and rotates to a new one; then you can back up the closed binlog files directly. This achieves an incremental backup:

FLUSH LOGS
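Since FLUSH LOGS closes the current binlog and opens a new one, every file listed in binlog.index except the last (still active) entry is safe to archive. A minimal sketch of that step, assuming the paths in the index are readable from the working directory (the function name and arguments are my own, not part of MySQL):

```shell
# Archive every binlog listed in the index file except the last one,
# which is the binlog mysqld is still writing to.
# Usage: archive_binlogs /path/to/binlog.index /backup/dir
archive_binlogs() {
    index_file=$1
    backup_dir=$2
    # sed '$d' drops the final line, i.e. the still-open binlog
    sed '$d' "$index_file" | while read -r f; do
        cp "$f" "$backup_dir/"
    done
}
```

In a real setup you would run FLUSH LOGS (for example via mysqladmin flush-logs) immediately before calling this, and compress or prune the archived files afterwards.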

If you are backing up the slave server in the replication system, you should also back up the master.info and relay-log.info files.

The backed up binlog files can be viewed using mysqlbinlog, a tool provided by MySQL, such as:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001

This tool can display all the SQL statements for a specified database and can also limit the time range, which is quite convenient; please refer to the manual for details.

When restoring, you can do this with statements similar to the following:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name

This pipes the SQL statements output by mysqlbinlog directly into mysql for execution.

If you have a spare machine, you might as well back up this way. Since the performance requirements for a slave are not high, the cost is low: incremental backup is achieved cheaply, and the slave can also share some of the query load, so why not?

5. Back up data files directly

Compared with the previous methods, backing up the data files directly is the most direct, fast, and convenient, but the disadvantage is that incremental backup is basically impossible. To ensure data consistency, execute the following SQL statement before backing up the files:

FLUSH TABLES WITH READ LOCK

That is, flush all data in memory to disk and lock the data tables, to ensure that no new data is written during the copy. Data backed up this way is also very simple to restore: just copy it back into the original database directory.

Note that for InnoDB tables you also need to back up the log files, the ib_logfile* files, because when an InnoDB table is corrupted, these log files may be relied upon for recovery.

6. Backup strategy

For a system with a medium volume of business, the backup strategy can be set as follows: a first full backup, then an incremental backup every day and another full backup once a week, and so on. For important and busy systems, a full backup per day, an incremental backup per hour, or even more frequent backups may be required. To avoid affecting the online business, achieve online backup, and still allow incremental backup, the best way is to use the master-slave replication mechanism (replication) and do the backups on the slave machine.
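Such a schedule can be driven by cron. A sketch of an /etc/crontab fragment, assuming hypothetical scripts full_backup.sh and incr_backup.sh that wrap the mysqldump and FLUSH LOGS steps described earlier:

```
# 03:00 every Sunday: full backup
0 3 * * 0  root  /usr/local/bin/full_backup.sh
# 03:00 Monday through Saturday: incremental backup (flush + copy binlogs)
0 3 * * 1-6  root  /usr/local/bin/incr_backup.sh
```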

7. Data maintenance and disaster recovery

As a DBA, one of the most important tasks is to ensure that data tables can be used safely, stably, and quickly. Therefore, you need to maintain your data tables regularly. The following SQL statements are useful:

CHECK TABLE or REPAIR TABLE: check or repair a MyISAM table
OPTIMIZE TABLE: optimize a MyISAM table
ANALYZE TABLE: analyze a MyISAM table

Of course, all of the above can also be done with the myisamchk tool, which is not covered in detail here.

An InnoDB table can be defragmented, improving index speed, by executing the following statement:

ALTER TABLE tbl_name ENGINE = InnoDB

This is actually a null operation: on the surface it does nothing, but in fact it rebuilds the table and defragments it.

A damaged MyISAM table can usually be handled with the recovery methods mentioned above; if the index is broken, you can use the myisamchk tool to rebuild it. For InnoDB tables it is not so straightforward, because InnoDB keeps all tables in one tablespace. However, InnoDB has a checking mechanism called fuzzy checkpoint: as long as the log files are preserved, errors can be repaired from them. You can add the following parameter to the my.cnf file to have mysqld check the log files automatically at startup:

innodb_force_recovery = 4

Please refer to the manual for information about this parameter.

8. Summary

To make a good data backup, you have to have an appropriate backup strategy, which is a small part of what a DBA does. Everything is difficult at the beginning, so let's start now.
