
How to use the InnoDB data recovery tool for MySQL database


This article shows how to use the InnoDB data recovery tool for the MySQL database. Most people are not very familiar with it, so it is shared here for reference; hopefully you will get a lot out of it.

innodb-tools is an open-source data recovery tool for MySQL's InnoDB engine: it recovers data from lost or corrupted tables by extracting row records directly from the raw data files. For example, when you accidentally execute DROP TABLE, TRUNCATE TABLE, or DROP DATABASE, you can recover the data in the way described below.

Before walking through the recovery with innodb-tools, let's be clear about the following:

1. The tool works only on InnoDB/XtraDB tables; it cannot recover MyISAM tables. (Note: Percona claims to have a separate toolset for recovering MyISAM tables, but I have not tried it.)

2. The tool works on a saved copy of the MySQL data files, not against a running MySQL server.

3. There is no guarantee that data can always be recovered. For example, data that has been overwritten cannot be recovered; in such cases recovery would require a filesystem-level or physical approach, which is beyond the scope of this tool.

4. The best window for recovery is right after the loss: back up the MySQL data files as soon as you discover that data is missing.

5. Using the tool involves some manual work; it is not fully automated.

6. The recovery process depends on what you know about the lost data, and you may have to choose between different versions of a record along the way. The more you know about your data, the better your chances of recovering it.

Next, let's walk through an example to show how to recover data with this tool.

1. Prerequisites

First of all, it is important to understand that innodb-tools does not recover data by connecting to a live database; it works offline, on a copy of the data files. Note: do not copy InnoDB files while MySQL is running. That is unsafe and will compromise the recovery.
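For example, a minimal offline backup might look like the sketch below (the data directory /var/lib/mysql and the service name mysql are assumptions; adjust them to your system):

$ sudo systemctl stop mysql    # stop MySQL so the InnoDB files are no longer being written

$ sudo cp -a /var/lib/mysql /backup/mysql-$(date +%F)    # keep an untouched copy to recover from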

To complete the recovery, you must know the structure (column names and data types) of the table being recovered. The easiest way to get it is SHOW CREATE TABLE; several alternatives are introduced later. Consequently, a spare MySQL server acting as a backup can help with the recovery even if its copy of the data is old or the table holds no rows, but such a server is not required.
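For instance, if such a server is available, the table structure can be captured like this (host and credentials are placeholders):

$ mysql --host=localhost --user=root -p sakila -e 'SHOW CREATE TABLE customer\G'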

2. A simple example

In this example, the customer table of the sakila sample database is accidentally truncated; the rest of the article recovers its rows:

mysql> TRUNCATE TABLE customer;

3. Build the tools

Building the innodb-tools tools requires a C compiler, make, and related build tools.

1. Download and unpack the innodb-tools tarball:

wget https://launchpad.net/percona-data-recovery-tool-for-innodb/trunk/release-0.5/+download/percona-data-recovery-tool-for-innodb-0.5.tar.gz

tar -zxvf percona-data-recovery-tool-for-innodb-0.5.tar.gz

2. Enter the mysql-source directory under the unpacked root directory and run the configure script (note: do not run make here):

cd percona-data-recovery-tool-for-innodb-0.5/mysql-source

./configure

3. After configuration completes, go back to the unpacked root directory and run make to build the page_parser and constraints_parser tools:

cd ..

make
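If the build succeeded, the two binaries should now exist in the top-level directory:

$ ls -l page_parser constraints_parser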

page_parser parses a data file into pages and rows according to InnoDB's on-disk page format. constraints_parser is not used yet; it must be recompiled later, once the table definition has been added.

If you run into problems during compilation, consult the tool's documentation. Nothing went wrong in the build for this article, so possible issues are not enumerated one by one.

4. Extract the required pages

InnoDB pages are 16 KB by default, and each page belongs to one particular index of one particular table. page_parser reads the data file and copies each page into a separate file according to the index ID in the page header.

If your MySQL server is configured with innodb_file_per_table=1, the system has already done most of this for you: all the required pages are in the table's .ibd file, and you usually do not need to split it further. However, if the .ibd file may contain multiple indexes, it is still worth splitting the pages out. If the server is not configured with innodb_file_per_table, the data is kept in a shared tablespace (usually a file named ibdata1, as is the case in this article), and the file must be split into individual pages.
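If the server (or a backup replica of it) is still reachable, a quick way to check which mode it uses is:

$ mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table'"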

4.1 Split the file into pages

Run the page_parser tool to do the split:

If MySQL is older than 5.0 and the InnoDB pages are in the REDUNDANT format, run:

./page_parser -4 -f /path/to/ibdata1

If MySQL is 5.0 or later and the InnoDB pages are in the COMPACT format, run:

./page_parser -5 -f /path/to/ibdata1
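If innodb_file_per_table is enabled, you can point page_parser at the table's own .ibd file instead of the shared tablespace; the path below is only an assumption for illustration:

$ ./page_parser -5 -f /var/lib/mysql/sakila/customer.ibd    # per-table file; assumed default datadir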

After running, page_parser creates a directory named pages-<TIMESTAMP>, where TIMESTAMP is the UNIX timestamp of the run. Under that directory it creates a subdirectory for each index ID it finds, named after the index ID of the pages. For example:

pages-1330842944/FIL_PAGE_INDEX/0-1/1-00000008.page

pages-1330842944/FIL_PAGE_INDEX/0-16/6-00000008.page
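When you are unsure which index ID holds the table data, a useful heuristic is that the PRIMARY index usually contains the most pages. A rough sketch for counting pages per index directory (directory layout as in the example above):

$ for d in pages-1330842944/FIL_PAGE_INDEX/*/ ; do echo "$d: $(ls "$d" | wc -l) pages" ; done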

4.2 Select the required index ID

In general, we recover from the table's primary key (the PRIMARY index), which contains all the rows. Here is one way to find its index ID:

If the MySQL server is still running and the table has not been dropped, you can start the InnoDB Tablespace Monitor, which writes a listing of all tables and indexes, including their index IDs, to the MySQL server's error log. To trigger it, create an innodb_table_monitor table; InnoDB then dumps how the storage engine's tables and their indexes are stored:

CREATE TABLE innodb_table_monitor (id int) ENGINE=InnoDB;

If innodb_table_monitor already exists, drop it and recreate it. Once the listing has been written to the MySQL error log, you can drop the table again to stop further monitor output. An example of the output follows:

TABLE: name sakila/customer, id 0 142, columns 13, indexes 4, appr.rows 0
COLUMNS: customer_id: DATA_INT len 2 prec 0; store_id: DATA_INT len 1 prec 0; first_name: type 12 len 135 prec 0; last_name: type 12 len 135 prec 0; email: type 12 len 150 prec 0; address_id: DATA_INT len 2 prec 0; active: DATA_INT len 1 prec 0; create_date: DATA_INT len 8 prec 0; last_update: DATA_INT len 4 prec 0; DB_ROW_ID: DATA_SYS prtype 256 len 6 prec 0; DB_TRX_ID: DATA_SYS prtype 257 len 6 prec 0; DB_ROLL_PTR: DATA_SYS prtype 258 len 7 prec 0
INDEX: name PRIMARY, id 0 286, fields 1/11, type 3
  root page 50, appr.key vals 0, leaf pages 1, size pages 1
  FIELDS: customer_id DB_TRX_ID DB_ROLL_PTR store_id first_name last_name email address_id active create_date last_update
INDEX: name idx_fk_store_id, id 0 287, fields 1/2, type 0
  root page 56, appr.key vals 0, leaf pages 1, size pages 1
  FIELDS: store_id customer_id
INDEX: name idx_fk_address_id, id 0 288, fields 1/2, type 0
  root page 63, appr.key vals 0, leaf pages 1, size pages 1
  FIELDS: address_id customer_id
INDEX: name idx_last_name, id 0 289, fields 1/2, type 0
  root page 1493, appr.key vals 0, leaf pages 1, size pages 1
  FIELDS: last_name customer_id

Here, we are recovering the customer table of the sakila database; its primary-key entry can be read from the output:

INDEX: name PRIMARY, id 0 286, fields 1/11, type 3

The index ID is 0 286, so the InnoDB pages we need to recover are in the 0-286 subdirectory.

Note: the method above for obtaining the table's index ID is the one described in the original documentation. In practice, this article takes a simpler route: it simply recovers from all the InnoDB pages that page_parser generated. Practice shows that this also works :)
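If the monitor output is buried in a large error log, a grep such as the following pulls out the relevant block (the log path is an assumption; check your server's log_error setting):

$ grep -A 30 'TABLE: name sakila/customer' /var/log/mysql/error.log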

5. Generate the table definition

In step 4 we located the data we need; the next step is to obtain the table structure, create a table definition, compile it into constraints_parser, and then use that tool to extract the rows from the InnoDB pages.

The table definition describes the columns, column order, and data types of the table. If the MySQL server is still running and the table has not been dropped, SHOW CREATE TABLE is a simple, practical way to collect this information. The structure information is then used to create a table definition expressed as a C structure, which is compiled into the constraints_parser tool. The C structure definition is stored in include/table_defs.h.

The easiest way to create the definition is the create_defs.pl Perl script: it connects to the MySQL server, reads the SHOW CREATE TABLE output, and prints the generated table definition to standard output. Here is an example, with the result redirected directly into include/table_defs.h:

$ ./create_defs.pl --host=localhost --user=root --password=123456 --db=sakila --table=customer > include/table_defs.h

The following is the table structure in the example:

CREATE TABLE `customer` (
  `customer_id` smallint(5) UNSIGNED NOT NULL AUTO_INCREMENT,
  `store_id` tinyint(3) UNSIGNED NOT NULL,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `email` varchar(50) DEFAULT NULL,
  `address_id` smallint(5) UNSIGNED NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `create_date` datetime NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`customer_id`),
  KEY `idx_fk_store_id` (`store_id`),
  KEY `idx_fk_address_id` (`address_id`),
  KEY `idx_last_name` (`last_name`),
  CONSTRAINT `fk_customer_address` FOREIGN KEY (`address_id`) REFERENCES `address` (`address_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_customer_store` FOREIGN KEY (`store_id`) REFERENCES `store` (`store_id`) ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8

The following is the generated table definition:

#ifndef table_defs_h
#define table_defs_h

// Table definitions
table_def_t table_definitions[] = {
	{
		name: "customer",
		{
			{ /* smallint(5) unsigned */
				name: "customer_id",
				type: FT_UINT,
				fixed_length: 2,
				has_limits: TRUE,
				limits: {
					can_be_null: FALSE,
					uint_min_val: 0,
					uint_max_val: 65535
				},
				can_be_null: FALSE
			},
			{ /* InnoDB's internally used field */
				name: "DB_TRX_ID",
				type: FT_INTERNAL,
				fixed_length: 6,
				can_be_null: FALSE
			},
			{ /* InnoDB's internally used field */
				name: "DB_ROLL_PTR",
				type: FT_INTERNAL,
				fixed_length: 7,
				can_be_null: FALSE
			},
			{ /* tinyint(3) unsigned */
				name: "store_id",
				type: FT_UINT,
				fixed_length: 1,
				has_limits: TRUE,
				limits: {
					can_be_null: FALSE,
					uint_min_val: 0,
					uint_max_val: 255
				},
				can_be_null: FALSE
			},
			{ /* varchar(45) */
				name: "first_name",
				type: FT_CHAR,
				min_length: 0,
				max_length: 45,
				has_limits: TRUE,
				limits: {
					can_be_null: FALSE,
					char_min_len: 0,
					char_max_len: 45,
					char_ascii_only: TRUE
				},
				can_be_null: FALSE
			},
			{ /* varchar(45) */
				name: "last_name",
				type: FT_CHAR,
				min_length: 0,
				max_length: 45,
				has_limits: TRUE,
				limits: {
					can_be_null: FALSE,
					char_min_len: 0,
					char_max_len: 45,
					char_ascii_only: TRUE
				},
				can_be_null: FALSE
			},
			{ /* varchar(50) */
				name: "email",
				type: FT_CHAR,
				min_length: 0,
				max_length: 50,
				has_limits: TRUE,
				limits: {
					can_be_null: TRUE,
					char_min_len: 0,
					char_max_len: 50,
					char_ascii_only: TRUE
				},
				can_be_null: TRUE
			},
			{ /* smallint(5) unsigned */
				name: "address_id",
				type: FT_UINT,
				fixed_length: 2,
				has_limits: TRUE,
				limits: {
					can_be_null: FALSE,
					uint_min_val: 0,
					uint_max_val: 65535
				},
				can_be_null: FALSE
			},
			{ /* tinyint(1) */
				name: "active",
				type: FT_INT,
				fixed_length: 1,
				can_be_null: FALSE
			},
			{ /* datetime */
				name: "create_date",
				type: FT_DATETIME,
				fixed_length: 8,
				can_be_null: FALSE
			},
			{ /* timestamp */
				name: "last_update",
				type: FT_UINT,
				fixed_length: 4,
				can_be_null: FALSE
			},
			{ type: FT_NONE }
		}
	},
};

#endif

If necessary, you can edit include/table_defs.h by hand, then recompile constraints_parser against it:

$ make
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -c tables_dict.c -o lib/tables_dict.o
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -o constraints_parser constraints_parser.c lib/tables_dict.o lib/print_data.o lib/check_data.o lib/libut.a lib/libmystrings.a
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -o page_parser page_parser.c lib/tables_dict.o lib/libut.a

6. Extract row records from the pages

6.1 Merge the pages into one file

As mentioned earlier, we need to recover index ID 0 286; the pages containing the data sit under the pages-1246363747/0-286/ directory:

Total 120
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1254-00001254.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1255-00001255.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1256-00001256.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1257-00001257.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 50-00000050.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 74-00000050.page

Run the following command to merge the pages:

$ find pages-1246363747/0-286/ -type f -name '*.page' | sort -n | xargs cat > pages-1246363747/0-286/customer_pages_concatenated

The resulting file, pages-1246363747/0-286/customer_pages_concatenated, will serve as the input to the constraints_parser tool.

6.2 Run the constraints_parser tool

Now for the core step of the recovery: running constraints_parser to extract the row records. Like page_parser, it takes the InnoDB page format (COMPACT/REDUNDANT) via the -5 or -4 argument, and -f to specify the input file.

Returning to the example, we can run constraints_parser like this (the command below recovers a single page; you can just as well run it on the file from step 6.1 that merges all the pages):

$ ./constraints_parser -5 -f pages-1246363747/0-286/50-00000050.page

Each line of the output contains the table name followed by the columns of one row. Note: the output may contain both valid and invalid row records. The corresponding section of the official documentation shows how to adjust the table definition to extract as much valid data as possible while filtering out garbage rows; that is not covered in detail here.

customer 0 120 "" 32770 0 "0000-00-00 00:12:80" 0
customer 0 0 "0" 0 "9120-22-48 29:44:00" 2
customer 61953 0 "" 2816 0 "7952-32-67 11:43:49" 0
customer 0 0 "" 0 0 "0000-00-00 00:00:00" 0
... snip ...
customer 0 0 "" 0 0 "0000-00-00 00:00:00" 16777728
customer 28262 "NULL" 25965 "4603-91-96 76:21:28" 5111809
customer 0 82 "22867" 77 "2775-94-58 03:19:18" 1397573972
customer 2 1 "PATRICIA" "JOHNSON" "PATRICIA.JOHNSON@sakilacustomer.org" 6 1 "2006-02-14 22:04:36" 1140008240
customer 3 1 "LINDA" "WILLIAMS" "LINDA.WILLIAMS@sakilacustomer.org" 7 1 "2006-02-14 22:04:36" 1140008240
customer 4 2 "BARBARA" "JONES" "BARBARA.JONES@sakilacustomer.org" 8 1 "2006-02-14 22:04:36" 1140008240
customer 5 1 "ELIZABETH" "BROWN" "ELIZABETH.BROWN@sakilacustomer.org" 9 1 "2006-02-14 22:04:36" 1140008240
customer 6 2 "JENNIFER" "DAVIS" "JENNIFER.DAVIS@sakilacustomer.org" 10 1 "2006-02-14 22:04:36" 1140008240
customer 7 1 "MARIA" "MILLER" "MARIA.MILLER@sakilacustomer.org" 11 1 "2006-02-14 22:04:36" 1140008240
customer 8 2 "SUSAN" "WILSON" "SUSAN.WILSON@sakilacustomer.org" 12 1 "2006-02-14 22:04:36" 1140008240
customer 9 2 "MARGARET" "MOORE" "MARGARET.MOORE@sakilacustomer.org" 13 1 "2006-02-14 22:04:36" 1140008240
... snip ...
customer 0 0 "" 0 0 "0000-00-00 00:00:00" 0
customer 0 "" 0 "7679-35-98 86:44:53" 720578985

7. Import data into the database

Finally, to complete the recovery, import the constraints_parser output from step 6 into the database with the LOAD DATA INFILE command:

LOAD DATA INFILE '/tmp/customer_data.tsv'
REPLACE INTO TABLE customer
FIELDS TERMINATED BY '\t'
OPTIONALLY ENCLOSED BY '"'
LINES STARTING BY 'customer\t'
(customer_id, store_id, first_name, last_name, email,
 address_id, active, create_date, @last_update)
SET last_update = FROM_UNIXTIME(@last_update);
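After the import, a quick sanity check (credentials are placeholders, as before) confirms that the rows are back:

$ mysql -uroot -p sakila -e 'SELECT COUNT(*) FROM customer'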

That covers everything in "How to use the InnoDB data recovery tool for MySQL database". Thanks for reading! Hopefully the walkthrough proves useful the next time you need to recover InnoDB data.
