This article walks through recovering a dropped MySQL InnoDB table without a backup when the table is stored in an independent (file-per-table) tablespace. The procedure is simple, fast, and practical; the steps below demonstrate it end to end using the undrop-for-innodb toolkit.
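All of the recovery commands below are run from inside an undrop-for-innodb working directory (the shell prompts show undrop-for-innodb-master). As a rough sketch of how that toolkit can be obtained and built, assuming the publicly available twindb source and a standard gcc/flex/bison toolchain (these details are assumptions, not shown in the original):

# fetch and build the undrop-for-innodb tools used below (stream_parser and c_parser)
git clone https://github.com/twindb/undrop-for-innodb.git
cd undrop-for-innodb
make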
Create the test database
mysql> create database helpdb default charset utf8;
Query OK, 1 row affected (0.01 sec)
Create the test table
mysql> CREATE TABLE `newaccount` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(50) DEFAULT NULL,
  `userpwd` varchar(50) DEFAULT NULL,
  `createtime` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
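Since this walkthrough is specifically about recovering an independent tablespace, it is worth confirming up front that innodb_file_per_table is enabled and that the table really does get its own .ibd file (an optional check not shown in the original; paths and credentials are those of this test box):

# the table should live in its own .ibd file rather than inside ibdata1
mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
ls -lh /var/lib/mysql/helpdb/newaccount.ibd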
Create a helper function to generate random strings
DELIMITER $$
USE `helpdb`$$
DROP FUNCTION IF EXISTS `rand_string`$$
CREATE DEFINER=`root`@`%` FUNCTION `rand_string`(n INT) RETURNS VARCHAR(255)
BEGIN
  DECLARE chars_str VARCHAR(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
  DECLARE return_str VARCHAR(255) DEFAULT '';
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    SET return_str = CONCAT(return_str, SUBSTRING(chars_str, FLOOR(1 + RAND()*62), 1));
    SET i = i + 1;
  END WHILE;
  RETURN return_str;
END$$
DELIMITER ;
Create a stored procedure
DELIMITER $$
USE `helpdb`$$
DROP PROCEDURE IF EXISTS `p_repeat`$$
CREATE DEFINER=`root`@`%` PROCEDURE `p_repeat`()
BEGIN
  DECLARE v INT;
  SET v = 0;
  REPEAT
    INSERT INTO newaccount VALUES (NULL, rand_string(5), rand_string(15), NOW());
    SET v = v + 1;
  UNTIL v >= 1000
  END REPEAT;
END$$
DELIMITER ;
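Before driving the bulk insert, the helper function can be sanity-checked from the shell (a quick verification step, not part of the original walkthrough; credentials are placeholders):

# expect an 8-character random string back
mysql -uroot -p helpdb -e "SELECT rand_string(8) AS sample;"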
Call the stored procedure to insert test data
mysql> CALL p_repeat();
Query OK, 1 row affected (1.62 sec)

mysql> select count(*) from newaccount;
+----------+
| count(*) |
+----------+
|     1000 |
+----------+
Insert data manually
mysql> insert into newaccount values (null,'netdata','pwdnetdata',now());
Query OK, 1 row affected (0.00 sec)

mysql> select * from newaccount where id=1001;
+------+----------+------------+---------------------+
| id   | username | userpwd    | createtime          |
+------+----------+------------+---------------------+
| 1001 | netdata  | pwdnetdata | 2017-11-12 01:37:52 |
+------+----------+------------+---------------------+
1 row in set (0.00 sec)
Drop the table
mysql> drop table newaccount;
Query OK, 0 rows affected (0.01 sec)
Shut down the MySQL instance
[root@mysqltest-213-2 undrop-for-innodb-master]# /etc/init.d/mysql stop
Shutting down MySQL.. SUCCESS!
[root@mysqltest-213-2 undrop-for-innodb-master]#
Scan the shared tablespace file (ibdata1) with stream_parser
[root@mysqltest-213-2 undrop-for-innodb-master]# ./stream_parser -f /var/lib/mysql/ibdata1
Opening file: /var/lib/mysql/ibdata1
File information:
ID of device containing file: 64768
Inode number: 393238
Protection: 100660 (regular file)
Number of hard links: 1
User ID of owner: 498
Group ID of owner: 500
Device ID (if special file): 0
Blocksize for filesystem I/O: 4096
Number of blocks allocated: 155648
Time of last access: 1510422066 Sun Nov 12 01:41:06 2017
Time of last modification: 1510422066 Sun Nov 12 01:41:06 2017
Time of last status change: 1510422066 Sun Nov 12 01:41:06 2017
Total size, in bytes: 79691776 (76.000 MiB)
Size to process: 79691776 (76.000 MiB)
Worker (0): 21.03% done. 2017-11-12 01:41:49 ETA (in 00:00:07) Processing speed: 7.984 MiB/sec
All workers finished in 1 sec
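stream_parser writes everything it finds into a pages-<source> directory, with one .page file per index id; the InnoDB data dictionary indexes have fixed ids (SYS_TABLES is index 1, SYS_INDEXES is index 3), which is why those particular page files are parsed next. Listing the directory first is an optional check (file names will vary with what was actually found):

# recovered index pages from the ibdata1 scan
ls pages-ibdata1/FIL_PAGE_INDEX/ | head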
Parse the recovered dictionary pages to get the table_id (here, 71)
[root@mysqltest-213-2 undrop-for-innodb-master]# ./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql | grep newaccount
00000007A50E 030000024B0D54 SYS_TABLES "helpdb/newaccount" 71 4 10 80 "" 57
00000007A50E 030000024B0D54 SYS_TABLES "helpdb/newaccount" 71 4 10 80 "" 57
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA LOCAL INFILE '/usr/local/tools/undrop-for-innodb-master/dumps/default/SYS_TABLES' REPLACE INTO TABLE `SYS_TABLES` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'SYS_TABLES\t' (`NAME`, `ID`, `N_COLS`, `TYPE`, `MIX_ID`, `MIX_LEN`, `CLUSTER_NAME`, `SPACE`);
Use the table_id to look up the index_id, which identifies the data page file to recover
[root@mysqltest-213-2 undrop-for-innodb-master]# ./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql | grep 71
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA LOCAL INFILE '/usr/local/tools/undrop-for-innodb-master/dumps/default/SYS_INDEXES' REPLACE INTO TABLE `SYS_INDEXES` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'SYS_INDEXES\t' (`TABLE_ID`, `ID`, `NAME`, `N_FIELDS`, `TYPE`, `SPACE`, `PAGE_NO`);
00000007A50E 030000024B0BFF SYS_INDEXES 71 74 "PRIMARY" 1 3 57 4294967295
00000007A50E 030000024B0BFF SYS_INDEXES 71 74 "PRIMARY" 1 3 57 4294967295
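In the SYS_INDEXES records above, the first numeric column is the table_id (71) and the second is the index_id of the PRIMARY index (74). stream_parser names its page files after the index_id, zero-padded to 16 digits, which is how the data page file used below is located; a small illustration of that naming (not part of the original output):

# index_id 74 maps to the page file produced by stream_parser
printf '%016d.page\n' 74    # prints 0000000000000074.page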
Note: because the table used an independent (file-per-table) tablespace, its data pages lived in helpdb/newaccount.ibd rather than in ibdata1, and that file was deleted by the DROP. The underlying logical volume therefore has to be scanned as well; this is the step that differs from shared-tablespace recovery.
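The device passed to stream_parser is the block device backing the MySQL datadir. If it is not known offhand, it can be identified with something like the following (a generic sketch; the device and volume names are simply what this test box uses):

# find the filesystem/device that holds the datadir, then scan that device
df -h /var/lib/mysql    # e.g. /dev/mapper/vg_mysqltest2132-lv_root on this box
lsblk                   # overall view of disks and logical volumes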
[root@mysqltest-213-2 undrop-for-innodb-master]# ./stream_parser -f /dev/mapper/vg_mysqltest2132-lv_root -t 18G
Opening file: /dev/mapper/vg_mysqltest2132-lv_root
File information:
ID of device containing file: 5
Inode number: 6976
Protection: 60660 (block device)
Number of hard links: 1
User ID of owner: 0
Group ID of owner: 6
Device ID (if special file): 64768
Blocksize for filesystem I/O: 4096
Number of blocks allocated: 0
Time of last access: 1510407590 Sat Nov 11 21:39:50 2017
Time of last modification: 1510407590 Sat Nov 11 21:39:50 2017
Time of last status change: 1510407590 Sat Nov 11 21:39:50 2017
Total size, in bytes: 0 (0.000 exp(+0))
Size to process: 19327352832 (18.000 GiB)
Worker (0): 1.08% done. 2017-11-12 01:46:39 ETA (in 00:03:10) Processing speed: 95.945 MiB/sec
Worker (0): 2.12% done. 2017-11-12 01:48:14 ETA (in 00:04:42) Processing speed: 63.880 MiB/sec
Worker (0): 3.16% done. 2017-11-12 01:54:30 ETA (in 00:10:51) Processing speed: 27.377 MiB/sec
Worker (0): 4.20% done. 2017-11-12 01:46:45 ETA (in 00:03:04) Processing speed: 95.844 MiB/sec
Worker (0): 5.24% done. 2017-11-12 01:46:45 ETA (in 00:03:02) Processing speed: 95.820 MiB/sec
Worker (0): 6.28% done. 2017-11-12 01:48:16 ETA (in 00:04:30) Processing speed: 63.880 MiB/sec
Worker (0): 7.32% done. 2017-11-12 01:48:16 ETA (in 00:04:27) Processing speed: 63.896 MiB/sec
Worker (0): 8.36% done. 2017-11-12 01:46:47 ETA (in 00:02:56) Processing speed: 95.828 MiB/sec
Worker (0): 9.40% done. 2017-11-12 01:46:47 ETA (in 00:02:54) Processing speed: 95.906 MiB/sec
Worker (0): 10.44% done. 2017-11-12 01:46:47 ETA (in 00:02:52) Processing speed: 95.891 MiB/sec
Worker (0): 11.49% done. 2017-11-12 01:45:20 ETA (in 00:01:24) Processing speed: 192.000 MiB/sec
Worker (0): 12.53% done. 2017-11-12 01:46:46 ETA (in 00:02:48) Processing speed: 95.883 MiB/sec
Worker (0): 13.57% done. 2017-11-12 01:45:22 ETA (in 00:01:23) Processing speed: 191.938 MiB/sec
Worker (0): 15.09% done. 2017-11-12 01:44:55 ETA (in 00:00:55) Processing speed: 280.000 MiB/sec
Worker (0): 17.13% done. 2017-11-12 01:44:41 ETA (in 00:00:40) Processing speed: 376.000 MiB/sec
Worker (0): 19.04% done. 2017-11-12 01:44:44 ETA (in 00:00:42) Processing speed: 352.000 MiB/sec
Worker (0): 20.99% done. 2017-11-12 01:44:43 ETA (in 00:00:40) Processing speed: 360.000 MiB/sec
Worker (0): 22.90% done. 2017-11-12 01:44:44 ETA (in 00:00:40) Processing speed: 352.000 MiB/sec
Worker (0): 24.81% done. 2017-11-12 01:44:44 ETA (in 00:00:39) Processing speed: 352.000 MiB/sec
Worker (0): 26.46% done. 2017-11-12 01:44:50 ETA (in 00:00:44) Processing speed: 304.000 MiB/sec
Worker (0): 28.54% done. 2017-11-12 01:44:41 ETA (in 00:00:34) Processing speed: 384.000 MiB/sec
Worker (0): 30.54% done. 2017-11-12 01:44:42 ETA (in 00:00:34) Processing speed: 368.000 MiB/sec
Worker (0): 32.45% done. 2017-11-12 01:44:44 ETA (in 00:00:35) Processing speed: 352.000 MiB/sec
Worker (0): 34.05% done. 2017-11-12 01:44:51 ETA (in 00:00:41) Processing speed: 296.000 MiB/sec
Worker (0): 35.49% done. 2017-11-12 01:44:56 ETA (in 00:00:45) Processing speed: 263.930 MiB/sec
Worker (0): 37.44% done. 2017-11-12 01:44:44 ETA (in 00:00:32) Processing speed: 360.000 MiB/sec
Worker (0): 39.39% done. 2017-11-12 01:44:44 ETA (in 00:00:31) Processing speed: 360.000 MiB/sec
Worker (0): 40.95% done. 2017-11-12 01:44:51 ETA (in 00:00:37) Processing speed: 288.000 MiB/sec
Worker (0): 42.95% done. 2017-11-12 01:44:43 ETA (in 00:00:28) Processing speed: 368.000 MiB/sec
Worker (0): 44.86% done. 2017-11-12 01:44:44 ETA (in 00:00:28) Processing speed: 352.000 MiB/sec
Worker (0): 45.90% done. 2017-11-12 01:45:09 ETA (in 00:00:52) Processing speed: 191.750 MiB/sec
Worker (0): 46.94% done. 2017-11-12 01:45:08 ETA (in 00:00:50) Processing speed: 192.000 MiB/sec
Worker (0): 48.90% done. 2017-11-12 01:44:45 ETA (in 00:00:26) Processing speed: 360.000 MiB/sec
Worker (0): 50.89% done. 2017-11-12 01:44:44 ETA (in 00:00:24) Processing speed: 368.000 MiB/sec
Worker (0): 52.80% done. 2017-11-12 01:44:45 ETA (in 00:00:24) Processing speed: 352.000 MiB/sec
Worker (0): 54.71% done. 2017-11-12 01:44:45 ETA (in 00:00:23) Processing speed: 352.000 MiB/sec
Worker (0): 56.27% done. 2017-11-12 01:44:50 ETA (in 00:00:27) Processing speed: 288.000 MiB/sec
Worker (0): 57.66% done. 2017-11-12 01:44:54 ETA (in 00:00:30) Processing speed: 255.937 MiB/sec
Worker (0): 59.70% done. 2017-11-12 01:44:44 ETA (in 00:00:19) Processing speed: 376.000 MiB/sec
Worker (0): 61.66% done. 2017-11-12 01:44:45 ETA (in 00:00:19) Processing speed: 360.000 MiB/sec
Worker (0): 63.61% done. 2017-11-12 01:44:45 ETA (in 00:00:18) Processing speed: 360.000 MiB/sec
Worker (0): 65.69% done. 2017-11-12 01:44:44 ETA (in 00:00:16) Processing speed: 384.000 MiB/sec
Worker (0): 67.34% done. 2017-11-12 01:44:48 ETA (in 00:00:19) Processing speed: 304.000 MiB/sec
Worker (0): 68.38% done. 2017-11-12 01:45:00 ETA (in 00:00:30) Processing speed: 191.922 MiB/sec
Worker (0): 69.95% done. 2017-11-12 01:44:50 ETA (in 00:00:19) Processing speed: 288.000 MiB/sec
Worker (0): 71.94% done. 2017-11-12 01:44:46 ETA (in 00:00:14) Processing speed: 368.000 MiB/sec
Worker (0): 73.98% done. 2017-11-12 01:44:45 ETA (in 00:00:12) Processing speed: 376.000 MiB/sec
Worker (0): 76.02% done. 2017-11-12 01:44:45 ETA (in 00:00:11) Processing speed: 376.000 MiB/sec
Worker (0): 77.76% done. 2017-11-12 01:44:47 ETA (in 00:00:12) Processing speed: 320.000 MiB/sec
Worker (0): 78.80% done. 2017-11-12 01:44:56 ETA (in 00:00:20) Processing speed: 191.891 MiB/sec
Worker (0): 79.84% done. 2017-11-12 01:45:16 ETA (in 00:00:38) Processing speed: 95.813 MiB/sec
Worker (0): 80.88% done. 2017-11-12 01:44:57 ETA (in 00:00:18) Processing speed: 191.891 MiB/sec
Worker (0): 81.92% done. 2017-11-12 01:44:57 ETA (in 00:00:17) Processing speed: 192.000 MiB/sec
Worker (0): 83.31% done. 2017-11-12 01:44:53 ETA (in 00:00:12) Processing speed: 256.000 MiB/sec
Worker (0): 85.35% done. 2017-11-12 01:44:49 ETA (in 00:00:07) Processing speed: 376.000 MiB/sec
Worker (0): 86.91% done. 2017-11-12 01:44:51 ETA (in 00:00:08) Processing speed: 288.000 MiB/sec
Worker (0): 88.69% done. 2017-11-12 01:44:50 ETA (in 00:00:06) Processing speed: 328.000 MiB/sec
Worker (0): 89.73% done. 2017-11-12 01:44:54 ETA (in 00:00:09) Processing speed: 191.939 MiB/sec
Worker (0): 90.77% done. 2017-11-12 01:44:54 ETA (in 00:00:08) Processing speed: 191.883 MiB/sec
Worker (0): 92.51% done. 2017-11-12 01:44:51 ETA (in 00:00:04) Processing speed: 320.000 MiB/sec
Worker (0): 93.55% done. 2017-11-12 01:44:54 ETA (in 00:00:06) Processing speed: 192.000 MiB/sec
Worker (0): 95.29% done. 2017-11-12 01:44:51 ETA (in 00:00:02) Processing speed: 320.000 MiB/sec
Worker (0): 97.11% done. 2017-11-12 01:44:51 ETA (in 00:00:01) Processing speed: 336.000 MiB/sec
All workers finished in 83 sec
Check that the data page file for index_id 74 was recovered
[root@mysqltest-213-2 undrop-for-innodb-master]# ls pages-vg_mysqltest2132-lv_root/FIL_PAGE_INDEX/0000000000000074.page
pages-vg_mysqltest2132-lv_root/FIL_PAGE_INDEX/0000000000000074.page
Restore the table structure
[root@mysqltest-213-2 undrop-for-innodb-master]# cat newaccount.sql
CREATE TABLE `newaccount` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(50) DEFAULT NULL,
  `userpwd` varchar(50) DEFAULT NULL,
  `createtime` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
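The recovered rows are loaded further down with LOAD DATA ... REPLACE, so the table definition above has to exist again in helpdb before the load. The original does not show that step explicitly; assuming newaccount.sql contains only the CREATE TABLE above and root credentials are available, it can be done like this:

# recreate the empty table inside helpdb so the dump has somewhere to load into
mysql -uroot -p helpdb < newaccount.sql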
Preview the extracted data
[root@mysqltest-213-2 undrop-for-innodb-master]# ./c_parser -6f pages-vg_mysqltest2132-lv_root/FIL_PAGE_INDEX/0000000000000074.page -t newaccount.sql | head -5
-- Page id: 5, Format: COMPACT, Records list: Valid, Expected records: (301 301)
00000007A1B5 AC0000015F0110 newaccount 151 "0GdUH" "DTo9njhkAO9adoc" "2017-11-12 01:36:10"
00000007A1B6 AD000001600110 newaccount 152 "CoT8Q" "DtjZQ4Iaz9UVKOD" "2017-11-12 01:36:10"
00000007A1B7 AE000001610110 newaccount 153 "CZWzT" "z1f1aEyGzEnLzo7" "2017-11-12 01:36:15"
00000007A1B8 AF000001620110 newaccount 154 "eEpWh" "p50DYNW9J41Hkkv" "2017-11-12 01:36
Recover the data
Dump the extracted rows and the matching load script to files
[root@mysqltest-213-2 undrop-for-innodb-master]# ./c_parser -6f pages-vg_mysqltest2132-lv_root/FIL_PAGE_INDEX/0000000000000074.page -t newaccount.sql -o dumps/default/newaccount.dump -l dumps/default/newaccount.sql
[root@mysqltest-213-2 undrop-for-innodb-master]# ls -alh dumps/default/newaccount.*
-rw-r--r--. 1 root root 92K Nov 12 01:48 dumps/default/newaccount.dump
-rw-r--r--. 1 root root 244 Nov 12 01:48 dumps/default/newaccount.sql
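dumps/default/newaccount.sql is a small loader script; like the SYS_TABLES and SYS_INDEXES examples earlier, it should contain a LOAD DATA LOCAL INFILE statement pointing at the .dump file. It is worth a quick look before sourcing it, and LOAD DATA LOCAL also requires local_infile to be allowed on the client (and, depending on the MySQL version, on the server):

# inspect the generated loader, then connect with LOCAL INFILE enabled before sourcing it
cat dumps/default/newaccount.sql
mysql --local-infile=1 -uroot -p helpdb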
Load the recovered rows back into the table
mysql> source dumps/default/newaccount.sql
Query OK, 0 rows affected (0.00 sec)

Query OK, 6002 rows affected (0.06 sec)
Records: 6002  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select count(*) from newaccount;
+----------+
| count(*) |
+----------+
|     1001 |
+----------+
1 row in set (0.00 sec)
The dump contains 6002 records because the raw-device scan typically picks up multiple copies of the same pages; LOAD DATA ... REPLACE collapses them by primary key, leaving the 1000 generated rows plus the one manually inserted row.
mysql> checksum table newaccount;
+-------------------+------------+
| Table             | Checksum   |
+-------------------+------------+
| helpdb.newaccount | 2512700176 |
+-------------------+------------+
1 row in set (0.03 sec)
At this point you should have a deeper understanding of recovering a dropped MySQL table stored in an independent tablespace when no backup exists. The best way to consolidate it is to work through the steps yourself on a test instance.