This article focuses on how to recover data from MySQL/InnoDB data files. The method described here is simple, fast, and practical, so interested readers are encouraged to follow along and try it.
1. A brief introduction to the principle of recovery
Since the tool's documentation already describes the principle in detail, only a brief summary is given here. All InnoDB data is organized in indexes, and all data is stored in 16KB pages. Recovery proceeds in steps: first, every data file is split into individual 16KB pages; then, starting from each page's marked record start point, the tool tries to match records against the given table definition and outputs each record whose fields fit the sizes and constraints that definition specifies.
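As a concrete illustration, here is a minimal sketch of that two-step flow using the Percona Data Recovery Tool for InnoDB referenced by the scripts below; the data-file path, the index-id directory 0-123, and the page file name are illustrative assumptions:

    # Step 1: split the raw data file into individual 16KB pages;
    # -5 marks the file as coming from MySQL 5.x (COMPACT row format)
    ./page_parser -5 -f /u01/mysql/data/ibdata1
    # pages are written under ./pages-<timestamp>/FIL_PAGE_INDEX/<index_id>/

    # Step 2: match the records in one page against the table definition
    # compiled into constraints_parser (include/table_defs.h)
    ./constraints_parser -5 -f pages-1372436970/FIL_PAGE_INDEX/0-123/0-00000123.page > recovered.tsv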
2. Parallel recovery
Data recovery is usually a race against time. PDRTI itself is a basic, serial tool, and recovering a large instance with it as-is can take a very long time. A simple shell script can run many constraints_parser jobs in parallel, which shortens recovery dramatically: in practice, on a reasonably capable machine, parallel recovery can cut the time to about 1/20 of serial. In other words, a job that used to take 40 hours may finish in roughly 2 hours.
Below are two parallel recovery scripts for reference: a driver that schedules the jobs, and parser_jobs.sh, which each job runs on one page directory.
Script 1, the driver (caps concurrency at 48 parser jobs and prints progress):

    #!/bin/bash
    ws=/u01/recovery
    pagedir=/u01/recovery/pages-1372436970/FIL_PAGE_INDEX
    logdir=/u01/recovery/log
    cd $ws
    count=0
    page_count=353894          # total number of page files, counted beforehand
    page_done=0
    startdate=`date +%s`
    for d1 in `ls $pagedir`
    do
        count=$(($count+1))
        echo "in page dir $d1" > $logdir/$count.log
        thedate=`date +%s`
        echo "$page_done / $page_count at $thedate from $startdate"
        total=`ls -l $pagedir/$d1/ | wc -l`
        page_done=$(($page_done+$total))
        # throttle: wait until fewer than 48 parser_jobs are running
        threads=`ps axu | grep parser_jobs | grep -v grep | wc -l`
        echo $threads
        while [ $threads -gt 48 ]
        do
            sleep 1
            threads=`ps axu | grep parser_jobs | grep -v grep | wc -l`
        done
        $ws/parser_jobs.sh $pagedir/$d1 > $ws/job.log 2>&1 &
    done

Script 2, parser_jobs.sh (runs constraints_parser on every page file in one directory):

    #!/bin/bash
    logdir=/u01/recovery/log
    rectool=/u01/recovery/percona-data-recovery-tool-for-innodb-0.5/constraints_parser
    logfile="$logdir/`basename $1`.log"
    echo "$1" > $logfile
    if [ -d $1 ]
    then
        for d2 in `ls $1`
        do
            $rectool -5 -f $1/$d2 >> $logfile 2> /dev/null
        done
    fi
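A hedged usage sketch, assuming the driver above is saved as run_recovery.sh alongside parser_jobs.sh (both file names are illustrative) and both are executable:

    chmod +x run_recovery.sh parser_jobs.sh
    # run the driver in the background; it prints "pages done / total" as it goes
    nohup ./run_recovery.sh > recover_progress.log 2>&1 &
    tail -f recover_progress.log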
3. Restore from index
If you know the table's index structure, and the data (clustered index) pages are damaged but the secondary index pages are intact, you can extract additional field values this way: a secondary index stores the indexed columns plus the primary key, so those fields can still be recovered from it even when the row data itself is lost.
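A minimal sketch, assuming the secondary index was identified by its index-id directory in the page_parser output (0-124 here is illustrative) and that the index's columns have been described as a table in include/table_defs.h before rebuilding constraints_parser:

    # walk every page of the intact secondary index and dump matching entries
    idxdir=pages-1372436970/FIL_PAGE_INDEX/0-124
    for p in `ls $idxdir`
    do
        ./constraints_parser -5 -f $idxdir/$p >> index_recovered.tsv 2> /dev/null
    done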
4. Handling the problem in an emergency
One technical postmortem made the point that "stopping MySQL in a hurry, to keep the hard disk from being written any further" is actually a mistake. Normally, as long as the process is not terminated, the files it holds open will not be reclaimed or overwritten, and those still-open files can be copied out through the /proc filesystem (see: Recovering files from /proc). If both the data files and the transaction log files can be cp'd out intact, then with luck MySQL can be started on the copies and will itself recover to a consistent state from the transaction log.
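A minimal sketch of pulling a still-open file back out through /proc; the descriptor number 5 and the destination path are illustrative:

    # find the running mysqld process
    pid=`pidof mysqld`
    # files that were deleted but are still held open show up as "(deleted)" links
    ls -l /proc/$pid/fd | grep deleted
    # copy one out by its descriptor number
    cp /proc/$pid/fd/5 /u01/recovery/ibdata1.rescued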
At this point, you should have a deeper understanding of how to recover data from MySQL/InnoDB data files; the best way to consolidate it is to try the steps above in practice.