In this issue, the editor will show you how to maintain data consistency in a MySQL backup. The article analyzes and describes the topic from a professional point of view, and I hope you get something out of it after reading.
1. Copy all the files in the data directory directly to the new machine. The advantage is that it is simple and fast, just a file copy; the obvious disadvantage is that the database is completely unavailable during the backup, and because the files are copied as-is, the destination inherits the fragmentation in the source data files and cannot reclaim space that the InnoDB tablespace has already grown into.
2. Use xtrabackup for a hot backup. The advantage is that the service can keep running during the backup; the disadvantage is similar to the first method: the destination cannot release the space wasted by fragmentation in the source data files, and InnoDB tablespace space that has already been allocated cannot be reclaimed.
3. Rebuild the data logically using the official mysqldump. The advantage is that the service can keep running throughout the backup, and, most importantly, the wasted fragmented space is eliminated when the dump is reloaded; a rough command-line sketch of all three approaches follows this list.
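As a rough illustration only, the three approaches correspond to commands along the following lines; the host name, paths, user, and database name are assumptions for the sketch, not values from a real environment.
Method 1, cold copy of the data directory (the instance must be stopped first):
# systemctl stop mysqld && rsync -a /var/lib/mysql/ newhost:/var/lib/mysql/
Method 2, hot backup with Percona XtraBackup:
# xtrabackup --backup --user=root --target-dir=/backup/full
Method 3, logical dump with mysqldump:
# mysqldump -u root -p --single-transaction --databases test > test.sql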
I believe we are all familiar with the methods above, so they will not be introduced in detail here. The following mainly looks at how mysqldump maintains data consistency during a backup.
mysqldump has different internal implementations for different types of storage engines. It mainly distinguishes two categories: storage engines that support transactions (such as InnoDB) and storage engines that do not support transactions (such as MyISAM). Let's look at each in turn:
1. For engines that support transactions, such as InnoDB, add --single-transaction to the parameters when backing up to ensure data consistency.
--single-transaction actually performs the following two operations:
①, at the start of the backup, set the session's transaction isolation level to repeatable read
②, then start a transaction (execute begin), and end the transaction when the backup finishes (execute commit)
With these two operations, the data the session reads during the backup is the data as it existed at the moment the backup started (a single consistent point in time). In other words, with this parameter, the InnoDB data to be backed up is fixed at the start of the backup; transactions committed while the backup is running are not seen by the dump session and are not backed up.
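If you want to verify this yourself, one hedged way is to enable the general query log for a moment and watch which statements the mysqldump session issues; the log path and credentials below are assumptions (by default the general log is written to host_name.log in the data directory):
# mysql -u root -p -e "SET GLOBAL general_log = ON"
# mysqldump -u root -p --single-transaction test > test.sql
# grep -iE "REPEATABLE READ|START TRANSACTION|COMMIT" /var/lib/mysql/`hostname`.log
# mysql -u root -p -e "SET GLOBAL general_log = OFF"
You should see the session set its isolation level to REPEATABLE READ, open a transaction with a consistent snapshot, and issue a final COMMIT when the dump ends.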
2. For engines that do not support transactions, such as MyISAM, data consistency can only be ensured by locking tables. There are three situations (a command sketch follows this list):
①, exporting the whole instance (all databases): add the --lock-all-tables parameter, which takes a global read lock at the beginning of the backup (executing flush tables with read lock). Other sessions can read but cannot update the data, the data does not change during the backup, and the exported data is therefore consistent.
②, exporting a single database: add the --lock-tables parameter, which locks all tables of that database at the beginning of the backup. Other sessions can read but cannot update any table in the database, so the data of that database stays consistent.
③, exporting a single table: add the --lock-tables parameter, which locks that table at the beginning of the backup; other tables are not affected, and the data of that table stays consistent.
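As a minimal sketch (the database name test and table name t1 are assumptions), the three cases correspond to invocations like the following:
Whole instance (global read lock):
# mysqldump -u root -p --lock-all-tables --all-databases > all.sql
Single database (lock all tables in that database):
# mysqldump -u root -p --lock-tables --databases test > test.sql
Single table (lock only that table):
# mysqldump -u root -p --lock-tables test t1 > test_t1.sql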
The parameters above only keep the exported data consistent within the dump itself; the business does not stop during the backup, and new data may be written the whole time. To know exactly up to which pointer (binary log position) the backup covers, we can also add the --master-data parameter: the backed-up SQL file will then record the binary log file and position at the time of the backup, and updates made after that position can be recovered from the binary log.
# mysqldump -u root -p --single-transaction --master-data --flush-logs --databases test > test.sql
--flush-logs rotates the binary log when the backup starts, so that all changes written after the backup begins go to the next binary log file.
You can see that the binary log information at the time of the backup is recorded in the first few lines of the backup file test.sql.
# vim test.sql
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=436263492;
-- Current Database: `test`
After a full restore, changes made after this position can be recovered through the subsequent binary log:
# mysqlbinlog --start-position=436263492 mysql-bin.000004 > 00004.sql
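To complete the picture, here is a hedged sketch of the restore side, reusing the file name and position from the example above: load the full dump first, then replay the binary log from the recorded position.
# mysql -u root -p < test.sql
# mysqlbinlog --start-position=436263492 mysql-bin.000004 | mysql -u root -p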
Finally, let's explain why tables must be locked during a mysqldump backup to keep the data consistent (a command comparison follows the scenario):
Description:
1. At time point T1, start a mysqldump backup without locking tables.
2. Exporting table a takes 5 minutes. Because the tables are not locked, 10 rows are inserted into table b during these 5 minutes.
3. At time point T2, the export of table a is complete and the export of table b begins.
4. Exporting table b takes 10 minutes, and there are no data changes in table a or table b while table b is being exported.
5. At time point T3, the export of table b is complete and the whole backup is finished.
6. A slave is then built from this backup and starts to apply the binlog from the position recorded at T1. The dump of table b already contains the 10 rows inserted between T1 and T2, yet the binlog replays those inserts again, so table b on the slave ends up with 10 more rows than on the master and the data is inconsistent.
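As a hedged illustration of the fix (the database name is an assumption), the scenario above is the difference between a dump taken with no consistency option at all and one taken with a consistent snapshot plus the recorded binlog position:
No consistency guarantee, each table is read at a different moment:
# mysqldump -u root -p --skip-lock-tables --databases test > test_inconsistent.sql
Consistent snapshot for InnoDB plus the binlog position to resume replication from:
# mysqldump -u root -p --single-transaction --master-data=2 --databases test > test_consistent.sql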
This is how to maintain data consistency in a MySQL backup. If you happen to have similar doubts, you can refer to the analysis above to understand it.