Shulou — SLTechnology News&Howtos > Database (2025-02-25 Update)
This article walks through how to partition a MySQL table that already contains a large amount of data, using a master-slave replication pair as the working example.
Environment:
Database version: 5.6
System environment: CentOS 6.8
Replication topology: instance A (master) replicates to instance C (slave).
Requirement:
The InnoDB table sbtest1 on instance C, which holds more than 25 million rows, needs to be partitioned; the same table on instance A must not be changed in any way.
Questions to think about:
If the table on instance C is partitioned, will replication of that table from instance A to instance C still work? Could partitioning cause data inconsistency or failed inserts, or make inserts slower afterwards?
Actual operation: all operations are performed on instance C.
1. Stop the slave IO thread and wait for instance C to finish replaying its relay log.
The relay log has been fully replayed once Master_Log_File == Relay_Master_Log_File and Read_Master_Log_Pos == Exec_Master_Log_Pos in the output of SHOW SLAVE STATUS.
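On instance C, that looks like the following (standard MySQL 5.6 replication statements; this requires a live replica, so it is shown as a sketch):

```sql
-- Stop fetching new events from the master; the SQL thread keeps
-- replaying whatever is already in the relay log.
STOP SLAVE IO_THREAD;

-- Repeat until the two file/position pairs match, which means the
-- relay log has been fully replayed:
SHOW SLAVE STATUS\G
--   Master_Log_File     == Relay_Master_Log_File
--   Read_Master_Log_Pos == Exec_Master_Log_Pos
```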
2. Logically back up the data in the table:

mysqldump -S /var/lib/mysql/mysql.sock -uroot -p --single-transaction --master-data=2 -t --skip-add-drop-table sbtest sbtest1 > sbtest1.sql

Parameter explanation:
-t: do not write CREATE TABLE statements into the dump
--skip-add-drop-table: do not write DROP TABLE statements into the dump
With these two options, no DROP TABLE or CREATE TABLE statements are written into the backup file, so importing it will not touch the table definition.
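As a quick sanity check (assuming the dump file is named sbtest1.sql as above), you can confirm the dump really contains only data and no DDL:

```shell
# Count INSERT statements in the dump (should be non-zero)
grep -c '^INSERT' sbtest1.sql

# Count CREATE/DROP TABLE statements
# (should be 0, given -t and --skip-add-drop-table)
grep -c -E 'CREATE TABLE|DROP TABLE' sbtest1.sql
```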
3. Rename the old table:

rename table sbtest1 to sbtest2;

This is done so that the backup SQL file does not need to be edited before importing, and so that the old table survives as a fallback in case partitioning fails for any reason.
4. Create an empty table and partition it:
CREATE TABLE `sbtest1` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `k` bigint(20) NOT NULL,
  `c` varchar(20) NOT NULL,
  `pad` varchar(50) COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
PARTITION BY KEY (id) PARTITIONS 64;
The new table's structure must be identical to the old table's; the only difference is that it is partitioned from the start.
5. Import the logical backup:

mysql -uroot -p sbtest < sbtest1.sql
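A quick way to verify the import, using the schema and table names from the steps above (note that TABLE_ROWS is only an estimate for InnoDB):

```sql
-- Row count should match the old table (now sbtest2)
SELECT COUNT(*) FROM sbtest.sbtest1;

-- Rows should be spread across the 64 KEY partitions
SELECT PARTITION_NAME, TABLE_ROWS
  FROM information_schema.PARTITIONS
 WHERE TABLE_SCHEMA = 'sbtest' AND TABLE_NAME = 'sbtest1';
```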
6. Restart replication after the import and observe it for a while:

START SLAVE IO_THREAD;
7. Check data consistency with pt-table-checksum (optional).
See: http://seanlook.com/2015/12/29/mysql_replica_pt-table-checksum/
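A minimal sketch of such a check, run against the master; the host, user, and password below are placeholders, not values from this setup, and this requires a live replication pair:

```shell
# Checksums sbtest.sbtest1 on the master; the checksum queries flow
# through replication so each slave's copy can be compared.
pt-table-checksum h=instance_a_host,u=checksum_user,p=secret \
  --databases=sbtest --tables=sbtest1 \
  --no-check-binlog-format
```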
8. Recommendation:
Backing up 25 million rows takes a long time, so run the backup inside screen; the dump then keeps running even if your remote session disconnects.
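For example (the session name is arbitrary; the mysqldump command is the one from step 2):

```shell
# Start a named screen session, then run the dump inside it
screen -S sbtest_dump

# (inside the session) run the mysqldump command from step 2, then
# detach with Ctrl-a d; the dump keeps running in the background.
# Reattach later to check progress:
screen -r sbtest_dump
```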