Many newcomers are not clear about how to efficiently delete a large table in MySQL. To help solve that problem, this article explains the method in detail; anyone who needs it can follow along and hopefully gain something.
How do you efficiently delete a large table in MySQL?
While a DROP TABLE is in progress, all other operations hang. This is because InnoDB holds a global exclusive lock (on top of the table cache) that is not released until the DROP TABLE completes. On the file systems we commonly use (ext3, ext4, NTFS), deleting a large file of tens or even hundreds of gigabytes also takes considerable time.
Below is a method for a fast DROP TABLE: no matter how big the table is, InnoDB returns quickly, and the table still gets deleted completely.
Implementation: clever use of hard links (ln).
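In brief, the whole trick looks like this (a sketch only; the table name and paths are placeholders, and it assumes innodb_file_per_table so the table has its own .ibd file):

# 1. give the .ibd data file a second name on the same filesystem
ln /u01/mysql/data/test/big_tbl.ibd /u01/bak/big_tbl.ibd.hdlk
# 2. DROP TABLE now only removes one directory entry, so InnoDB returns fast
mysql -uroot -e "drop table test.big_tbl;"
# 3. reclaim the disk space later, outside MySQL, at your own pace
rm /u01/bak/big_tbl.ibd.hdlk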
Measured:
root@127.0.0.1 : test 21:38:00> show table status like 'tt'\G
*************************** 1. row ***************************
           Name: tt
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 151789128
 Avg_row_length: 72
    Data_length: 11011096576
Max_data_length: 0
   Index_length: 5206179840
      Data_free: 7340032
 Auto_increment: NULL
    Create_time: 2011-05-18 14:55:08
    Update_time: NULL
     Check_time: NULL
      Collation: utf8_general_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.22 sec)

root@127.0.0.1 : test 21:39:34> drop table tt;
Query OK, 0 rows affected (25.01 sec)
Deleting this 11 GB table takes about 25 seconds (timings will differ on different hardware).
Now let's delete an even bigger table. But before that, we create a hard link to the table's data file:
root@ # ln stock.ibd stock.id.hdlk
root@ # ls -l stock.*
-rw-rw---- 1 mysql mysql        9196 Apr 14 23:03 stock.frm
-rw-r--r-- 2 mysql mysql 19096666112 Apr 15 09:55 stock.ibd
-rw-r--r-- 2 mysql mysql 19096666112 Apr 15 09:55 stock.id.hdlk
You will find that the link count of stock.ibd has become 2.
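You can also read the link count directly with GNU stat (a generic coreutils check, not part of the original procedure):

stat -c '%h' stock.ibd    # prints 2 while the hard link exists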
Next, let's continue to delete the table.
root@127.0.0.1 : test 21:44:37> show table status like 'stock'\G
*************************** 1. row ***************************
           Name: stock
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 49916863
 Avg_row_length: 356
    Data_length: 17799577600
Max_data_length: 0
   Index_length: 1025507328
      Data_free: 4194304
 Auto_increment: NULL
    Create_time: 2011-05-18 14:55:08
    Update_time: NULL
     Check_time: NULL
      Collation: utf8_general_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.23 sec)

root@127.0.0.1 : test 21:39:34> drop table stock;
Query OK, 0 rows affected (0.99 sec)
The drop completes in under one second; DROP TABLE no longer has to hang for so long. But while the table is gone, the data file is still on disk, so the data file still has to be deleted at the end.
root # ls -l
total 19096666112
-rw-r--r-- 1 mysql mysql 19096666112 Apr 15 09:55 stock.id.hdlk
root # rm stock.id.hdlk
DROP TABLE now takes a few extra steps, but if you have a reliable housekeeping job (one that automatically creates hard links for large tables and automatically deletes expired hard-link files), it is not tedious at all.
It greatly shortens the time MySQL hangs, and I'm sure it's worth it.
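If you want to build such a housekeeping job, a rough sketch might look like the following (every path, the 10 GB threshold, and the 7-day expiry are illustrative assumptions, not values from this article):

#!/bin/bash
# Hypothetical housekeeping job: pre-link big .ibd files, purge old links.
# Hard links only work within one filesystem, so BAKDIR must be on the
# same filesystem as DATADIR.
DATADIR=/u01/mysql/data
BAKDIR=/u01/bak

# Create a hard link for every .ibd file larger than 10 GB, unless one exists.
find "$DATADIR" -name '*.ibd' -size +10G | while read -r f; do
  link="$BAKDIR/$(basename "$f").hdlk"
  [ -e "$link" ] || ln "$f" "$link"
done

# Purge hard links untouched for 7 days, shrinking them gradually with
# truncate first (the same trick as the production case below) so that
# freeing the blocks never saturates IO.
find "$BAKDIR" -name '*.hdlk' -mtime +7 | while read -r f; do
  gb=$(( $(stat -c '%s' "$f") / 1024 / 1024 / 1024 ))
  for i in $(seq "$gb" -1 1); do sleep 2; truncate -s "${i}G" "$f"; done
  rm -f "$f"
done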
As for the principle: it exploits how OS hard links work.
When multiple file names point to the same inode, the inode's reference count is N > 1, and deleting any one of those names is fast, because the physical file blocks are not touched; only a directory entry (a pointer) is removed.
When the inode's reference count is 1, deleting the file must free all of the file's data blocks, which is why it is time-consuming.
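You can watch this happen with a small experiment on any Linux box (a generic demonstration; the file name is made up):

dd if=/dev/zero of=demo.dat bs=1M count=1024   # a 1 GB scratch file
ln demo.dat demo.dat.hdlk                      # second name, same inode
stat -c '%h' demo.dat                          # prints 2
rm demo.dat                                    # instant: just removes a directory entry
stat -c '%h' demo.dat.hdlk                     # prints 1
rm demo.dat.hdlk                               # now the data blocks really get freed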
[hidden trouble]
Because business requirements keep changing, a database can accumulate large tables that waste space or hurt performance, and handling them carelessly can cause a sharp drop in MySQL performance and saturate IO. In the past, a DROP TABLE on a production database made the service unavailable, and an rm of a large file ran IO to its limit and forced applications into disaster recovery. The gentler the operation on a large table, the better.
[solution]
1. Use hard links to shorten MySQL DDL time and release the lock faster.
2. Shrink the file in segments with truncate to avoid an IO hang when deleting it.
[production case]
For a MySQL master/slave pair, the master ran short of space under heavy writes, so a discarded large table had to be cleaned up urgently without affecting application response times:
$ ls -l /u01/mysql/data/test/tmp_large.ibd
-rw-r----- 1 mysql dba 289591525376 Mar 30 2012 tmp_large.ibd
The table is 270 GB; the deletion proceeded as follows:
# (trial the procedure on a gray-release instance first)
$ ln tmp_large.ibd /u01/bak/tmp_tbl.ibd      # create the hard link
-rw-r----- 2 mysql dba 289591525376 Mar 30 2012 tmp_large.ibd

mysql> set session sql_log_bin=0;
# keep the DROP out of the binlog, to save performance and to avoid
# replicating it to the slave
mysql> desc test.tmp_large;
mysql> drop table test.tmp_large;
Query OK, 0 rows affected (10.46 sec)

$ mysql -uroot -e "start slave;"

$ cd /u01/bak
$ screen -S weixi_drop_table
$ for i in `seq 270 -1 1`; do sleep 2; truncate -s ${i}G tmp_tbl.ibd; done
$ rm -rf tmp_tbl.ibd
[performance comparison]
The loop was interrupted once with Ctrl-C partway through; a comparison of IO before and after each truncate shows essentially no impact, and the file size was updated successfully each time. Because each step frees only 1 GB of blocks and sleeps 2 seconds before the next one, the IO load stays low throughout.
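If you want to watch the IO impact yourself while the truncate loop runs, a standard check (generic sysstat usage, not part of the original transcript) is:

iostat -x 2    # extended device stats every 2 seconds; watch %util and await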