
How to improve the copy efficiency of large data tables in MySQL

2025-04-01 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report --

This article explains in detail how to improve the copy efficiency of large data tables in MySQL. The editor shares it here for your reference; I hope you will come away with a solid understanding of the topic after reading it.

Problems that can arise when copying a large table:

1. The database may crash or hang.

2. Reads and writes from other processes slow down.

3. Data may fail to be written because of an inconsistent data format (for example, if a varchar column is changed to int, an error is reported when the data is too long).

Solutions:

1. Recreate the table with CREATE TABLE new_table SELECT * FROM old_table, which is equivalent to copying the table (not recommended): only the columns and data are copied; the primary keys, indexes, and default values of the table structure are not.
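To see what scheme 1 loses, compare the table definitions before and after. A minimal sketch, assuming a hypothetical old_table with a primary key and a secondary index (all names here are placeholders):

```sql
-- Hypothetical source table with a primary key and an index.
CREATE TABLE old_table (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(50),
  INDEX idx_name (name)
);

-- Scheme 1: copies columns and rows only.
CREATE TABLE new_table SELECT * FROM old_table;

-- Comparing the definitions shows the difference:
SHOW CREATE TABLE old_table;  -- has PRIMARY KEY, AUTO_INCREMENT, idx_name
SHOW CREATE TABLE new_table;  -- plain columns, no keys or indexes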

2. Do it in two steps:

1) CREATE TABLE new_table LIKE old_table creates a new table with the same structure as old_table (including primary keys, indexes, default values, etc.).

2) INSERT INTO new_table SELECT * FROM old_table copies all the data from old_table into new_table.

(If the data volume is small, on the order of tens of thousands of rows, this scheme is recommended; it is not suitable once the table reaches millions or tens of millions of rows.)

Extension: if you only need to copy part of the table, you can specify the columns and a row range: INSERT INTO new_table (field1, field2) SELECT field1, field2 FROM old_table [LIMIT n, m]
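Putting scheme 2 together, here is a sketch using the same hypothetical table and column names as above (the column list and LIMIT values are placeholders, not from a real schema):

```sql
-- Step 1: clone the structure, including keys, indexes, and defaults.
CREATE TABLE new_table LIKE old_table;

-- Step 2: copy all rows.
INSERT INTO new_table SELECT * FROM old_table;

-- Partial copy: only selected columns and a row range
-- (LIMIT offset, count limits how many rows are copied).
INSERT INTO new_table (id, name)
SELECT id, name FROM old_table LIMIT 0, 10000;
```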

3. Export and re-import:

1) Export the table data with the SELECT ... INTO OUTFILE command.

2) Import the table data with the LOAD DATA INFILE ... INTO TABLE command.

Without further ado, with a data volume of about 1 million rows, feel the difference in processing speed between scheme 2 and scheme 3:

> select * from money_info into outfile '/var/lib/mysql-files/money.txt';
> create table money_info_cyq11 like money_info;
> load data infile '/var/lib/mysql-files/money.txt' into table money_info_cyq11;
> create table money_info_cyq22 like money_info;
> insert into money_info_cyq22 select * from money_info;

In this test, scheme 3 was about 4 times faster; the 20x speedup claimed elsewhere on the Internet was not observed here [covering my face].

Note: there is one more catch here.

OUTFILE can only write to a specific directory.

> show variables like '%secure%';

This command shows the directory that secure_file_priv points to; the OUTFILE export must be written to that location.
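Note that secure_file_priv is read-only at runtime, so if it points somewhere unusable (or is empty, disabling export entirely), it has to be changed in the server configuration and the server restarted. A sketch of the relevant my.cnf fragment (the path is an example, not a requirement):

```
[mysqld]
# Directory that SELECT ... INTO OUTFILE and LOAD DATA INFILE may use.
secure_file_priv = /var/lib/mysql-files
```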

That is all on how to improve the copy efficiency of large data tables in MySQL. I hope the content above is helpful to you. If you found the article useful, feel free to share it so more people can see it.
