This article explains in detail how to perform data migration in MySQL. I find the approach very practical, so I am sharing it as a reference; I hope you get something out of it after reading.
There are three methods commonly used for MySQL migration:
1. Export the database directly, copy the file to the new server, and import it on the new server.
2. Use third-party migration tools.
3. Copy the data files and table structure files directly to the new server and attach them to a MySQL service with the same configuration.
Advantages of the first scheme: the data files are rebuilt, so they take up less space; compatibility is the best; export and import rarely cause problems; and the approach is flexible. Disadvantage: a traditional export and import takes a long time.
Advantage of the second scheme: once configured, the transfer runs unattended and completes automatically. Disadvantages: it is inflexible, setup is tedious, the transfer takes a long time, and it is hard to resume from the point of failure after an error.
Advantages of the third scheme: it takes little time, the file copy can resume from a breakpoint, and there are few steps. Disadvantages: the MySQL version and configuration must be identical on the old and new servers, which may cause unknown problems.
If the migration is driven by a business bottleneck or a project change that requires altering the table structure (for example, converting to partitioned tables), only the first method will do.
I usually use MySQL's SELECT ... INTO OUTFILE and LOAD DATA INFILE to export and import data quickly.
[Export preparation]
Disable logging before the export to avoid frequent log writes during the data backup.
Drop the primary key and turn off auto-increment. The primary key is not very useful in this table; auto-increment was only needed because an auto-increment column in MySQL must be a key, so the column was made the primary key. Both can be re-established after the data transfer.
Drop the indexes on the table. Indexes greatly slow down inserts, so drop them first and rebuild them after the transfer; a minimal sketch of these two steps follows.
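The following is a minimal sketch of those two preparation steps, assuming a hypothetical source table t1 with an AUTO_INCREMENT primary key id and a secondary index idx_uploadtime (these names are for illustration only and are not from the original setup):
ALTER TABLE t1 MODIFY id INT NOT NULL;      -- remove AUTO_INCREMENT first, since an auto-increment column must be part of a key
ALTER TABLE t1 DROP PRIMARY KEY;            -- now the primary key can be dropped
ALTER TABLE t1 DROP INDEX idx_uploadtime;   -- drop the secondary index; rebuild it after the transfer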
The MySQL system parameters are tuned as follows (a brief meaning is noted after each setting):
innodb_data_file_path = ibdata1:1G:autoextend    # system tablespace file, 1G initial size, auto-extending
innodb_file_per_table = 1                        # store each InnoDB table in its own .ibd file
innodb_thread_concurrency = 20                   # limit on concurrent threads inside InnoDB
innodb_flush_log_at_trx_commit = 1               # flush the redo log to disk at every commit (safest setting)
innodb_log_file_size = 256M                      # size of each redo log file
innodb_log_files_in_group = 3                    # number of redo log files in the group
innodb_max_dirty_pages_pct = 50                  # target maximum percentage of dirty pages in the buffer pool
innodb_lock_wait_timeout = 120                   # seconds to wait for a row lock before giving up
key_buffer_size = 400M                           # MyISAM index cache
innodb_buffer_pool_size = 4G                     # InnoDB buffer pool for caching data and indexes
innodb_additional_mem_pool_size = 20M            # memory pool for InnoDB internal structures (removed in newer versions)
innodb_log_buffer_size = 20M                     # redo log buffer
query_cache_size = 40M                           # query cache size
read_buffer_size = 4M                            # per-thread sequential read buffer
read_rnd_buffer_size = 8M                        # per-thread random read buffer
tmp_table_size = 16M                             # maximum size of in-memory temporary tables
max_allowed_packet = 32M                         # maximum packet size
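These settings normally live in my.cnf and take effect after a restart. As a hedged aside (not from the original article), dynamic variables can also be inspected or adjusted at runtime, while options such as innodb_log_file_size can only be changed in the configuration file and require a restart:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';   -- check the current value of a setting
SET GLOBAL innodb_max_dirty_pages_pct = 50;      -- change a dynamic variable without restarting the server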
[Operation methods and results]
(1) CREATE TABLE T2 AS SELECT * FROM T1
CREATE TABLE dn_location3
PARTITION BY RANGE (UNIX_TIMESTAMP(uploadtime))
(PARTITION p141109 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-09 00:00:00')),
 PARTITION p141110 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-10 00:00:00')),
 PARTITION p141111 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-11 00:00:00')),
 PARTITION p141112 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-12 00:00:00')))
AS SELECT * FROM dn_location
WHERE uploadtime > '2014-08-04';
A table created with CREATE TABLE T2 AS SELECT * FROM dn_location2 lacks the index information of the source table: only the column structure is copied, and the new table has no indexes.
This method is quite efficient. In my earlier test environment, a table with 4600W rows (W = 万, i.e. 10,000 rows) was transferred to a new table in 42 minutes. Because I added the partitioning clause to the CREATE TABLE statement, the new table was created directly as a partitioned table, so the data transfer and the creation of the partitioned table happened in one step. The average speed of this method is about 6570W rows per hour. I have not yet looked into other caveats of this method.
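For completeness, here is a hedged sketch of re-establishing what was removed before the transfer, reusing the hypothetical column and index names from the preparation step (not from the original article). Note that on a table partitioned by uploadtime, such as dn_location3, every unique key, including the primary key, must also contain the partitioning column:
ALTER TABLE dn_location3 ADD INDEX idx_uploadtime (uploadtime);   -- rebuild the secondary index dropped before the export
ALTER TABLE dn_location3 ADD PRIMARY KEY (id, uploadtime);        -- the primary key must include the partitioning column
ALTER TABLE dn_location3 MODIFY id INT NOT NULL AUTO_INCREMENT;   -- restore auto-increment now that the column is part of a key again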
(2) Using MySQL's SELECT ... INTO OUTFILE and LOAD DATA INFILE
The LOAD DATA INFILE statement reads a text file into a table at very high speed. When SELECT ... INTO OUTFILE and LOAD DATA INFILE are used together to write data from a database to a file and then read it back in, the field- and line-handling options of the two statements must match; otherwise LOAD DATA INFILE will not interpret the contents of the file correctly.
Suppose you use SELECT ... INTO OUTFILE to write data to a file with comma-separated fields:
SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ',' FROM table2;
In order to read back a comma-separated file, the correct statement should be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
If you instead try to read the file with the statement shown below, it will not work, because it instructs LOAD DATA INFILE to look for tabs between field values:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY '\t';
Here are the commands I use to export and import:
SELECT * INTO OUTFILE 'ddd.txt' FIELDS TERMINATED BY ',' FROM dn_location;
LOAD DATA INFILE 'ddd.txt' INTO TABLE dn_location2 FIELDS TERMINATED BY ',';
With this method, the exported file contains only the data of each field, separated by commas; it does not contain the table structure, and because the file carries no database or table name, you need to be explicit about the target when exporting and importing. In my tests, about 160 million records were exported in 18 minutes, and 6472W records were imported in 46 minutes, an average of about 8442W rows per hour. The MySQL documentation also states that this method is roughly 20 times faster than loading the same data with individual INSERT statements.
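One step the commands above leave implicit: because the exported file contains only data, the target table must already exist before LOAD DATA INFILE runs. A minimal sketch, assuming the source and target tables are on the same server (CREATE TABLE ... LIKE copies the column definitions and indexes but no rows; across servers, the output of SHOW CREATE TABLE on the source can be used instead):
CREATE TABLE dn_location2 LIKE dn_location;                                    -- create an empty table with the same structure
LOAD DATA INFILE 'ddd.txt' INTO TABLE dn_location2 FIELDS TERMINATED BY ',';   -- then load the exported data into it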
This is the end of the article on how to achieve data migration in MySQL. I hope the content above is of some help and lets you learn something new. If you think the article is good, please share it so more people can see it.