
How to quickly complete MySQL data migration


This article mainly introduces how to quickly complete a MySQL data migration. Many people have doubts about this in day-to-day operations, so the editor consulted various materials and sorted out a simple, easy-to-use method. I hope it helps answer those doubts. Now, please follow the editor and study along!

Background

Last month I built a WeChat Mini Program with friends. Riding the popularity of the May 20 holiday, we accumulated more than 1 million users within two weeks. We added a formid collection point to the Mini Program pages, used for sending template message notifications to WeChat users.

The initial back-end logic of this Mini Program was written with the douchat framework, using the framework's dc_mp_fans table to store the WeChat users' authorized-login information and the dc_mp_tempmsg table to store formids. So far more than 3.8 million records have been collected, and a large share of the formids have been successfully used to send template notifications to users, which has worked well for secondary promotion.

As the amount of data grew, the old server began to run out of disk space. I had recently written a new framework specifically for Mini Program back-end development, so I wanted to migrate the original data into the new system's database. I bought a 4-core 8 GB machine and set about the migration. What follows is a simple record of the process.


Choice of scheme

mysqldump migration

In everyday development, the backup-and-migration method we use most often is to export a sql file with the mysqldump tool and then import that sql file into the new database. Experiments showed that exporting a database with millions of rows through mysqldump takes a few minutes and yields a sql file of about 1 GB; copying that 1 GB file to the other server over scp takes a few more minutes. On the new server I imported the data with the source command: it ran all night without finishing, and the CPU stayed maxed out.
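
For reference, a minimal sketch of this approach (the database name olddb and the target address are hypothetical):

# On the source server: dump the database to a sql file
mysqldump -u root -p olddb > olddb.sql
# Copy the dump to the target server
scp olddb.sql root@target-ip:/data/
# On the target server, inside the mysql client: import the dump
source /data/olddb.sql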

Script migration

Exporting and importing data directly from the command line is convenient, but when the data volume is large it is time-consuming and demands a lot of the server. If the migration deadline is not tight, you can instead write a script to move the data. Although I did not actually try this, I considered two possible scripting schemes.

The first way is to run a migration script on the target server, connect remotely to the source server's database, read the source data in chunks under a query condition, and write each chunk to the target database after it is read. This kind of migration can be inefficient because export and import form a synchronous process: a chunk must be read before it can be written. If the query conditions are designed sensibly, though, several migration workers can be started to migrate in parallel, as sketched below.
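
A minimal Go sketch of this chunked scheme, assuming hypothetical connection strings, a simplified three-column schema (the real tables have many more fields), and the github.com/go-sql-driver/mysql driver:

// migrate_chunks.go — a sketch only: chunked, parallel pull migration.
package main

import (
    "database/sql"
    "log"
    "sync"

    _ "github.com/go-sql-driver/mysql" // MySQL driver (assumed dependency)
)

const chunkSize = 1000 // rows per chunk; tune to taste

// migrateRange copies rows with lo <= id < hi from source to target.
func migrateRange(src, dst *sql.DB, lo, hi int64) {
    rows, err := src.Query(
        "SELECT id, openid, nickname FROM dc_mp_fans WHERE id >= ? AND id < ?",
        lo, hi)
    if err != nil {
        log.Println("read:", err)
        return
    }
    defer rows.Close()
    for rows.Next() {
        var id int64
        var openid, nickname string
        if err := rows.Scan(&id, &openid, &nickname); err != nil {
            log.Println("scan:", err)
            continue
        }
        // One INSERT per row keeps the sketch short; batching is faster.
        if _, err := dst.Exec(
            "INSERT INTO wxa_fans (id, openid, nickname) VALUES (?, ?, ?)",
            id, openid, nickname); err != nil {
            log.Println("write:", err)
        }
    }
}

func main() {
    // Hypothetical DSNs; replace with real credentials and hosts.
    src, err := sql.Open("mysql", "user:pass@tcp(source-host:3306)/olddb")
    if err != nil {
        log.Fatal(err)
    }
    dst, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/newdb")
    if err != nil {
        log.Fatal(err)
    }

    var maxID int64
    if err := src.QueryRow("SELECT MAX(id) FROM dc_mp_fans").Scan(&maxID); err != nil {
        log.Fatal(err)
    }

    var wg sync.WaitGroup
    sem := make(chan struct{}, 8) // limit to 8 concurrent chunks
    for lo := int64(1); lo <= maxID; lo += chunkSize {
        wg.Add(1)
        sem <- struct{}{}
        go func(lo int64) {
            defer wg.Done()
            defer func() { <-sem }()
            migrateRange(src, dst, lo, lo+chunkSize)
        }(lo)
    }
    wg.Wait()
}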

The second way combines Redis to build a "producer + consumer" migration pipeline. The source server acts as the producer: it runs a multi-threaded script that reads the database in parallel and pushes rows onto a Redis queue. The target server acts as the consumer: it runs another multi-threaded script that connects to Redis remotely, pops rows off the queue in parallel, and writes them to the target database. Compared with the first way this is an asynchronous scheme: export and import proceed at the same time, with Redis as a transfer station, so efficiency improves considerably.

Migration scripts like these can be written in Go, whose native concurrency support makes parallel migration straightforward; a sketch of the producer/consumer scheme follows.
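
A minimal sketch under the same assumptions (simplified three-column schema, hypothetical addresses and queue name), using the github.com/redis/go-redis/v9 client; produce would run on the source server and consume on the target server:

// redis_pipeline.go — a sketch only: Redis-backed producer/consumer migration.
package main

import (
    "context"
    "database/sql"
    "encoding/json"
    "log"

    _ "github.com/go-sql-driver/mysql" // MySQL driver (assumed dependency)
    "github.com/redis/go-redis/v9"     // Redis client (assumed dependency)
)

// fan is a simplified row; the real tables carry many more fields.
type fan struct {
    ID       int64  `json:"id"`
    Openid   string `json:"openid"`
    Nickname string `json:"nickname"`
}

// produce runs on the source server: read rows, push JSON onto the queue.
func produce(ctx context.Context, db *sql.DB, rdb *redis.Client) error {
    rows, err := db.Query("SELECT id, openid, nickname FROM dc_mp_fans")
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        var f fan
        if err := rows.Scan(&f.ID, &f.Openid, &f.Nickname); err != nil {
            return err
        }
        b, _ := json.Marshal(f)
        if err := rdb.LPush(ctx, "fans_queue", b).Err(); err != nil {
            return err
        }
    }
    return rows.Err()
}

// consume runs on the target server: pop JSON off the queue, insert rows.
// BRPOP with timeout 0 blocks forever; stop it once the queue stays empty.
func consume(ctx context.Context, db *sql.DB, rdb *redis.Client) error {
    for {
        res, err := rdb.BRPop(ctx, 0, "fans_queue").Result() // [key, value]
        if err != nil {
            return err
        }
        var f fan
        if err := json.Unmarshal([]byte(res[1]), &f); err != nil {
            return err
        }
        if _, err := db.Exec(
            "INSERT INTO wxa_fans (id, openid, nickname) VALUES (?, ?, ?)",
            f.ID, f.Openid, f.Nickname); err != nil {
            return err
        }
    }
}

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "redis-host:6379"}) // hypothetical
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/newdb")
    if err != nil {
        log.Fatal(err)
    }
    // On the source server this would call produce(ctx, srcDB, rdb) instead.
    if err := consume(ctx, db, rdb); err != nil {
        log.Fatal(err)
    }
}

Running several consume loops (for example in multiple goroutines) on the target server gives the parallel consumption described above.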

File migration

The first scheme is too slow and the second has a high coding cost. After comparing them against material found online, I finally chose to migrate the millions of rows as files, exported with MySQL's select ... into outfile command and imported with load data infile.

Migration process

Export data files in the source database

select * from dc_mp_fans into outfile '/data/fans.txt';

Copy data files to the target server

zip fans.zip /data/fans.txt
scp fans.zip root@ip:/data/

Import files in the target database

unzip /data/fans.zip
load data infile '/data/fans.txt' into table wxa_fans (id,appid,openid,unionid,@dummy,created_at,@dummy,nickname,gender,avatar_url,@dummy,@dummy,@dummy,@dummy,language,country,province,city,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy);

Following these steps, the cross-server migration of a table with millions of rows was completed in a few minutes.

Points to note

MySQL security settings

To execute the load data infile and select ... into outfile commands, MySQL's secure_file_priv option must permit them. You can check the current setting with show global variables like '%secure%';. A value of NULL (the default here) means the import and export commands are not allowed to run at all.
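
On a server where the option still blocks file operations, the check might print something like the following (illustrative output; other %secure% variables may also be listed):

mysql> show global variables like '%secure%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| secure_file_priv | NULL  |
+------------------+-------+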

To enable them, modify the MySQL configuration with vim /etc/my.cnf and set secure_file_priv to empty:

[mysqld]
secure_file_priv=''

After restarting the MySQL service so the change takes effect, the import and export commands can be executed. (An empty value lifts the directory restriction entirely; pointing secure_file_priv at a dedicated directory is the safer choice.)

Source and target table fields do not correspond

The above example migrates data from the dc_mp_fans table of the source database into the wxa_fans table of the target database. The field lists of the two tables (shown as screenshots in the original post, not reproduced here) do not line up one-to-one: dc_mp_fans carries a number of extra columns that wxa_fans does not keep.

When importing, list the file's columns in order, naming each after the target field it should fill; columns that are not wanted can be discarded by binding them to the @dummy user variable, as in the load data command above.
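
As a minimal illustration, for a hypothetical table t and a three-column data file whose middle column is unwanted:

load data infile '/data/file.txt' into table t (col_a, @dummy, col_b);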

Summary

Combining the experience of this data migration:

- For a small amount of data, the mysqldump import/export commands are simple and convenient.
- For a large amount of data and a patient deadline, you can write your own script and pick a suitable parallel scheme, at a higher coding cost.
- For a large amount of data that must be migrated in a short time, MySQL's file import and export is the most efficient route.

At this point, the study of how to quickly complete a MySQL data migration is over. I hope it has resolved your doubts. Pairing theory with practice helps you learn better, so go and try it! If you want to keep learning more related knowledge, please continue to follow the site; the editor will keep working hard to bring you more practical articles!
