This article shows you how to compare and test ways of batch-initializing data in MySQL. The content is concise and easy to understand, and I hope you get something out of the detailed walkthrough.
There have always been plenty of complaints about the performance of MySQL stored procedures, and to be honest, they are slow enough: when you want to run some comparison tests, it can take quite a while just for a stored procedure to initialize tens of thousands of rows. I had already done similar tests on Oracle, so today I am catching the late bus and finishing this unfinished task.
In a rough test based on 1 million rows, the initialization time improved from nearly 8 minutes to a little over 10 seconds.
Scenario 1: stored procedure import
The test table is users, using the InnoDB storage engine, with 1 million rows of data planned for initialization.

create table users (
    userid int(11) unsigned not null,
    user_name varchar(64) default null,
    primary key (userid)
) engine=innodb default charset=utf8;
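One practical note before running any of the scenarios: if you rerun a test against the same table, clear it first, or the primary key on userid will reject the duplicate rows. A trivial but easy-to-forget step:

truncate table users;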
The data is initialized in the following way, so we use the stored procedure approach.

delimiter $$
drop procedure if exists proc_auto_insertdata$$
create procedure proc_auto_insertdata()
begin
    declare init_data integer default 1;
    -- loop body reconstructed from context: insert one row per iteration,
    -- 1 million rows in total; the user_name format is an assumption
    while init_data <= 1000000 do
        insert into users values (init_data, concat('user_', init_data));
        set init_data = init_data + 1;
    end while;
end$$
delimiter ;

> source create_proc.sql
Query OK, 0 rows affected, 1 warning (0.04 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 1 row affected (7 min 47.41 sec)

So although this approach gets the job done in one step, the performance is still not satisfactory. Results differ under different configurations, and some people have reported close to 50 minutes; it depends on your own test environment, from which you can build up a gradient of data points.
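As an aside, part of the stored-procedure slowness comes from committing every single insert under autocommit. A minimal sketch of a variant (proc_auto_insertdata_tx is a name introduced here for illustration, not from the original test) wraps the loop in one explicit transaction so the million inserts are committed once:

delimiter $$
drop procedure if exists proc_auto_insertdata_tx$$
create procedure proc_auto_insertdata_tx()
begin
    declare init_data integer default 1;
    -- one explicit transaction instead of one implicit commit per row
    start transaction;
    while init_data <= 1000000 do
        insert into users values (init_data, concat('user_', init_data));
        set init_data = init_data + 1;
    end while;
    commit;
end$$
delimiter ;

How much this helps depends on the environment, so it is worth measuring alongside the numbers above.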
Let's look at the second scenario.
Scenario 2: use memory tables
In the second scenario, we try to optimize with a memory table, so we need to create one, for example users_memory:

create table users_memory (
    userid int(11) unsigned not null,
    user_name varchar(64) default null,
    primary key (userid)
) engine=memory default charset=utf8;

Then we import the data with the following stored procedure. The logic is almost identical to the first stored procedure; only the table name differs, so the rows are written into the memory table.
delimiter $$
drop procedure if exists proc_auto_insertdata$$
create procedure proc_auto_insertdata()
begin
    declare init_data integer default 1;
    -- same reconstructed loop as in scenario 1, but targeting the memory table
    while init_data <= 1000000 do
        insert into users_memory values (init_data, concat('user_', init_data));
        set init_data = init_data + 1;
    end while;
end$$
delimiter ;

> source create_proc_mem.sql
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 1 row affected (4 min 40.23 sec)

This step takes nearly 5 minutes; the remaining step, copying the memory table's data into the InnoDB table, is fast and finishes in a few seconds.
> insert into users select * from users_memory;

The whole process takes less than five minutes, which is much faster than the first scenario.
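One caveat with the MEMORY engine, which applies to scenarios 2 and 4 alike: the table must fit within max_heap_table_size, or the load will stop with a "table is full" error. The limit is captured when the table is created, so check (and, if necessary, raise) it before creating users_memory. A hedged sketch, where the 256 MB value is only an illustrative guess to be sized to your data:

show variables like 'max_heap_table_size';
set session max_heap_table_size = 268435456;  -- 256 MB; set before create table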
Scenario 3: use a program or script to generate data and import it in bulk
The third approach is offered just to start the discussion: if you are familiar with PHP you can write it entirely in PHP, or in whatever scripting language you know, as long as it meets the requirement. I use shell here, with no special tricks.

The shell script contains the following:

for i in {1..1000000}
do
    # one CSV line per row: userid,user_name
    # (the name format is reconstructed; the original line was garbled)
    echo $i,user_$i
done > a.lst

The script is very simple; generating the data takes about 8 seconds, and the file is about 18 MB.
# time sh a.sh
real 0m8.366s
user 0m6.312s
sys 0m2.039s

Then we use load data to import the file. The load itself also takes about 8 seconds, so the whole process finishes in under 19 seconds.
> load data infile '/u01/a.lst' into table users fields terminated by ',';
Query OK, 1000000 rows affected (8.05 sec)
Records: 1000000 Deleted: 0 Skipped: 0 Warnings: 0
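If load data infile refuses to read the file, check the server's secure_file_priv setting before blaming the syntax: it restricts which directory the server may read from. A quick check:

show variables like 'secure_file_priv';
-- empty string: no restriction; a path: the file must live under that directory;
-- NULL: server-side LOAD DATA INFILE is disabled entirely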
Scenario 4: a hybrid of memory table and external file import
The fourth scheme was conceived on the spur of the moment and combines characteristics of the previous ones; of course, it cannot claim to be the best.
First, generate the data with the script as in scenario 3 (estimate about 9 seconds for it), then load the file into the memory table users_memory.
> load data infile '/u01/a.lst' into table users_memory fields terminated by ',';
Query OK, 1000000 rows affected (1.91 sec)
Records: 1000000 Deleted: 0 Skipped: 0 Warnings: 0

Then copy the data from the memory table into the target table users:
> insert into users select * from users_memory;
Query OK, 1000000 rows affected (7.48 sec)
Records: 1000000 Duplicates: 0 Warnings: 0

The whole process takes about 18 seconds, similar to the third scenario, though it looks a little more complicated and verbose.
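Pulling the four scenarios together on the same 1,000,000-row data set, the timings reported above are roughly:

Scenario                                           Approximate total time
1. Stored procedure into InnoDB                    ~8 minutes (up to ~50 minutes on some setups)
2. Stored procedure into MEMORY, then insert       ~5 minutes
3. Shell script + load data infile                 ~19 seconds
4. Shell script + load into MEMORY, then insert    ~18 seconds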
That covers the comparison tests for batch-initializing data in MySQL; I hope you picked up something useful from them.