2025-01-19 Update, from SLTechnology News & Howtos
This article explains in detail the ways to perform bulk inserts with MyBatis. The approaches are quite practical, and I hope you get something out of the article after reading it.
Preface
There are three main ways to batch-insert data:
Execute a single insert statement repeatedly
Foreach splicing sql
Batch processing
I. Preliminary preparation
The demo is based on Spring Boot + MySQL, and Lombok is used to omit get/set methods. For the dependencies, please see pom.xml.
1.1 Table structure
The id column uses the database's auto-increment.

```sql
DROP TABLE IF EXISTS `user_info_batch`;
CREATE TABLE `user_info_batch` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
  `user_name` varchar(20) NOT NULL COMMENT 'account name',
  `pass_word` varchar(100) NOT NULL COMMENT 'login password',
  `nick_name` varchar(30) NOT NULL COMMENT 'nickname',
  `mobile` varchar(30) NOT NULL COMMENT 'mobile number',
  `email` varchar(100) DEFAULT NULL COMMENT 'email address',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `gmt_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC COMMENT='Mybatis Batch';
```

1.2 Project configuration
If you are careful, you may have found that the database url is followed by a rewriteBatchedStatements=true. What's the use? Don't worry, it will be introduced later.
```yaml
# Database configuration
spring:
  datasource:
    url: jdbc:mysql://47.111.118.152:3306/mybatis?rewriteBatchedStatements=true
    username: mybatis
    password: password
    driver-class-name: com.mysql.cj.jdbc.Driver
# mybatis
mybatis:
  mapper-locations: classpath:mapper/*.xml
  type-aliases-package: cn.van.mybatis.batch.entity
```

1.3 Entity class
```java
@Data
@Accessors(chain = true)
public class UserInfoBatchDO implements Serializable {
    private Long id;
    private String userName;
    private String passWord;
    private String nickName;
    private String mobile;
    private String email;
    private LocalDateTime gmtCreate;
    private LocalDateTime gmtUpdate;
}
```
1.4 UserInfoBatchMapper
```java
public interface UserInfoBatchMapper {
    /**
     * single insert
     * @param info
     * @return
     */
    int insert(UserInfoBatchDO info);

    /**
     * foreach insert
     * @param list
     * @return
     */
    int batchInsert(List<UserInfoBatchDO> list);
}
```
1.5 UserInfoBatchMapper.xml
```xml
<insert id="insert" parameterType="userInfoBatchDO">
    insert into user_info_batch (user_name, pass_word, nick_name, mobile,
        email, gmt_create, gmt_update)
    values (#{userName,jdbcType=VARCHAR}, #{passWord,jdbcType=VARCHAR},
        #{nickName,jdbcType=VARCHAR}, #{mobile,jdbcType=VARCHAR},
        #{email,jdbcType=VARCHAR}, #{gmtCreate,jdbcType=TIMESTAMP},
        #{gmtUpdate,jdbcType=TIMESTAMP})
</insert>

<insert id="batchInsert">
    insert into user_info_batch (user_name, pass_word, nick_name, mobile,
        email, gmt_create, gmt_update)
    values
    <foreach collection="list" item="item" separator=",">
        (#{item.userName,jdbcType=VARCHAR}, #{item.passWord,jdbcType=VARCHAR},
        #{item.nickName,jdbcType=VARCHAR}, #{item.mobile,jdbcType=VARCHAR},
        #{item.email,jdbcType=VARCHAR}, #{item.gmtCreate,jdbcType=TIMESTAMP},
        #{item.gmtUpdate,jdbcType=TIMESTAMP})
    </foreach>
</insert>
```
1.6 preliminary data
To facilitate testing, several variables are extracted and loaded in advance.
```java
private List<UserInfoBatchDO> list = new ArrayList<>();
private List<UserInfoBatchDO> lessList = new ArrayList<>();
private List<UserInfoBatchDO> lageList = new ArrayList<>();
private List<UserInfoBatchDO> warmList = new ArrayList<>();
// timing tool
private StopWatch sw = new StopWatch();
```
To facilitate the assembly of data, a common method has been extracted.
```java
private List<UserInfoBatchDO> assemblyData(int count) {
    List<UserInfoBatchDO> list = new ArrayList<>();
    UserInfoBatchDO userInfoDO;
    for (int i = 0; i < count; i++) {
        userInfoDO = new UserInfoBatchDO()
                .setUserName("Van")
                .setNickName("Wind Dust blog")
                .setMobile("17098705205")
                .setPassWord("password")
                .setGmtUpdate(LocalDateTime.now());
        list.add(userInfoDO);
    }
    return list;
}
```
Preheating the data:

```java
@Before
public void assemblyData() {
    list = assemblyData(200000);
    lessList = assemblyData(2000);
    lageList = assemblyData(1000000);
    warmList = assemblyData(5);
}
```

II. Execute a single insert statement repeatedly
A lazy programmer might do it this way: simply wrap a for loop around the single insert statement.
2.1 corresponding mapper interface
```java
int insert(UserInfoBatchDO info);
```

2.2 Test method
Because this method is too slow, the row count is reduced to 2,000.
```java
@Test
public void insert() {
    log.info("[program warm-up]");
    for (UserInfoBatchDO userInfoBatchDO : warmList) {
        userInfoBatchMapper.insert(userInfoBatchDO);
    }
    log.info("[end of warm-up]");
    sw.start("execute a single insert statement repeatedly");
    // inserting 200,000 rows one by one is too slow, so only 2,000 are inserted here
    for (UserInfoBatchDO userInfoBatchDO : lessList) {
        userInfoBatchMapper.insert(userInfoBatchDO);
    }
    sw.stop();
    log.info("all cost info: {}", sw.prettyPrint());
}
```
2.3 execution time
Run 1: 59887 ms (100%) - repeatedly execute a single insert statement
Run 2: 64853 ms (100%) - repeatedly execute a single insert statement
Run 3: 58235 ms (100%) - repeatedly execute a single insert statement
With this method, inserting 2,000 rows took an average of 60,991 ms over three runs.
III. Foreach splicing SQL
3.1 corresponding mapper APIs
```java
int batchInsert(List<UserInfoBatchDO> list);
```

3.2 Test method
This method and the next one are both tested with 200,000 rows.
```java
@Test
public void batchInsert() {
    log.info("[program warm-up]");
    for (UserInfoBatchDO userInfoBatchDO : warmList) {
        userInfoBatchMapper.insert(userInfoBatchDO);
    }
    log.info("[warm-up end]");
    sw.start("foreach splicing sql");
    userInfoBatchMapper.batchInsert(list);
    sw.stop();
    log.info("all cost info: {}", sw.prettyPrint());
}
```

3.3 Execution time
Run 1: 18835 ms (100%) - foreach splicing sql
Run 2: 17895 ms (100%) - foreach splicing sql
Run 3: 19827 ms (100%) - foreach splicing sql
With this method, inserting 200,000 rows took an average of 18,852 ms over three runs.
IV. Batch processing
This approach reuses the mapper and XML from section 2.1.
4.1 rewriteBatchedStatements parameters
When I first tested, I found that switching to MyBatis batch submission did not work: rows were in fact still inserted one by one, and it was even slower than the foreach-spliced SQL approach, which made no sense.
It later turned out that for the statements to actually execute as a batch, a parameter must be added to the connection URL: rewriteBatchedStatements=true.
Introduction of rewriteBatchedStatements parameters
To achieve high-performance batch inserts, the MySQL JDBC connection URL needs the rewriteBatchedStatements parameter, and the driver must be version 5.1.13 or above. By default, the MySQL JDBC driver ignores the intent of executeBatch(): it breaks up the group of SQL statements we expect to run as a batch and sends them to the database one by one, so the "batch" insert is really a series of single inserts, which directly causes the low performance. Only when rewriteBatchedStatements is set to true will the driver execute the SQL in true batches. The option applies to INSERT, UPDATE and DELETE statements.
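The effect of rewriteBatchedStatements can be pictured as the driver collapsing the queued parameter sets into one multi-row INSERT before sending it over the wire. The sketch below only illustrates that idea; the class and method names are invented for the demo and are not part of the MySQL driver:

```java
import java.util.List;
import java.util.StringJoiner;

public class RewriteBatchDemo {
    // Conceptual illustration of what rewriteBatchedStatements=true enables:
    // instead of sending N single-row INSERTs, the driver rewrites the queued
    // parameter sets into one multi-row INSERT (one round trip, one statement).
    static String rewrite(String insertPrefix, List<String> valueTuples) {
        StringJoiner values = new StringJoiner(", ");
        for (String tuple : valueTuples) {
            values.add(tuple);
        }
        return insertPrefix + " values " + values;
    }

    public static void main(String[] args) {
        String sql = rewrite(
                "insert into user_info_batch (user_name, mobile)",
                List.of("('Van', '17098705205')", "('Tom', '13000000000')"));
        // One statement carrying both rows:
        // insert into user_info_batch (user_name, mobile) values ('Van', '17098705205'), ('Tom', '13000000000')
        System.out.println(sql);
    }
}
```

Without the parameter, each parameter set would travel to the server as its own single-row INSERT, which is why the batch mode appeared to do nothing.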
4.2 batch preparation
Manual injection of SqlSessionFactory
```java
@Resource
private SqlSessionFactory sqlSessionFactory;
```
Test code
```java
@Test
public void processInsert() {
    log.info("[program warm-up]");
    for (UserInfoBatchDO userInfoBatchDO : warmList) {
        userInfoBatchMapper.insert(userInfoBatchDO);
    }
    log.info("[warm-up end]");
    sw.start("batch execution insert");
    // open batch mode
    SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH);
    UserInfoBatchMapper mapper = session.getMapper(UserInfoBatchMapper.class);
    for (int i = 0, length = list.size(); i < length; i++) {
        mapper.insert(list.get(i));
        // commit every 20,000 rows to prevent memory overflow
        if (i % 20000 == 19999) {
            session.commit();
            session.clearCache();
        }
    }
    session.commit();
    session.clearCache();
    sw.stop();
    log.info("all cost info: {}", sw.prettyPrint());
}
```

4.3 Execution time
Run 1: 9346 ms (100%) - batch execution insert
Run 2: 8890 ms (100%) - batch execution insert
Run 3: 9042 ms (100%) - batch execution insert
With this method, inserting 200,000 rows took an average of 9,092 ms over three runs.
4.4 if the data is larger
When I scaled the data up to 1,000,000 rows, the foreach-spliced SQL approach could no longer complete the insert, so only the batch insert time was measured.
To run this test, simply swap list for lageList in the test code.
Run 1: 32419 ms (100%) - batch execution insert
Run 2: 31935 ms (100%) - batch execution insert
Run 3: 33048 ms (100%) - batch execution insert
With this method, inserting 1,000,000 rows took an average of 32,467 ms over three runs.
Summary

Method                              Rows        Average time (3 runs)
Repeated single insert statement    2,000       60,991 ms
Foreach splicing sql                200,000     18,852 ms
Batch processing                    200,000     9,092 ms
Batch processing                    1,000,000   32,467 ms
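A quick back-of-the-envelope throughput comparison can be derived from the averages measured above. The helper below only does the arithmetic; the millisecond figures are this article's own measurements, and the class name is made up for the demo:

```java
public class ThroughputCalc {
    // Rows per second, using integer arithmetic for a rough comparison.
    static long rowsPerSecond(long rows, long millis) {
        return rows * 1000 / millis;
    }

    public static void main(String[] args) {
        System.out.println(rowsPerSecond(2_000, 60_991));     // single inserts, roughly 32 rows/s
        System.out.println(rowsPerSecond(200_000, 18_852));   // foreach, roughly 10,608 rows/s
        System.out.println(rowsPerSecond(200_000, 9_092));    // batch 200k, roughly 21,997 rows/s
        System.out.println(rowsPerSecond(1_000_000, 32_467)); // batch 1M, roughly 30,800 rows/s
    }
}
```

The gap is stark: batch processing moves rows several hundred times faster than the single-insert loop.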
Although looping over single inserts is very inefficient, it takes the least code, so it is acceptable when the data volume is small; with a large data volume it should be avoided because the performance is far too low.

The foreach-spliced SQL approach requires a lot of hand-written XML and SQL and is error-prone; although its performance is decent, it still cannot cope with truly large data volumes, so it is not recommended.

Batch execution is the recommended approach when inserting a large amount of data, and it is also convenient to use.
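If you do stay with foreach-spliced SQL for somewhat larger lists, one common mitigation (not part of the original benchmark) is to split the list into fixed-size chunks so each generated statement stays well below MySQL's max_allowed_packet. The helper below is a minimal sketch; the chunk size is an arbitrary illustrative value:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkHelper {
    // Splits a list into sublists of at most chunkSize elements, so each
    // foreach-generated INSERT stays a manageable size.
    static <T> List<List<T>> chunk(List<T> source, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < source.size(); i += chunkSize) {
            chunks.add(source.subList(i, Math.min(i + chunkSize, source.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 12; i++) data.add(i);
        // 12 elements with chunkSize 5 yields chunks of 5, 5 and 2 elements
        for (List<Integer> c : chunk(data, 5)) {
            System.out.println(c.size());
        }
    }
}
```

Each chunk would then be passed to batchInsert(...) in its own call.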
That's all for "What are the ways of bulk insertion in MyBatis?". I hope the content above is helpful to you, and if you think the article is good, please share it so more people can see it.