This article looks at how to use the net-buffer-length parameter of mysqldump. It turned out to be quite practical, so it is shared here as a reference.
net-buffer-length in mysqldump
An expert once said that mysqldump's net-buffer-length parameter is a throwaway option that does little for performance, so I never paid attention to it. Then an incidental test ran into it, I dug in, and it turns out to be genuinely useful (at least for beginners like me).
Let's walk through it with an example.
Before we get to net-buffer-length, let's look at another mysqldump parameter: extended-insert.
This parameter controls whether inserts are merged (it is on by default; to turn it off, pass --skip-extended-insert). In plain terms, with it enabled, mysqldump merges the exported rows into a single INSERT statement; with it disabled, every row gets its own INSERT.
When enabled, the exported format looks like this:
INSERT INTO table_name VALUES (xx,xx,xx),(xx,xx,xx),(xx,xx,xx);
When disabled, the exported format looks like this:
INSERT INTO table_name VALUES (xx,xx,xx);
INSERT INTO table_name VALUES (xx,xx,xx);
INSERT INTO table_name VALUES (xx,xx,xx);
...
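To see the difference for yourself, you can dump the same table both ways and compare the INSERT counts. A minimal sketch, reusing the socket and the bigdata.T1 table from the examples below (the output file names are just illustrative):
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 > t1_merged.sql
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 --skip-extended-insert > t1_single.sql
[root@testdb3 ~]# grep -c '^INSERT' t1_merged.sql t1_single.sql
The second file should report one INSERT per row, while the first should report far fewer.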
OK, that parameter should now be clear. But as the author's saying goes: the ideal is plump, the reality is bony.
The reality is that even with extended-insert enabled, multiple INSERT statements are generated once the data exceeds 1 MB:
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 > t1.sql
[root@testdb3 ~]# cat t1.sql | grep INSERT | wc -l
125
This is where the net-buffer-length parameter comes in.
The official explanation of mysqldump's net-buffer-length (as an aside, in MySQL 5.7 the officially recommended alternative is mysqlpump) is the size of the buffer used for client/server communication. The minimum is 4 KB, the maximum is 16 MB, and the default is 1 MB.
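You can confirm the range and default on your own build; a quick check, assuming the same mysqldump binary as in the examples below:
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump --help | grep net-buffer-length
The help output lists the option along with its current default value.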
As we all know, the data exported by mysqldump consists of two parts: the DDL (CREATE statements for tables, stored procedures, views, and so on) and the INSERTs. All row data is emitted as INSERT statements, so INSERTs make up the bulk of a mysqldump export.
Combined with the behavior above: when extended-insert is enabled, a table should in theory produce just one INSERT, but once that INSERT exceeds 1 MB (the default), a second one is started; when that exceeds 1 MB, a third one is started, and so on until all the data has been written out.
Let's take a look at an example:
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 > t1.sql
[root@testdb3 ~]# cat t1.sql | grep INSERT | wc -l
125
[root@testdb3 ~]# sed -n '99p' t1.sql > t2.sql
[root@testdb3 ~]# ls -al t2.sql
-rw-r--r--. 1 root root 1042300 Jan 19 10:44 t2.sql
You can see that a single INSERT is about 1 MB.
Now add net-buffer-length and test again:
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 --net-buffer-length=5000000 > t3.sql
[root@testdb3 ~]# sed -n '100p' t3.sql > t4.sql
[root@testdb3 ~]# ls -al t4.sql
-rw-r--r--. 1 root root 4979542 Jan 19 10:47 t4.sql
With the buffer set to about 5 MB, a single INSERT comes out at about 5 MB.
The maximum is 16 MB:
[root@testdb3 ~]# /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 --net-buffer-length=25000000 > t5.sql
[root@testdb3 ~]# sed -n '100p' t5.sql > t6.sql
[root@testdb3 ~]# ls -al t6.sql
-rw-r--r--. 1 root root 16712034 Jan 19 10:59 t6.sql
Even though 25 MB was requested, a single INSERT still tops out at 16 MB.
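To see the cap across several settings at once, you can loop over a few buffer sizes and count the resulting INSERT statements; a rough sketch, reusing the socket and table from above (the /tmp file names are just illustrative):
[root@testdb3 ~]# for len in 1048576 5000000 16777216 25000000; do
>   /home/mysql3310/bin/mysqldump -S /tmp/mysql3310.sock --databases bigdata --tables T1 --net-buffer-length=$len > /tmp/dump_$len.sql
>   echo "net-buffer-length=$len -> $(grep -c '^INSERT' /tmp/dump_$len.sql) INSERTs"
> done
The INSERT count should stop shrinking once the requested size passes 16 MB.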
Now let's look at the import side. On import there are as many INSERT transactions as there were INSERT statements in the export (assuming autocommit is enabled).
So when you source the file, you will see prompts like the following:
Query OK, 4053 rows affected (0.16 sec)
Records: 4053 Duplicates: 0 Warnings: 0
Query OK, 4053 rows affected (0.51 sec)
Records: 4053 Duplicates: 0 Warnings: 0
Query OK, 4053 rows affected (0.16 sec)
Records: 4053 Duplicates: 0 Warnings: 0
Query OK, 4053 rows affected (0.16 sec)
Records: 4053 Duplicates: 0 Warnings: 0
Query OK, 4053 rows affected (0.16 sec)
Records: 4053 Duplicates: 0 Warnings: 0
This is because the export used the default 1 MB, and 4053 rows happen to come to about 1 MB, hence this output. With a different buffer size, the output looks like this:
Query OK, 19364 rows affected (0.74 sec)
Records: 19364 Duplicates: 0 Warnings: 0
Query OK, 19363 rows affected (0.75 sec)
Records: 19363 Duplicates: 0 Warnings: 0
Query OK, 19363 rows affected (0.75 sec)
Records: 19363 Duplicates: 0 Warnings: 0
Query OK, 19363 rows affected (0.75 sec)
Records: 19363 Duplicates: 0 Warnings: 0
Query OK, 19364 rows affected (0.74 sec)
Records: 19364 Duplicates: 0 Warnings: 0
Here the buffer was set to 5 MB, which works out to roughly 19364 rows per INSERT.
Both examples use the same table structure and data.
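If you want to verify the rows-per-INSERT figure directly from the dump file rather than from the import output, you can count the value tuples on one INSERT line; a quick sketch, which assumes the string '),(' never appears inside your column data:
[root@testdb3 ~]# echo $(( $(sed -n '100p' t3.sql | grep -o '),(' | wc -l) + 1 ))
Adding 1 for the final tuple gives the row count, which should line up with the rows-affected figures above.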
The import also involves another parameter, max_allowed_packet. If this value is set too low, the data cannot be imported at all. For example:
mysql> set global max_allowed_packet=1048576;
mysql> show variables like 'max%';
+--------------------+---------+
| Variable_name      | Value   |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
mysql> source t4.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 18
Current database: tt2
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 19
Current database: tt2
ERROR 2006 (HY000): MySQL server has gone away
mysql> source t2.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 20
Current database: tt2
Query OK, 4053 rows affected (0.51 sec)
Records: 4053 Duplicates: 0 Warnings: 0
With max_allowed_packet set to 1 MB, t4.sql (the roughly 5 MB INSERT extracted earlier) fails to import, while t2.sql (the roughly 1 MB INSERT) goes through after a reconnect. Take a look at the error log:
2017-01-19T13:48:09.975902+08:00 5 [Note] Aborted connection 5 to db: 'tt2' user: 'root' host: 'localhost' (Got a packet bigger than 'max_allowed_packet' bytes)
The log spells out the cause; it is mentioned here only in passing.
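For completeness, the usual way out of that error is to raise max_allowed_packet on the server to at least the size of the largest INSERT before importing. A minimal sketch (the 64 MB value is an illustrative choice, not from the original test, and it assumes the mysql client sits alongside mysqldump; the session must reconnect to pick up the new global value):
mysql> set global max_allowed_packet=67108864;
mysql> exit
[root@testdb3 ~]# /home/mysql3310/bin/mysql -S /tmp/mysql3310.sock tt2
mysql> source t4.sql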
OK. At this point, the role of net-buffer-length should be basically clear. Some people say net-buffer-length does little for performance; I did not run a benchmark, but in theory it should certainly help.
First, the export should be faster: by default the output is split at every 1 MB, versus every 16 MB at the maximum. For a large export (large meaning on the order of gigabytes), that should certainly pay off; for small data sets, of course, there is no visible effect.
Second, the import: committing a transaction every 16 MB instead of every 1 MB means far fewer commits, so provided the disk is fast enough, it should save a good deal of time.
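To check the import-side claim rather than take it on faith, you could time the same data loaded from the 1 MB-per-INSERT dump and the 5 MB-per-INSERT dump; a rough sketch, again assuming the mysql client sits alongside mysqldump (mysqldump's default --add-drop-table means each run recreates the table, so the runs are comparable):
[root@testdb3 ~]# time /home/mysql3310/bin/mysql -S /tmp/mysql3310.sock < t1.sql
[root@testdb3 ~]# time /home/mysql3310/bin/mysql -S /tmp/mysql3310.sock < t3.sql
Any difference should grow with the size of the data set.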
Thank you for reading! That is the end of this article on how to use the net-buffer-length parameter in mysqldump; hopefully it is of some help.