

How to achieve 570000 writes per second in MySQL

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

The editor would like to share how to achieve 570,000 MySQL writes per second. Since most readers are probably not familiar with the topic, this article is shared for reference; I hope you learn a lot from it.

I. Requirement

A friend received a requirement: more than 2 billion rows of data produced by a big data platform had to be loaded into MySQL quickly, so the business could present it the next day.

II. Implementation analysis

For a single table of 2 billion rows: to be honest, in day-to-day MySQL operations this scale is rarely encountered, and experience with it is limited. For single-table InnoDB inserts, if memory is larger than the data set, throughput can be sustained at 100,000-150,000 rows per second. But most of the time the projects we take on have more data than memory. Here XeLabs TokuDB is used for a test.

III. Introduction to XeLabs TokuDB

Project address: https://github.com/XeLabs/tokudb

Optimizations relative to the official TokuDB:

Built-in jemalloc memory allocation

Introduce more built-in TokuDB performance metrics

Support for Xtrabackup backup

Introduce ZSTD compression algorithm

Support for binlog_group_commit features of TokuDB

IV. Test table

TokuDB core configuration:

loose_tokudb_cache_size = 4G
loose_tokudb_directio = ON
loose_tokudb_fsync_log_period = 1000
tokudb_commit_sync = 0

Table structure

CREATE TABLE `user_summary` (
  `user_id` bigint(20) unsigned NOT NULL COMMENT 'user id / phone number',
  `weight` varchar(5) DEFAULT NULL COMMENT 'weight (KG)',
  `level` varchar(20) DEFAULT NULL COMMENT 'weight class',
  `beat_rate` varchar(12) DEFAULT NULL COMMENT 'failure rate',
  `level_num` int(10) DEFAULT NULL COMMENT 'same-level count',
  UNIQUE KEY `uk_user_id` (`user_id`)
) ENGINE=TokuDB DEFAULT CHARSET=utf8;

Writing the data with LOAD DATA

root@localhost [zst]> LOAD DATA INFILE '/u01/work/134-136.txt'
    -> INTO TABLE user_summary (user_id, weight, level, beat_rate, level_num);
Query OK, 200000000 rows affected (5 min 48.30 sec)
Records: 200000000  Deleted: 0  Skipped: 0  Warnings: 0

Calculate the write speed per second:

root@localhost [zst]> select 200000000/(5*60+48.30);
+------------------------+
| 200000000/(5*60+48.30) |
+------------------------+
|            574217.6285 |
+------------------------+
1 row in set (0.00 sec)
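The same arithmetic can be double-checked outside MySQL; a quick sketch in Python, using the row count and timing from the run above:

```python
# 200,000,000 rows loaded in 5 min 48.30 s -> rows written per second.
rows = 200_000_000
seconds = 5 * 60 + 48.30
rate = rows / seconds
print(round(rate, 4))  # matches the 574217.6285 reported by MySQL
```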

File size:

-rw-r--r-- 1 root  root  8.5G Nov 25 20:05 134-136.txt
-rw-r----- 1 mysql mysql 8.6K Nov 25 20:44 user_summary.frm
-rw-r----- 1 mysql mysql 3.5G Nov 25 20:51 user_summary_main_229_1_1d_B_0.tokudb

The source file is 8.5 GB, while the resulting TokuDB data file is 3.5 GB, a compression to a little less than half the original size. Extrapolating from this test, writing the full 2 billion rows can be completed in roughly 58 minutes, which meets the actual requirement. For comparison, on a machine with good disk IO (SSD or high-speed cloud disk) and memory comparable to the data size, loading the same order of magnitude into InnoDB takes a little over 3 hours; in other words, for the same data, InnoDB takes roughly 4 times as long as TokuDB. The file sizes also differ markedly for the same 2 billion rows:

-rw-r----- 1 mysql mysql  35G Nov 25 23:29 user2_main_26a_1_1d_B_0.tokudb
-rw-r----- 1 mysql mysql 176G Nov 26 03:32 user5.ibd

The InnoDB file (176 GB) is about 5 times the size of the TokuDB file (35 GB).
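A quick back-of-the-envelope check of these figures, assuming the sizes and timings quoted above:

```python
# All inputs are the numbers reported in the article.
rate = 200_000_000 / (5 * 60 + 48.30)    # measured load rate, ~574k rows/s
est_minutes = 2_000_000_000 / rate / 60  # extrapolated time for 2B rows
compression = 3.5 / 8.5                  # .tokudb file size vs. source text
size_ratio = 176 / 35                    # InnoDB .ibd vs. TokuDB data file
print(round(est_minutes), round(compression, 2), round(size_ratio, 1))
```

The extrapolation gives about 58 minutes for 2 billion rows, a compression ratio of roughly 0.41, and an InnoDB-to-TokuDB size ratio of about 5, consistent with the text.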

Test conclusion:

Using TokuDB in a cloud environment with 8 cores, 8 GB of memory, and a 500 GB high-speed cloud disk, 570,000 writes per second can easily be achieved, as confirmed by multiple test runs.

In addition, a few more test scenarios may be useful for reference. If an auto-increment primary key is used in TokuDB, the write speed drops noticeably, because the primary key column has no value in the input file. Writing the same 200 million rows into a table with an auto-increment primary key:

root@localhost [zst]> CREATE TABLE `user3` (
    ->   `user_id` bigint(20) unsigned NOT NULL COMMENT 'user id / phone number',
    ->   `weight` varchar(5) DEFAULT NULL COMMENT 'weight (KG)',
    ->   `level` varchar(20) DEFAULT NULL COMMENT 'weight class',
    ->   `beat_rate` varchar(12) DEFAULT NULL COMMENT 'failure rate',
    ->   `level_num` int(10) DEFAULT NULL COMMENT 'same-level count',
    ->   `id` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   PRIMARY KEY (`id`),
    ->   UNIQUE KEY `uk_user_id` (`user_id`)
    -> ) ENGINE=TokuDB;
Query OK, 0 rows affected (0.03 sec)

root@localhost [zst]> LOAD DATA INFILE '/u01/work/134-136.txt'
    -> INTO TABLE user3 (user_id, weight, level, beat_rate, level_num);
Query OK, 200000000 rows affected (22 min 43.62 sec)
Records: 200000000  Deleted: 0  Skipped: 0  Warnings: 0

When the table has an auto-increment primary key, TokuDB's Bulk Loader cannot be used for the load; the write degenerates into the equivalent of single-row inserts, which is why it is so much slower.
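The slowdown can be quantified from the two timings; a small sketch, using the figures from the runs above:

```python
# Compare the two LOAD DATA runs: Bulk Loader vs. auto-increment PK path.
rows = 200_000_000
bulk_secs = 5 * 60 + 48.30     # user_summary: Bulk Loader fast path
single_secs = 22 * 60 + 43.62  # user3: degenerates to single inserts
slowdown = single_secs / bulk_secs
print(int(rows / single_secs), round(slowdown, 1))
```

This works out to roughly 146,000 rows per second for the auto-increment table, about a 3.9x slowdown relative to the bulk load.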

Regarding the prerequisites for the TokuDB Bulk Loader: the target table must be empty, and it can also be used with auto-increment columns, including when they already carry values. In practice, if the auto-increment column already has values, consider removing the auto-increment attribute and replacing it with a unique index; this skips some of the auto-increment handling logic and lets TokuDB run faster. Also note that the Bulk Loader trades compression for write speed, so the resulting files compress less well.
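As a sketch of that suggestion, using the `user3` table from the example above (a hypothetical illustration, not from the original tests; verify on a copy first), removing the auto-increment attribute and demoting the primary key to a unique key might look like:

```sql
-- Hypothetical: strip AUTO_INCREMENT from `id` and replace the primary key
-- with a unique key, so the auto-increment handling logic is avoided.
ALTER TABLE user3
  MODIFY `id` bigint(20) NOT NULL,
  DROP PRIMARY KEY,
  ADD UNIQUE KEY `uk_id` (`id`);
```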

That is all of "How to achieve 570,000 MySQL writes per second". Thank you for reading! I hope the content shared here has helped you; if you would like to learn more, welcome to follow the industry information channel!
