How to understand MySQL's Insert buffer

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces how MySQL's Insert Buffer works. Many people run into these questions in practice, so let's walk through them step by step. I hope you read carefully and get something out of it!

We know that rows are inserted into the clustered index in primary-key order, so its data pages are written sequentially. For nonclustered (secondary) indexes, however, leaf-node inserts are no longer sequential.

For example, consider inserting into the following table:

create table tab (
    id int auto_increment,
    name varchar(30),
    primary key (id),
    key (name)
) engine=innodb default charset=utf8;

Name is a non-unique column, so inserts require discrete access to the nonclustered index pages, and insert performance degrades there. This is not a defect of the index on the name column, though: the nature of the B+ tree dictates that nonclustered index inserts are discrete.
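To make the discreteness concrete, here is a hypothetical sequence of inserts into the table above. The id values (clustered index) grow sequentially, but the name values arrive in no particular order, so the corresponding entries land on scattered pages of the key(name) index:

```sql
-- id: 1, 2, 3, 4 => sequential writes in the clustered index.
-- name: 'zhang', 'alice', 'ming', 'bob' => scattered positions
-- in the secondary index, i.e. random page access.
INSERT INTO tab (name) VALUES ('zhang');
INSERT INTO tab (name) VALUES ('alice');
INSERT INTO tab (name) VALUES ('ming');
INSERT INTO tab (name) VALUES ('bob');
```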

To address the poor random-write performance of nonclustered indexes, the InnoDB storage engine introduced the insert buffer (enhanced in 5.5 and renamed the change buffer).

1 What is the insert buffer?

InnoDB uses the insert buffer to "deceive" the database: for non-unique secondary indexes, a modification does not update the index leaf page in real time. Instead, several updates to the same page are merged into a single update, turning random IO into sequential IO. This avoids the performance loss caused by random IO and improves the database's write performance.
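You can check whether this buffering is enabled on your server. A quick sketch (innodb_change_buffering exists as of MySQL 5.5; its value controls which operations are buffered):

```sql
-- Possible values: all / inserts / deletes / changes / purges / none.
-- 'all' is the default in modern 5.5+ versions.
SHOW VARIABLES LIKE 'innodb_change_buffering';
```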

1.1 Principle

1. First check whether the secondary index page to be updated is already in memory; if so, update it directly.

2. If not, do not read the page; record the change in the Insert Buffer instead, and merge it with the actual leaf pages of the non-unique index later, according to the scheduling rules of the Master Thread.

1.2 Scheduling rules of the Master Thread

A. Active merge [performed periodically by the InnoDB main thread; user threads do not notice it]

Active merge:

Principle: the active merge is decided by the InnoDB main thread (srv_master_thread). If the I/O that occurred in the past 1 second is less than 5% of the system's I/O capacity, the thread actively performs an insert buffer merge; the number of pages merged is 5% of the system's I/O capacity, and the pages are read using async IO.

In addition, every 10 seconds an insert buffer merge is triggered unconditionally; the number of pages merged is still 5% of the system's I/O capacity.
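The "system I/O capacity" in the 5% rule corresponds to the innodb_io_capacity setting. A rough sketch (the 10-page figure below assumes the default capacity of 200):

```sql
-- Default is 200; with the 5% rule from the text, an active merge
-- would process roughly 200 * 5% = 10 insert buffer pages per pass.
SHOW VARIABLES LIKE 'innodb_io_capacity';

-- On faster storage you can raise the capacity, which also raises
-- the number of pages merged per pass.
SET GLOBAL innodb_io_capacity = 400;
```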

Steps:

1. The main thread issues async IO requests to read the index pages that need to be merged.

2. After the async IO completes, the I/O handler thread performs the merge.

B. Passive merge [performed in the user thread; the user can feel the performance impact of the merge]

Passive merge:

Situation 1:

An insert causes the page to run out of space, so the page must be split. Because the insert buffer works only on a single page and cannot buffer a page split [the page is already in memory], the page is passively merged. Similarly, an update can leave the page with insufficient space, a purge can leave the page empty, and so on. In short: if the current operation causes a page split or page merge, it triggers a passive merge.

Situation 2:

An insert for which, for various other reasons, the insert buffer optimization returns false; when the page really does have to be read, a passive merge must be performed. Unlike Situation 1, here the page is on disk and must be read into memory.

Situation 3:

During an insert buffer operation, the insert buffer is found to be too large and must be compressed. In this case a passive merge is forced, and insert operations are not allowed.

2 Why must the index be non-unique?

Because:

1. The primary key is the unique identifier of the row. Applications generally write rows in increasing primary-key order, so insertion into the clustered index is generally sequential and does not need random IO.

2. A write to a unique index must check whether the record already exists: before modifying the index, the index page containing the record must be read to determine whether the value is unique. Since the page has to be read anyway (with random IO), the Insert Buffer is pointless for unique indexes; it is effective only for non-unique indexes.
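As an illustration, here is a hypothetical variant of the earlier table where the secondary index is declared unique, which disqualifies it from insert buffering:

```sql
-- Hypothetical table for comparison: UNIQUE KEY (name) forces a
-- page read on every insert to check for duplicates, so the insert
-- buffer cannot help it. Only the plain KEY (name) of the original
-- table tab can be buffered.
CREATE TABLE tab_uniq (
    id   INT AUTO_INCREMENT,
    name VARCHAR(30),
    PRIMARY KEY (id),
    UNIQUE KEY (name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```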

3 How to view the insert buffer

We can view insert buffer information with show engine innodb status\G:

-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 2920 merges
merged operations:
 insert 23858, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0

seg size shows that the current insert buffer is 2 * 16KB = 32KB in size; free list len is the length of the free list; size is the number of pages of merged records; merges is the number of merge operations.

Under merged operations:

insert is the number of records inserted; the delete mark and delete counts are both 0.
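Besides parsing the status text, similar counters are exposed as rows in INFORMATION_SCHEMA on MySQL 5.6+ (a sketch; the exact metric names available depend on your version):

```sql
-- Metrics such as ibuf_size and ibuf_merges mirror the
-- "INSERT BUFFER AND ADAPTIVE HASH INDEX" section of the status output.
SELECT NAME, COUNT
FROM information_schema.INNODB_METRICS
WHERE NAME LIKE 'ibuf%';
```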

4 Change buffering: an enhancement of the insert buffer

Change buffering is a feature added in MySQL 5.5. It is an enhancement of the insert buffer: the insert buffer is effective only for inserts, while change buffering is effective for insert, delete, update (delete + insert), and purge. When the data of a secondary index block is modified and that block is not in the buffer pool, the modification is cached in the change buffer. When the index block is later read into the buffer pool by an index scan, it is merged with the cached modifications and then written back to disk. The goal is the same: to reduce the performance loss caused by random IO. To be clear: turn random IO into sequential IO as much as possible.
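The set of buffered operations, and how much memory the buffer may use, are tunable. A sketch (innodb_change_buffer_max_size exists as of MySQL 5.6; 25 is its default):

```sql
-- Restrict buffering to inserts only, i.e. the original
-- insert buffer behavior:
SET GLOBAL innodb_change_buffering = 'inserts';

-- Buffer inserts, delete-marks, and purges (the default):
SET GLOBAL innodb_change_buffering = 'all';

-- Cap the change buffer at 25% of the buffer pool (MySQL 5.6+):
SET GLOBAL innodb_change_buffer_max_size = 25;
```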

This is the end of "How to understand MySQL's Insert buffer". Thank you for reading!
