The InnoDB storage engine contains an optional module called doublewrite. It is controlled by the parameter innodb_doublewrite (a value of 0 means it is disabled), and its activity can be viewed with show status like '%Innodb_dblwr%'. What doublewrite brings to the InnoDB storage engine is reliability of data pages. The following analysis of doublewrite explains how it guarantees that reliability.
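As a quick check before looking at the statistics, the enable/disable state can be read from the system variables with a standard MySQL command (doublewrite is ON by default):

mysql> show variables like 'innodb_doublewrite';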
I. Doublewrite application scenarios
We know that the InnoDB data page size is generally 16KB and that the page is also the smallest unit in which MySQL accesses data, but the operating system cannot guarantee that a data page is written atomically. In other words, if the database goes down while a page is being written, it is possible that only part of the page (for example, the first 8KB) has reached disk. This situation is called a partial page write, and it can lead to data loss.
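The 16KB figure can be confirmed on a given instance with a standard query (innodb_page_size defaults to 16384 bytes):

mysql> show variables like 'innodb_page_size';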
You may ask: can't the data be recovered from the redo log? The answer is yes and no, and it splits into two cases. 1. The database goes down but the physical files are intact: recovery through the redo log works. 2. The database goes down while a page is being flushed to disk, a partial page write occurs, and the on-disk copy of that page is left corrupted: the data cannot be recovered through the redo log alone. Why not? We must be clear that the redo log records physical operations on pages. For example, a redo record might say "page number xx, offset 800, write the record 'this is abc'". If the page itself is corrupted, applying that redo record is meaningless. It is in exactly this special case that doublewrite comes in handy.
II. Doublewrite architecture and workflow
Doublewrite consists of two parts: an in-memory doublewrite buffer of 2MB, and 128 consecutive pages in the shared tablespace (ibdata) on disk, i.e. two extents, which also add up to 2MB. The doublewrite workflow is as follows:
1. When one of the flushing mechanisms (the master thread, a checkpoint, etc.) triggers a flush of dirty pages from the buffer pool, the pages are not written directly to the data files. Instead, they are first copied into the in-memory doublewrite buffer via the memcpy function, and then written from the doublewrite buffer to the doublewrite area of the shared tablespace in two sequential chunks of 1MB each.
2. fsync is called immediately afterwards so that the doublewrite pages are durably on disk.
Because the doublewrite pages are stored contiguously, this write to disk is sequential and therefore fast. Once the doublewrite step is complete, the dirty pages are written to their actual tablespace files, and those writes are discrete. The cooperation of the modules is shown in the figure below (strictly, the first step is that the dirty page generates redo records in the log buffer and the log buffer is then written to the redo log file; the figure draws this as a direct connection to simplify the intermediate steps):
To view the operation of doublewrite, you can execute the command:
mysql> show global status like 'innodb_dblwr%'\G
*************************** 1. row ***************************
Variable_name: Innodb_dblwr_pages_written
        Value: 61932183
*************************** 2. row ***************************
Variable_name: Innodb_dblwr_writes
        Value: 15237891
2 rows in set (0.00 sec)
The output above shows that doublewrite has written a total of 61932183 pages over 15237891 doublewrite operations, and this pair of numbers is worth analyzing. As mentioned earlier, with doublewrite enabled every dirty-page flush must pass through the doublewrite area first, and that area consists of two contiguous extents on disk, each made up of contiguous pages. An extent holds at most 64 pages, so a single doublewrite I/O can write at most 64 pages. On my system the ratio of Innodb_dblwr_pages_written to Innodb_dblwr_writes is about 4, far below 64, which suggests that the write pressure on this system is not high.
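For reference, the same ratio can be computed with a single query on MySQL 5.7 and later, where these counters are exposed through performance_schema.global_status (a sketch; on older versions the counters live in information_schema instead):

-- average number of pages written per doublewrite operation
select
  (select variable_value from performance_schema.global_status
    where variable_name = 'Innodb_dblwr_pages_written')
  /
  (select variable_value from performance_schema.global_status
    where variable_name = 'Innodb_dblwr_writes') as avg_pages_per_dblwr_write;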
III. Crash recovery
If the operating system crashes while a page is being written to disk (as shown in the figure above), then during recovery the InnoDB storage engine can find a recent copy of that page in the doublewrite area of the shared tablespace, copy it back into the tablespace file, and then apply the redo log to complete recovery. Because a copy exists, there is no need to worry about whether the data page in the tablespace is corrupted.
IV. Suggestions
Since doublewrite was introduced into the InnoDB storage engine, it has been possible to disable it with the parameter skip_innodb_doublewrite, but it is strongly recommended that you keep doublewrite enabled in order to avoid partial page writes. Of course, some file systems provide their own protection against partial writes, the ZFS file system for example; in that case doublewrite does not need to be enabled.
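For completeness, a minimal my.cnf sketch for disabling doublewrite on such a file system might look like the following (the exact option spelling varies slightly between MySQL versions, so treat this as illustrative rather than definitive):

[mysqld]
# Disable the doublewrite buffer -- only safe when the file system itself
# guarantees atomic page writes (e.g. ZFS); otherwise leave it enabled.
skip_innodb_doublewrite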