0. Introduction
What is an ibdata1 file?
ibdata1 is the file that backs the InnoDB system tablespace. It contains InnoDB table metadata (the data dictionary), undo records, the change buffer, and the doublewrite buffer. If the innodb_file_per_table option is enabled, the file does not necessarily contain the data of all tables: the data and indexes of newly created tables are stored in each table's own .ibd file rather than in the system tablespace.
This file inevitably grows over time, and the innodb_autoextend_increment option controls how much it grows by each time it auto-extends; the default is 8MB.
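As a quick sanity check, the auto-extend increment and the location of the file can be read from the server itself. A minimal sketch using standard SHOW VARIABLES statements (nothing instance-specific is assumed):

-- How much ibdata1 grows by on each auto-extension (value is in MB, default 8).
SHOW VARIABLES LIKE 'innodb_autoextend_increment';

-- Where the data directory lives, so the size of ibdata1 can be inspected on disk.
SHOW VARIABLES LIKE 'datadir';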
What causes the ibdata1 file to get bigger and bigger?
ibdata1 stores data, indexes, various buffers and so on, and is among the most important data in MySQL. As the database grows, the file inevitably grows with it. Left alone for a long time it keeps getting bigger, which makes managing logs and disk space increasingly awkward, and it is hard to know where to start; splitting data across separate databases is one way out. Below we look at how to deal with such a situation.
What should you do when the size of InnoDB's shared tablespace file ibdata1 surges?
1. Background of the problem
Folks using MySQL/InnoDB may have run into this annoyance: for no obvious reason the ibdata1 file has grown huge, and there seems to be no way to make it shrink back, rather like a man's belly after the age of 30. Sweat... fortunately mine has not grown out yet, hoho~
Before we officially begin, we need to know what the ibdata1 file is for.
The ibdata1 file is the shared tablespace file of the InnoDB storage engine, which mainly stores the following data:
- data dictionary
- doublewrite buffer
- insert buffer / change buffer
- rollback segments
- undo space
- foreign key constraint system tables
In addition, when the option innodb_file_per_table = 0, the data and indexes of InnoDB tables are also stored in the ibdata1 file. Starting from version 5.6.7 the default initial size of ibdata1 is 12MB (before that it was 10MB), and the related option is innodb_data_file_path. For example, this is what I generally set:
innodb_data_file_path = ibdata1:1G:autoextend
Of course, whether or not innodb_file_per_table = 1 is enabled, the ibdata1 file must exist, because it has to store the data the InnoDB engine depends on that was listed above, in particular the rollback segments and undo space, which are the biggest contributors to ibdata1 growth and which we discuss in more detail below.
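To see which mode an existing instance is running in, the two relevant options can be checked directly; a minimal sketch (plain SHOW VARIABLES statements, no assumptions about the instance):

-- Is each table stored in its own .ibd file, or inside the shared tablespace?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- How is the shared tablespace defined (file name, initial size, autoextend)?
SHOW VARIABLES LIKE 'innodb_data_file_path';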
2. Cause analysis
We know that InnoDB supports MVCC, which, similar to ORACLE, is implemented with undo logs and redo logs. When a transaction modifies a row, InnoDB stores a copy of the old version of that row in the undo log; if another transaction then wants to modify the same row, the latest visible version is again copied into the undo log, and so on. If N transactions are modifying the data, up to N historical versions may need to be kept (this differs slightly from ORACLE: InnoDB's undo log is not purely a physical copy of blocks but mainly a logical log; see the InnoDB source code or other references for details). These undo log records have to wait until the transaction finishes, and can only be deleted once they are no longer visible to any other transaction, as determined by the transaction isolation levels in effect. This cleanup work is called purge (purge does more than delete expired undo log records; we will come back to the rest later).
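To illustrate why old versions pile up, consider two client sessions. This is only a minimal sketch under assumptions: a hypothetical table t created for the demonstration and the default REPEATABLE READ isolation level.

-- Setup (run once):
CREATE TABLE t (id INT PRIMARY KEY, val INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 0);

-- Session A: open a long-lived consistent snapshot and leave it uncommitted.
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT val FROM t WHERE id = 1;

-- Session B: keep changing the row; each change copies the previous version
-- of the row into the undo log before the new value is written.
UPDATE t SET val = val + 1 WHERE id = 1;
COMMIT;

-- Until session A commits or rolls back, purge cannot discard those old
-- versions, because A's snapshot may still need to read them.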
So here is the problem: if a transaction needs to read historical versions of a large amount of data and, for some reason, it cannot commit or roll back for a long time, while a large number of other transactions keep modifying data after it started, then the undo log generated by those newer transactions cannot be deleted and piles up. This is one of the main reasons for the increase in ibdata1 file size. The classic scenario is a large data backup, which is why we recommend putting backup work on a dedicated slave server, not on the master.
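Long-lived transactions of this kind are easy to spot from the data dictionary; a minimal sketch, assuming MySQL 5.5 or later where information_schema.INNODB_TRX is available:

-- List open InnoDB transactions, oldest first; anything that has been running
-- for hours (a forgotten session, a long backup) is suspect.
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started;

-- Map trx_mysql_thread_id back to a client connection if it needs to be killed.
SHOW PROCESSLIST;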
Another situation is that InnoDB's purge work cannot keep up, because of poor I/O performance or for other reasons, and expired undo log records are not removed in time; this is another main reason for the increase in ibdata1 file size. This scenario typically occurs when the server hardware is weak and has not been upgraded in step with business growth.
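The size of the purge backlog can be watched over time. A minimal sketch, assuming MySQL 5.6 or later where information_schema.INNODB_METRICS exists (on older versions, read the "History list length" line in SHOW ENGINE INNODB STATUS instead):

-- Amount of undo history still waiting to be purged; a value that keeps
-- climbing means purge is falling behind.
SELECT NAME, `COUNT`
FROM information_schema.INNODB_METRICS
WHERE NAME = 'trx_rseg_history_len';

-- The same figure appears as "History list length" under TRANSACTIONS in:
SHOW ENGINE INNODB STATUS\G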
A relatively rare case is a bug in early MySQL versions running on 32-bit systems: once the purge thread found that the total amount of undo log pending purge exceeded a certain threshold, it simply gave up and stopped purging altogether. We ran into this most often with 32-bit MySQL 5.0, and have seen the file grow to more than 100GB. It later took a lot of effort to migrate those instances to 64-bit systems, which finally solved the problem.
Finally, if the innodb_data_file_path option is never adjusted, or is set very small in the first place, the ibdata1 file will inevitably have to keep extending itself. Even the reference my.cnf provided by Percona does not raise this value, which makes me wonder: is it a deliberate back door, like the xx I often complain about, left so it can easily be "optimized" for customers later? (my mind is too dark, bad ~)
To sum up, there are several reasons for the surge in ibdata1 file size:
- a large number of concurrent transactions, generating a large amount of undo log;
- old transactions that stay uncommitted for a long time, keeping a large amount of old undo log alive;
- poor file I/O performance, so purge progresses slowly;
- the initial size was set too small and quickly used up;
- the purge bug on 32-bit systems.
As a slight aside, another popular database, PostgreSQL, takes a different approach: it stores each historical version of a row together with the live data in the same tablespace, so it does not have this particular problem and transaction rollback in PostgreSQL is very fast; the trade-off is that vacuum work has to be done regularly (see PostgreSQL's MVCC implementation for details; I may not be entirely accurate here).
3. Suggested solutions
Having read the analysis above, some readers may think this is easy: just shrink the ibdata1 file and reclaim the tablespace, and the job is done. Unfortunately, to date InnoDB has no way to reclaim or shrink the ibdata1 shared tablespace. Once the belly of ibdata1 has grown, the only ways to restore the original size are to back up all the data and restore it into a freshly re-initialized instance, or to back up each independent tablespace in turn and restore it into a new instance; there is no better way.
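For reference, the dump-and-reinitialize route usually looks roughly like the following. This is only a sketch under assumptions: a single instance, a logical backup with mysqldump is acceptable, and paths and option values are placeholders; the exact steps depend on your version and environment.

# 1. Take a full logical backup while the server is still running.
mysqldump --all-databases --routines --events --single-transaction > full_dump.sql

# 2. Stop the server and move the old data directory (ibdata1, ib_logfile*,
#    database directories) out of the way; keep it until the restore is verified.
#    e.g. mv /var/lib/mysql /var/lib/mysql.old

# 3. Adjust my.cnf first: innodb_file_per_table = 1 and a sensible
#    innodb_data_file_path such as ibdata1:1G:autoextend.

# 4. Re-initialize the data directory and start the server
#    (mysql_install_db on 5.5/5.6, mysqld --initialize on 5.7+).

# 5. Restore the backup into the fresh instance.
mysql < full_dump.sql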
Of course, this problem is not unpreventable. Based on the causes described above, the corresponding suggestions and countermeasures are:
- Upgrade to 64-bit systems and adopt independent undo tablespaces; version 5.6 supports separate undo tablespaces, so you no longer have to worry about them bloating the ibdata1 file.
- When initializing an instance, set ibdata1 to at least 1GB.
- Increase the number of purge threads, for example innodb_purge_threads = 8.
- Improve file I/O capability; move to SSD as soon as you can.
- Commit transactions promptly and do not let them back up. autocommit = 1 is on by default, which avoids forgetting about a transaction that stays uncommitted for a long time; if the development framework sets autocommit = 0, check it, and make sure there is an explicit COMMIT or ROLLBACK when the transaction ends.

A sketch of the corresponding my.cnf settings follows below.
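Pulling those recommendations together, a my.cnf fragment might look roughly like this. It is only a sketch under assumptions (MySQL 5.6+, values sized for a mid-range server); in particular, innodb_undo_tablespaces can only be set when the instance is first initialized.

[mysqld]
# Give the shared tablespace a reasonable starting size so it is not
# extended 8MB at a time from the very beginning.
innodb_data_file_path = ibdata1:1G:autoextend
# Keep table data and indexes out of ibdata1.
innodb_file_per_table = 1
# Separate undo tablespaces (5.6+; must be set at initialization time).
innodb_undo_tablespaces = 3
# More purge threads so expired undo log is cleaned up faster.
innodb_purge_threads = 8
# Commit automatically unless a transaction is explicitly opened.
autocommit = 1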
Summary
That is the whole content of this article. I hope it provides some reference and learning value for your study or work. If you have any questions, feel free to leave a message, and thank you for your support.