2025-04-12 Update · From: SLTechnology News & Howtos > Database
Scenario
1. We maintain a citizen system that has an ID-card-number field.
2. The business code guarantees that two duplicate ID numbers are never written (if the business cannot guarantee this, rely on a unique index in the database as the constraint).
3. A commonly used SQL query: SELECT name FROM CUser WHERE id_card = 'XXX'
4. Build an index on the ID-card-number field.
The ID card number is a relatively long value, so for performance reasons it is not recommended as the primary key. Should we choose an ordinary index or a unique index?
Assume that no value in field k is duplicated.
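For concreteness, the examples that follow can be reproduced with a minimal table like the sketch below; the table name t, column names, and index names are assumptions based on the queries in this article, not taken from the original system:

```sql
-- Hypothetical table for the k = 5 examples; pick ONE index definition on k.
CREATE TABLE t (
  id INT NOT NULL PRIMARY KEY,
  k  INT NOT NULL,
  KEY idx_k (k)           -- ordinary index
  -- UNIQUE KEY uk_k (k)  -- alternative: unique index
) ENGINE = InnoDB;
```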
Query process
1. Query statement: SELECT id FROM T WHERE k = 5
2. Query process
Starting from the root of the B+ tree, search layer by layer down to the leaf node, i.e., the data page at the lower right of the figure above; inside the data page, binary search locates the specific record.
3. For an ordinary index
Find the first record that meets the condition (5,500), then read the next record, and keep going until the first record that does not satisfy k = 5 is found.
4. For a unique index
Because the index is defined as unique, the search stops as soon as the first record that meets the condition is found.
Performance difference
1. The performance difference is very small.
2. InnoDB reads and writes data in units of data pages; the default page size is 16KB.
3. When a record is needed, InnoDB does not read that single record from disk by itself; it reads the whole data page containing it into memory.
4. So by the time the k = 5 record is found, its data page is already in memory.
5. For an ordinary index, only one more pointer move and one more comparison are needed, and the CPU cost of that is very low.
If the k = 5 record happens to be the last record on its data page, fetching the next record does require reading the next data page, but the probability of this is very low: for an integer-field index, one 16KB data page (compact format) can hold roughly 745 entries.
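The 16KB figure can be checked on a running server; innodb_page_size is a standard InnoDB variable, and 16384 bytes is its default:

```sql
-- 16384 bytes = 16KB; the page size is fixed when the instance is initialized
SHOW VARIABLES LIKE 'innodb_page_size';
```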
Change buffer
1. When a data page needs to be updated and the page is in memory, it is updated directly.
2. If the data page is not in memory, then, without compromising data consistency, InnoDB caches the update operations in the change buffer instead of reading the data page from disk (a random read). The next time a query needs that data page, the page is read into memory and the change-buffer operations for that page are applied to it (merge).
3. The change buffer is persistent data: it has a copy in memory and is also written to disk.
4. Recording the update in the change buffer first reduces random disk reads and speeds up statement execution.
5. In addition, reading a data page into memory occupies the buffer pool; using the change buffer avoids taking up that memory and improves memory utilization.
6. The change buffer uses memory inside the buffer pool, so it cannot grow without bound; its size is controlled by the parameter innodb_change_buffer_max_size.
```sql
-- Default is 25 (percent of the buffer pool); the maximum is 50
mysql> SHOW VARIABLES LIKE '%innodb_change_buffer_max_size%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| innodb_change_buffer_max_size | 25    |
+-------------------------------+-------+
```
Merge
1. Merge: applying the operations in the change buffer to the original data page.
2. The execution process of merge
Read the data page from disk into memory (the old version of the page); find all the change-buffer records for this data page (there may be several) and apply them in order, producing the new version of the page; then write the redo log, which covers both the change to the data page and the change to the change buffer.
3. When merge finishes, the data page in memory and the change buffer's corresponding disk area have not yet been written back to disk, so they are dirty pages.
Other mechanisms later flush these dirty pages to their physical disk pages.
4. Trigger times
Merge is triggered when the data page is accessed, when a background thread merges periodically, and when the database is shut down.
Usage conditions
1. For a unique index, every update must first check whether the operation violates the uniqueness constraint.
2. Updates to a unique index therefore cannot use the change buffer; only ordinary indexes can.
For example, to insert (4,400) you must first check whether the record already exists in the table. That check requires reading the data page into memory, and once the page is in memory it is cheaper to update it there directly, so there is no point in writing to the change buffer.
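Sketched in SQL, under the assumption that the table is t with a unique index on k and the indexed value being inserted is k = 4 (landing between existing keys 3 and 5):

```sql
-- With UNIQUE KEY uk_k (k), InnoDB must verify that no k = 4 already exists,
-- which forces the relevant data page into memory before the insert;
-- a duplicate would fail with ER_DUP_ENTRY rather than being buffered.
INSERT INTO t (id, k) VALUES (400, 4);
```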
Usage scenarios
1. The more changes the change buffer has recorded for a data page before that page is merged, the greater the benefit.
2. For write-heavy, read-light workloads, the probability that a page is accessed immediately after being written is very low, so the change buffer performs best,
for example, in billing and logging systems.
3. If the business pattern is to query a record immediately after writing it,
then although the update operation is recorded in the change buffer, the data page is read from disk right away, which triggers a merge. The random read is not avoided, and the cost of maintaining the change buffer is added on top.
Update process
Insert a record (4,400).
The target page is in memory
For a unique index: find the position between 3 and 5, check that there is no conflict, and insert the value. For an ordinary index: find the position between 3 and 5 and insert the value. The performance difference is negligible.
The target page is not in memory
1. For a unique index: the data page must be read into memory to check that there is no conflict before the value is inserted.
This is a random disk read, which is very expensive.
2. For an ordinary index: the update operation is simply recorded in the change buffer.
The random disk read is avoided, and the performance improvement is significant.
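Change buffer activity on a running server can be observed through InnoDB's status output (a standard MySQL command; the quoted section name is how InnoDB labels it):

```sql
-- The "INSERT BUFFER AND ADAPTIVE HASH INDEX" section reports the
-- change buffer's size, free-list length, and merge counters.
SHOW ENGINE INNODB STATUS\G
```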
Index selection
1. There is little difference in query performance between an ordinary index and a unique index; the main consideration is update performance, so an ordinary index is recommended.
2. Scenario in which disabling the change buffer is recommended:
when every write is immediately followed by a query of the same record. The control parameter is innodb_change_buffering.
```sql
mysql> SHOW VARIABLES LIKE '%innodb_change_buffering%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| innodb_change_buffering | all   |
+-------------------------+-------+
-- Valid values (>= 5.5.4): none / inserts / deletes / changes / purges / all
```
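If the workload really is write-then-read-immediately, both change-buffer parameters can be adjusted at runtime; they are dynamic global variables in MySQL:

```sql
SET GLOBAL innodb_change_buffering = 'none';    -- stop buffering new changes
SET GLOBAL innodb_change_buffer_max_size = 10;  -- or just cap its share of the buffer pool (percent)
```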