How to Slim Down the Database Transaction Log

2025-01-16 Update From: SLTechnology News&Howtos


This article explains how to slim down the database transaction log. The content is simple, clear, and easy to learn and understand. Please follow the editor's train of thought as we study how to slim down the database transaction log.

In most SQL Server environments, especially OLTP environments, a bottleneck in the database's transaction log performance means transactions take longer to complete. Many people then blame the I/O subsystem, reasoning that it cannot keep up with the volume of transaction log records generated by the workload, but that may not be the case.

Transaction log write wait time

For the transaction log, write wait time can be monitored with sys.dm_io_virtual_file_stats and the WRITELOG wait type in sys.dm_os_wait_stats. If the write wait time is higher than you expect, the usual assumption is that the I/O subsystem cannot keep up, but that does not necessarily mean you need to upgrade your I/O subsystem.
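The two DMVs above can be combined into a quick check. The following is a minimal sketch: it computes the average write latency per log file and pulls the cumulative WRITELOG waits (all counters are cumulative since the last server restart).

```sql
-- Average write latency per transaction log file
SELECT DB_NAME(vfs.database_id)         AS database_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       vfs.io_stall_write_ms * 1.0
         / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
WHERE mf.type_desc = 'LOG'
ORDER BY avg_write_latency_ms DESC;

-- Cumulative WRITELOG waits since the last restart
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';
```

A high average write latency here is the symptom the article describes; the sections below look at whether redundant logging, rather than slow storage, is the cause.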

In many systems you will find a considerable proportion of redundant log records. If you can reduce these unnecessary log records, you correspondingly reduce the volume of transaction log written to disk, which translates into shorter write waits and therefore shorter transaction completion times.

There are two main reasons for redundant logging:

Unused nonclustered indexes

An increase in index fragmentation

Unused indexes

Whenever a record is inserted into a table, a record is also inserted into each nonclustered index defined on that table (note that filtered indexes can be an exception), which means extra log records are generated. The same is true when records are deleted: the corresponding rows in each nonclustered index must also be deleted. Updates likewise modify the rows in each nonclustered index. These operations are necessary to keep every nonclustered index a true reflection of its underlying table, but if a nonclustered index is never used in any query plan, the operations and log records generated to maintain it are pure overhead. Worse, as a nonclustered index fragments it needs regular maintenance, and that maintenance generates even more log records that are not needed at all.

An unused index may exist because you created it on the table by mistake, because it was created from SQL Server's missing-index DMV recommendations, because it was created by the Database Engine Tuning Advisor, or because a business change means a previously used index is no longer touched.

In any case, these unused indexes should be removed to reduce the load. The first step is to determine which indexes are unused, which can be done with the sys.dm_db_index_usage_stats DMV.
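A sketch of that first step, assuming the usual pattern of joining sys.indexes to the usage DMV: it lists nonclustered indexes with no seeks, scans, or lookups since the last restart but with ongoing maintenance cost (user_updates). Because the counters reset on restart, judge the results over a full business cycle before dropping anything.

```sql
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       ius.user_seeks, ius.user_scans, ius.user_lookups,
       ius.user_updates         -- maintenance cost with no read benefit
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
       ON ius.object_id   = i.object_id
      AND ius.index_id    = i.index_id
      AND ius.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND ISNULL(ius.user_seeks, 0) + ISNULL(ius.user_scans, 0)
      + ISNULL(ius.user_lookups, 0) = 0
ORDER BY ius.user_updates DESC;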

Index fragmentation

Many people think of index fragmentation only as something that forces queries to read more data pages. In fact, fragmentation also causes redundant log records to be generated, and the reason lies in how fragmentation arises.

Fragmentation is caused by page splits. Put simply, a page split occurs when a record must be inserted into an index page that does not have enough free space. The process goes like this:

A new index page is allocated and formatted

Move half of the records from a page full of data to a new page

The new page is linked to the index structure

New records are inserted into the page

All of these operations generate log records, and as you can imagine, far more of them than inserting a single record normally would.
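Page split activity can be observed server-wide through the Access Methods performance counter exposed in sys.dm_os_performance_counters; a minimal sketch:

```sql
-- Cumulative page splits since the last server restart
-- (despite the "/sec" name, cntr_value is a running total;
--  sample it twice and subtract to get a rate)
SELECT RTRIM(object_name)  AS counter_object,
       RTRIM(counter_name) AS counter_name,
       cntr_value          AS page_splits_since_restart
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec';
```

A steadily climbing value during your insert-heavy periods is a hint that the extra logging described above is happening.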

The first step in reducing this extra cost is to remove the unused indexes, so they no longer generate page splits at all. The second step is to find which of the remaining indexes have become fragmented and decide how to defragment them, which means analyzing each index to determine its degree of fragmentation. The system function sys.dm_db_index_physical_stats can report fragmentation for a particular index, for all indexes on a table or indexed view, for all indexes in a database, or for all indexes in all databases. For partitioned indexes, it also reports fragmentation per partition.

The algorithm for calculating fragmentation in SQL Server 2005 is more accurate than the one in SQL Server 2000, so reported fragmentation values appear higher. For example, in SQL Server 2000, if pages 11 and 13 of a table are in the same extent but page 12 is not, the table is not treated as fragmented; yet reading those two pages requires two physical I/O operations, so SQL Server 2005 counts the table as fragmented.

Rebuilding or reorganizing an index with a fill factor reserves some empty space on each index page for subsequently inserted records, which reduces page splits and therefore the extra log records they generate. (Please refer to another article: finding unused database indexes.)
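A sketch of the analysis step using sys.dm_db_index_physical_stats in LIMITED mode, which is the cheapest scan level; the page_count filter is an assumed threshold to skip indexes too small to be worth treating:

```sql
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 100   -- assumed cutoff: tiny indexes rarely matter
ORDER BY ips.avg_fragmentation_in_percent DESC;
```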

Of course, there is no such thing as a free lunch: anything that helps one side can hurt another. Using a fill factor lowers page density, and page density that is too low brings its own performance problems, while a fill factor that is too high brings page splits back. This is a trade-off that must be weighed against your specific environment, for example whether it is OLTP or OLAP.
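The trade-off above is applied when you rebuild or reorganize. A minimal sketch; the index and table names are placeholders, and 90 is only an assumed starting fill factor to be tuned per workload:

```sql
-- Rebuild, reserving ~10% free space on each leaf page
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- Lighter alternative for moderate fragmentation; reuses the
-- fill factor already stored with the index
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
```

A common rule of thumb is to reorganize at roughly 5-30% fragmentation and rebuild above that, but verify against your own environment.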

Reducing transaction log write waits does not always require upgrading your I/O subsystem. Simple index analysis in the database can significantly reduce the volume of transaction log records generated, and with it the write wait time.

Of course, this is only one factor that affects transaction log performance; only with a deeper understanding of how the transaction log works will you discover the others.

Thank you for reading. The above is the content of "how to slim down the database transaction log"; after studying this article, I believe you have a deeper understanding of the topic. Specific use still needs to be verified in practice.
