

Why You Should Not Shrink Database Files

2025-03-29 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains why you should not shrink database files. The content is straightforward and clearly organized, and I hope it helps resolve any doubts you have about the topic.

Regarding shrinking data files: although I wrote the data-file shrink code myself while at Microsoft, I never had the chance to rewrite it to make it a pleasant operation. I really don't like shrink.

Now, don't confuse shrinking transaction log files with shrinking data files. Shrinking the log may be necessary when its growth gets out of control, or to remove excessive VLF fragmentation (see Kimberly's excellent articles on this). Even so, shrinking the transaction log should be a rare operation and should never be part of your regular maintenance plan.
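Where a log shrink genuinely is needed (as a one-off, not a scheduled job), a minimal sketch looks like the following; the logical log file name and the 1MB target size here are assumptions for illustration, not from the original post.

```sql
-- Rare, one-off transaction log shrink (NOT for regular maintenance).
-- The logical file name is an assumption; check yours first with:
--   SELECT name, type_desc FROM sys.database_files;
USE [DBMaint2008];
GO
DBCC SHRINKFILE (N'DBMaint2008_log', 1);  -- target size in MB
GO
```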

Shrinking data files should be performed even more rarely. Here's why: data file shrink causes massive index fragmentation. Let me demonstrate with a simple script you can run yourself. The script below creates a data file, creates a 10MB "filler" table, creates a 10MB "production" table with a clustered index, and then analyzes the fragmentation of the new clustered index.

USE [master];
GO

IF DATABASEPROPERTYEX (N'DBMaint2008', N'Version') IS NOT NULL
    DROP DATABASE [DBMaint2008];
GO

CREATE DATABASE DBMaint2008;
GO
USE [DBMaint2008];
GO

SET NOCOUNT ON;
GO

-- Create the 10MB filler table at the 'front' of the data file
CREATE TABLE [FillerTable] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'filler');
GO

-- Fill up the filler table
INSERT INTO [FillerTable] DEFAULT VALUES;
GO 1280

-- Create the production table, which will be 'after' the filler table in the data file
CREATE TABLE [ProdTable] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'production');
CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable] ([c1]);
GO

INSERT INTO [ProdTable] DEFAULT VALUES;
GO 1280

-- Check the fragmentation of the production table
SELECT [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'DBMaint2008'), OBJECT_ID (N'ProdTable'), 1, NULL, 'LIMITED');
GO

The execution result is as follows

Before shrinking the data file, the logical fragmentation of the clustered index is approximately 0.4%. (In my test the result was 0.54%, as shown in the figure above, which is still close to 0.4%.)

Now I drop the filler table, run the shrink-data-file command, and re-analyze the fragmentation of the clustered index.

-- Drop the filler table, creating 10MB of free space at the 'front' of the data file
DROP TABLE [FillerTable];
GO

-- Shrink the database
DBCC SHRINKDATABASE ([DBMaint2008]);
GO

-- Check the index fragmentation again
SELECT [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'DBMaint2008'), OBJECT_ID (N'ProdTable'), 1, NULL, 'LIMITED');
GO

Here is my result; for the author's result, see the original post. As the author put it:

Wow! After the shrink, the logical fragmentation is almost 100%. The shrink operation *completely* fragmented the index, removing any chance of efficient range scans on it by ensuring that all range-scan readahead I/Os will be single-page I/Os.

What causes this? A shrink operation works on a single file at a time, and uses the GAM bitmaps to find the highest page allocated in the file. It then moves that page as far toward the front of the file as it can, and repeats. In the example above, this completely reversed the order of the clustered index, taking it from perfectly defragmented to perfectly fragmented.

The same code is used by DBCC SHRINKFILE, DBCC SHRINKDATABASE, and auto-shrink, and they are equally bad. As well as introducing index fragmentation, data file shrink generates a lot of I/O, consumes a lot of CPU, and generates *loads* of transaction log, because everything it does is fully logged.

Data file shrink should never be part of regular maintenance, and you should never enable the auto-shrink property. I tried to have it removed from the product for SQL Server 2005 and SQL Server 2008; the only reason it still exists is for backwards compatibility. Don't fall into the trap of creating a maintenance plan that rebuilds all indexes and then tries to reclaim the space the rebuilds consumed by shrinking the data files: that is a zero-sum game in which you generate a lot of transaction log for no real gain in performance.
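To find out which of your databases have the auto-shrink property enabled, and to turn it off, a sketch like the following works; the database name here is taken from the demo above.

```sql
-- List any databases that still have auto-shrink enabled
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Disable auto-shrink on the demo database
ALTER DATABASE [DBMaint2008] SET AUTO_SHRINK OFF;
```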

So when might you actually need to run a shrink? For example, when you've deleted a large proportion of a very large database and the database is unlikely to grow, or when you need to empty a data file before removing it.

The method I like to recommend is as follows:

Create a new filegroup

Move all affected tables and indexes into the new filegroup using the CREATE INDEX... WITH (DROP_EXISTING = ON) ON <new filegroup> syntax, to move the tables and remove fragmentation from them at the same time

Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)

Basically you need to provision some more space before you can shrink the old files, but it's a much cleaner mechanism.
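A minimal sketch of these three steps, using the demo database from earlier; the filegroup name [NewFG], the file name, path, and size are assumptions for illustration, not from the original post.

```sql
-- 1) Create a new filegroup with its own data file
--    (name, path, and size are assumed values)
ALTER DATABASE [DBMaint2008] ADD FILEGROUP [NewFG];
ALTER DATABASE [DBMaint2008] ADD FILE (
    NAME = N'DBMaint2008_NewFG',
    FILENAME = N'C:\SQLData\DBMaint2008_NewFG.ndf',
    SIZE = 20MB) TO FILEGROUP [NewFG];
GO

-- 2) Rebuild the clustered index into the new filegroup,
--    moving the table and removing fragmentation in one step
USE [DBMaint2008];
GO
CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable] ([c1])
    WITH (DROP_EXISTING = ON) ON [NewFG];
GO

-- 3) The old filegroup's files can then be emptied and removed
--    (or shrunk way down if it's the primary filegroup)
```

Note that step 2 must be repeated for each affected table and index, which is why you need to provision the extra space up front.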

If you absolutely have no choice and must run a data file shrink, be aware that it will cause index fragmentation, and you should take steps to remove that fragmentation afterwards if it would otherwise cause performance problems. The only way to remove index fragmentation without causing the data file to grow again is to use DBCC INDEXDEFRAG or ALTER INDEX... REORGANIZE. These commands require only a single extra 8KB page of workspace, instead of building a whole new index as an index rebuild operation does.
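For the demo index above, removing the shrink-induced fragmentation without regrowing the file would look like this:

```sql
USE [DBMaint2008];
GO

-- REORGANIZE compacts and reorders leaf-level pages in place,
-- needing only a single extra 8KB page rather than a full new index
ALTER INDEX [prod_cl] ON [ProdTable] REORGANIZE;
GO

-- Check the fragmentation afterwards
SELECT [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats (
    DB_ID (N'DBMaint2008'), OBJECT_ID (N'ProdTable'), 1, NULL, 'LIMITED');
GO
```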

Bottom line: avoid running data file shrink at all costs!

So, if you are still using scheduled jobs to regularly shrink data files or databases, or still have the auto-shrink property enabled, please correct this misunderstanding now!

That's everything in "Why You Should Not Shrink Database Files". Thank you for reading! I hope the content shared here helps you; if you want to learn more, welcome to follow the industry information channel!
