March 11, 2016 at 11:52 am
I am wondering if I can get some help with this script. I have a database with more than 550 data files (of course they are all different sizes). I am trying to shrink all the files down to 80% of their current size. I have a script to generate the DBCC SHRINKFILE commands, but I am still struggling with shrinking each file to a certain percentage. Can I please get some help?
IF OBJECT_ID('tempdb..#TEST') IS NOT NULL DROP TABLE #TEST;

CREATE TABLE #TEST (SQLQUERY VARCHAR(1000));

INSERT INTO #TEST
SELECT 'USE [' + d.name + N']' + CHAR(13) + CHAR(10)
     + 'DBCC SHRINKFILE (N''' + mf.name + N''', 0)' + CHAR(13) + CHAR(10) + 'GO'  -- 0 is a placeholder; this is where the 80% target needs to go
FROM sys.master_files mf
JOIN sys.databases d
  ON mf.database_id = d.database_id
WHERE d.database_id > 4
  AND mf.type_desc = 'ROWS';

SELECT * FROM #TEST;
March 11, 2016 at 1:27 pm
...I am trying to shrink all the data files down to 80%...
Warning. Incoming thrashing. Prepare for impact!
March 11, 2016 at 2:59 pm
I'm not sure what you're struggling with exactly. Do you know how to calculate the percentage value?
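If that's the sticking point, here's a rough, untested sketch of the arithmetic: size in sys.master_files is reported in 8 KB pages, and DBCC SHRINKFILE takes a target in whole MB.
-- Untested sketch: work out 80% of each data file's current size in MB.
-- size is in 8 KB pages, so divide by 128 to get MB.
SELECT mf.name,
       mf.size / 128                    AS current_size_mb,
       CAST(mf.size / 128 * 0.8 AS INT) AS target_size_mb   -- 80% of current size
FROM sys.master_files mf
WHERE mf.type_desc = 'ROWS';
The target_size_mb value is what would go after the comma in each generated DBCC SHRINKFILE command.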
March 11, 2016 at 3:29 pm
Chitown (3/11/2016)
I am wondering if I can get some help with this script. I have a database with more than 550 data files (of course they are all different sizes). I am trying to shrink all the files down to 80% of their current size. I have a script to generate the DBCC SHRINKFILE commands, but I am still struggling with shrinking each file to a certain percentage. Can I please get some help?
IF OBJECT_ID('tempdb..#TEST') IS NOT NULL DROP TABLE #TEST;

CREATE TABLE #TEST (SQLQUERY VARCHAR(1000));

INSERT INTO #TEST
SELECT 'USE [' + d.name + N']' + CHAR(13) + CHAR(10)
     + 'DBCC SHRINKFILE (N''' + mf.name + N''', 0)' + CHAR(13) + CHAR(10) + 'GO'  -- 0 is a placeholder; this is where the 80% target needs to go
FROM sys.master_files mf
JOIN sys.databases d
  ON mf.database_id = d.database_id
WHERE d.database_id > 4
  AND mf.type_desc = 'ROWS';

SELECT * FROM #TEST;
Is there actually any free space available in those files? I know you probably know, but I have to say it out loud... shrink doesn't do compression.
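If you want to check before running anything, something along these lines will show allocated versus used space. Just a sketch, and it has to be run in each database, since FILEPROPERTY only sees the current database's files:
-- Sketch only: compare allocated vs. used space per data file in the current database.
SELECT name,
       size / 128.0                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';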
--Jeff Moden
Change is inevitable... Change for the better is not.
March 11, 2016 at 3:37 pm
Have you considered limiting what you're going to do to truncate only?
DBCC SHRINKFILE (file_name, TRUNCATEONLY)
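If you go that route, it drops straight into your command generator in place of the target size. Rough sketch, reusing your #TEST table:
-- Sketch: generate TRUNCATEONLY shrink commands instead of a target size.
INSERT INTO #TEST
SELECT 'USE [' + d.name + N']' + CHAR(13) + CHAR(10)
     + 'DBCC SHRINKFILE (N''' + mf.name + N''', TRUNCATEONLY)' + CHAR(13) + CHAR(10) + 'GO'
FROM sys.master_files mf
JOIN sys.databases d
  ON mf.database_id = d.database_id
WHERE d.database_id > 4
  AND mf.type_desc = 'ROWS';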
March 11, 2016 at 8:19 pm
I hope you're planning on doing some serious index maintenance after all this shrinking. Shrinking data files tends to fragment indexes to 99%, so it will be necessary.
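A quick way to see how bad it gets afterwards is something like this (just a sketch; run it per database, and LIMITED mode keeps it reasonably cheap):
-- Sketch: list indexes in the current database that are badly fragmented after the shrink.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;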
March 12, 2016 at 9:52 am
Ed Wagner (3/11/2016)
I hope you're planning on doing some serious index maintenance after all this shrinking. Shrinking data files tends to fragment indexes to 99%, so it will be necessary.
And, as a reminder, when you do that, the free space in the file group will expand to at least the size of the largest index you have, which is usually the clustered index of the largest table you have. I say "at least" because there's usually also a 20% or so "extra charge" depending on where the sort is done and whether or not it's done in an ONLINE or OFFLINE fashion.
There is a "trick" that can be done for full reindexing of even the clustered indexes to remove virtually all unnecessary free space from file groups before making them READ_ONLY (particularly useful for partitioned temporal WORM tables) and it could possibly be done in this case but, although not difficult, it IS complicated.
Are such extraordinary measures warranted in this case? If the file groups aren't going to be set to READ_ONLY, then probably not, as any index rebuild maintenance (not reorg) will cause the file group to grow for any index over 128 extents (8 MB), as will any index page splits.
Of course, the OP still hasn't responded as to whether or not there is actually any free space available in the file groups nor what "80%" actually means here.
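For anyone wanting to size that up front, a rough sketch like this (run in the database in question) lists the biggest indexes, which is roughly the free space a rebuild will need, plus the sort overhead mentioned above:
-- Sketch: largest indexes in the current database, as a rough guide to rebuild space.
SELECT TOP (5)
       OBJECT_NAME(p.object_id)    AS table_name,
       i.name                      AS index_name,
       SUM(au.total_pages) / 128.0 AS size_mb
FROM sys.partitions p
JOIN sys.indexes i
  ON i.object_id = p.object_id
 AND i.index_id  = p.index_id
JOIN sys.allocation_units au
  ON au.container_id = p.partition_id
GROUP BY p.object_id, i.name
ORDER BY SUM(au.total_pages) DESC;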
--Jeff Moden
Change is inevitable... Change for the better is not.
March 12, 2016 at 10:27 am
Jeff Moden (3/12/2016)
Ed Wagner (3/11/2016)
I hope you're planning on doing some serious index maintenance after all this shrinking. Shrinking data files tends to fragment indexes to 99%, so it will be necessary.
And, as a reminder, when you do that, the free space in the file group will expand to at least the size of the largest index you have, which is usually the clustered index of the largest table you have. I say "at least" because there's usually also a 20% or so "extra charge" depending on where the sort is done and whether or not it's done in an ONLINE or OFFLINE fashion.
Heh - I had something similar originally, but I removed it before posting. The surcharge of rebuilds is data file space and the surcharge of reorganizations is transaction log space. If space is what the OP is trying to save, they're going to end up paying for it in the end.
Now, whether that surcharge of space is levied during the index maintenance that occurs immediately after the shrink, or later, after the users complain about horrible performance and index maintenance is finally done, is up to the OP. Either way, the surcharge must be paid or the users will have to live with performance problems.