March 23, 2017 at 1:47 pm
Evening All,
Is DBCC SHRINKFILE cumulative? By that I mean: if it takes 50 hours to reclaim 1TB of unused space in the DB, and at 25 hours I press stop, will it stop having reclaimed 500GB, or will it roll back the shrink and leave the full 1TB still unreclaimed?
I need to reclaim lots of space that was eaten up by accident. I am aware of the fragmentation side effect. There isn't enough downtime to complete this in a single move.
cheers
Alex
March 23, 2017 at 2:01 pm
alex.sqldba - Thursday, March 23, 2017 1:47 PM
Just shrink it by smaller increments - it's less disruptive to do it that way.
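For example (logical file name and sizes here are made up - each call lowers the target a bit further):
DBCC SHRINKFILE (N'MyDataFile', 990000); -- target size in MB
DBCC SHRINKFILE (N'MyDataFile', 980000);
DBCC SHRINKFILE (N'MyDataFile', 970000); -- ...and so on until you reach your goal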
Sue
March 23, 2017 at 2:32 pm
To answer the question you asked: I have found that if I stop DBCC SHRINKFILE before it ends, it doesn't do anything. I still have the same amount of free space as I did before I started. It appears to be all or nothing.
-SQLBill
March 23, 2017 at 3:55 pm
Sue_H - Thursday, March 23, 2017 2:01 PM
Genius, Sue. Genius! Or I am a fool.
Or both.
March 23, 2017 at 4:14 pm
alex.sqldba - Thursday, March 23, 2017 3:55 PM
Ha - only a fool if you think I'm a genius. 🙂
As Bill said, it will just be the same size if you kill it. But in most respects it really doesn't matter, as you just don't want something like that running for a long time: it starts causing problems with locking, blocking, lots of I/O, logging, etc.
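If you do have one running, you can at least watch its progress from another session - sys.dm_exec_requests populates percent_complete for shrink operations:
select session_id, command, percent_complete, estimated_completion_time
from sys.dm_exec_requests
where command like 'Dbcc%'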
Sue
March 23, 2017 at 6:07 pm
Sue_H - Thursday, March 23, 2017 4:14 PM
Yeah, I get what you mean -- we can manage about 15-20 hours of downtime over a weekend, but that's not enough to shrink it that much, plus we also need the space back! I will try nibbling away at it, perhaps in a loop... so worst case, when I stop the loop I'll only lose the last iteration.
March 23, 2017 at 6:32 pm
Something like this.
use databasename -- run this script against the database you want to shrink
-- these two should be set as required
declare @shrink_target int = 10000 -- adapt as required - the sql below raises it to max(@shrink_target, current space used + 1%)
declare @db_file_name sysname = N'filename' -- logical name of the database file to shrink
-- the ones below don't need changing, but @shrink_step may behave better with higher values depending on the size of the database
-- try it in dev before settling on a value to run in Prod
declare @sql nvarchar(600)
declare @shrink_step int = 5000 -- size in MB to reduce per iteration - try 10000, 15000, 20000, 30000
declare @current_size int
select @current_size = floor(f.size / 128.0)
     , @shrink_target = case
                          when floor((t.used_size / 128.0 + .9) * 1.01) > @shrink_target
                          then floor((t.used_size / 128.0 + .9) * 1.01)
                          else @shrink_target
                        end
from sys.database_files f
outer apply (select sum(a.used_pages) as used_size -- pages used in this file's filegroup
             from sys.allocation_units a
             where a.data_space_id = f.data_space_id) t
where f.name = @db_file_name
-- loop while estimated current size minus the shrink step is still above the target size
while (@current_size - @shrink_step) > @shrink_target
begin
    set @current_size = @current_size - @shrink_step
    set @sql = 'dbcc shrinkfile (N''' + @db_file_name + ''', ' + convert(varchar(20), @current_size) + ') with no_infomsgs'
    print @sql
    exec sp_executesql @sql
end
-- execute one last time if the current size is still bigger than the target size
if (@current_size > @shrink_target)
begin
    set @current_size = @shrink_target
    set @sql = 'dbcc shrinkfile (N''' + @db_file_name + ''', ' + convert(varchar(20), @current_size) + ') with no_infomsgs'
    print @sql
    exec sp_executesql @sql
end
Edit: removed usage of fileproperty(name, 'SpaceUsed') as it returns incorrect values in many situations; replaced it with values from sys.allocation_units.
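For a quick sanity check before and after a run, the same sys.allocation_units approach can report allocated vs. used space per file (note used_pages is tracked per filegroup, so this is an approximation when a filegroup holds several files):
select f.name
     , f.size / 128 as allocated_mb
     , t.used_size / 128 as used_mb
from sys.database_files f
outer apply (select sum(a.used_pages) as used_size
             from sys.allocation_units a
             where a.data_space_id = f.data_space_id) t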
March 23, 2017 at 11:50 pm
frederico_fonseca - Thursday, March 23, 2017 6:32 PM
Something like this.
declare @sql nvarchar(300)
declare @shrink_step int = 20000 -- size in MB to reduce
declare @shrink_target int = 150000 -- adapt as required - but sql below will set it to current space used + 10%
declare @db_file_name sysname = N'test'
declare @current_size int
select @current_size = convert(float, size) * (8192.0 / 1048576)
     , @shrink_target = case
                          when floor((convert(float, fileproperty(name, 'SpaceUsed')) * (8192.0 / 1048576) + .9) * 1.1) > @shrink_target
                          then floor((convert(float, fileproperty(name, 'SpaceUsed')) * (8192.0 / 1048576) + .9) * 1.1)
                          else @shrink_target
                        end
from sysfiles
where name = @db_file_name
--select @current_size
--     , @shrink_target
while (@current_size - @shrink_step) > @shrink_target
begin
    set @current_size = @current_size - @shrink_step
    set @sql = 'dbcc shrinkfile (N''' + @db_file_name + ''', ' + convert(varchar(20), @current_size) + ') with no_infomsgs'
    print @sql
    exec sp_executesql @sql
end
Just a "simplicity" tip... instead of doing this...* (8192.0 / 1048576)
... to convert pages to MB, just divide by 128 or 128.0.
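They give the same answer because a page is 8192 bytes and a MB is 1048576 bytes, and 8192 / 1048576 = 1 / 128:
-- both columns return the same value
select size * (8192.0 / 1048576) as size_mb_long
     , size / 128.0 as size_mb_short
from sys.database_files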
--Jeff Moden
Change is inevitable... Change for the better is not.
March 24, 2017 at 2:47 am
Aw, nice one. Cheers, guys. It was pleasing to wake up and find someone had written it for me!
Now come to London and collect your beers 🙂
March 24, 2017 at 3:00 am
alex.sqldba - Friday, March 24, 2017 2:47 AM
Pop up to Dublin - the Guinness is better here 🙂
March 24, 2017 at 4:53 am
frederico_fonseca - Friday, March 24, 2017 3:00 AM
Bit on the expensive side though, isn't it?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉