June 5, 2014 at 12:27 pm
Hi all,
We have dropped a huge table from one of our databases and I want to get some of that space back. However, the shrink of the data file takes forever, and it consumes extremely high amounts of CPU and reads. So I cancelled it and tried shrinking by just 5GB; same thing... I cancelled it after 8 hours and noticed high resource usage. So I tried shrinking it by 5MB and the same thing happened! I tried shrinking other databases on the same server by 5MB and the same thing happened with those as well. Does anyone know what is going on? I have never seen this behavior before.
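For reference, this is the kind of command I'm running (database/file names and sizes here are just placeholders):

USE MyBigDB;
-- target size is in MB; e.g. asking for 5GB back from a ~250GB file
DBCC SHRINKFILE (N'MyBigDB_Data', 250880);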
June 5, 2014 at 11:50 pm
Shrinks SUCK!!! They suck no matter what you try to do - unless you can do a TRUNCATEONLY shrink and get back the space you need, but how often does THAT happen!?! :hehe:
More than a few times I have actually migrated the entire database to a new one to get around the severe issues you can have with shrink. Obviously this is an outage event, but there are a few tricks you can do to minimize actual downtime. Still a PITA though.
The BIG question is: do you REALLY REALLY need that space back? Will the database grow large again, and if so, in how much time? If you can, the best option is to leave things as they are.
Oh, and since you have run multiple shrinks already you MUST check your index fragmentation!! Even running it for short periods of time can cause issues with that.
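Something along these lines will show you the worst of it (just a sketch - tune the thresholds to your environment):

SELECT
    OBJECT_NAME(ips.[object_id])     AS table_name,
    i.[name]                         AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.[object_id] = ips.[object_id]
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- only meaningful fragmentation
  AND ips.page_count > 1000                   -- skip tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;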
Oh, one more thing - did you do a file IO stall and wait stats analysis while the shrink was running? Maybe your server/IO just can't keep up??
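If you didn't, something like this gives a quick read on per-file IO stalls (a sketch; numbers are cumulative since the last restart):

SELECT
    DB_NAME(vfs.database_id)                              AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0)    AS avg_read_stall_ms,
    vfs.num_of_writes,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.[file_id] = vfs.[file_id]
ORDER BY vfs.io_stall_read_ms DESC;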
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 6, 2014 at 2:35 am
Yes, shrinking is not advisable, but I can see the need in some cases...
The performance of a shrink depends on the allocation within the file itself. It's beyond my knowledge how to query that information, so when I needed to shrink (only occasionally) I started with small amounts. If it ran rather fast, I increased the amount, up to a maximum of 5GB per step.
And I always performed the shrink on a specific file and never on the whole database.
Duration of some shrinks I've executed:
500GB database file down to 300GB (in steps of 5GB): 70+ hours
1500GB database file down to 750GB (in one giant step; the database had just been compressed and was not in use): 15 minutes
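For what it's worth, this is roughly how I script the stepped approach (file name and sizes are placeholders):

-- shrink one data file in 5GB steps down to a final target
DECLARE @target_mb  int = 307200;   -- final target, e.g. 300GB
DECLARE @step_mb    int = 5120;     -- 5GB per pass
DECLARE @current_mb int;

SELECT @current_mb = size / 128     -- size is in 8KB pages
FROM sys.database_files
WHERE [name] = N'MyDB_Data';

WHILE @current_mb > @target_mb
BEGIN
    DECLARE @next_mb int =
        CASE WHEN @current_mb - @step_mb < @target_mb
             THEN @target_mb
             ELSE @current_mb - @step_mb
        END;

    DBCC SHRINKFILE (N'MyDB_Data', @next_mb);

    SELECT @current_mb = size / 128
    FROM sys.database_files
    WHERE [name] = N'MyDB_Data';
END;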
June 6, 2014 at 2:58 am
My approach is similar to Hansi's. If you have enough disk space and the shrink is only 5 GB, just let it be.
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
June 6, 2014 at 8:32 am
shahgols (6/5/2014)
Hi all, We have dropped a huge table from one of our databases and I want to get some of that space back. However, the shrink of the data file takes forever, and it consumes extremely high amounts of CPU and reads. So I cancelled it and tried shrinking by just 5GB; same thing... I cancelled it after 8 hours and noticed high resource usage. So I tried shrinking it by 5MB and the same thing happened! I tried shrinking other databases on the same server by 5MB and the same thing happened with those as well. Does anyone know what is going on? I have never seen this behavior before.
Are you trying during off-peak hours? Shrinking is also a highly I/O-intensive activity. If you try after business hours, you may find that the same task takes less time to complete, depending on how busy the server is at that time.
By the way, be sure to defragment your indexes afterwards. Shrinking creates a lot of fragmentation inside the database, so it is a good idea to rebuild indexes once the shrink completes.
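For example (table/index names are made up; pick rebuild vs. reorganize based on fragmentation level):

-- heavy-handed but thorough: rebuild everything on a table
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;

-- lighter touch for mildly fragmented indexes
ALTER INDEX IX_MyBigTable_SomeColumn ON dbo.MyBigTable REORGANIZE;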
June 6, 2014 at 12:47 pm
Hi all, thanks for the replies. Yes, I am aware of the issues with shrinking, but this is one of those cases where if I don't shrink the database, 30GB of space will go to waste year after year. I ended up moving the data to a new database and switching databases.
I just found it odd that a 5MB shrink would take hours, especially on a database that has no writes to it.
I checked wait stats; nothing alarming there. I didn't check the IO - I didn't think moving 5MB would cause much IO.
I just think something is not right; I just don't know what, or how to go about finding it.
June 6, 2014 at 2:26 pm
Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 6, 2014 at 3:21 pm
TheSQLGuru (6/6/2014)
Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:
I've found that, especially in small to midsized companies, 30GB can be a lot of wasted space and can mean the difference between something working or not. I've also found that the adage of "Disk space is cheap" isn't always true. "It Depends". 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
June 6, 2014 at 4:21 pm
Jeff Moden (6/6/2014)
TheSQLGuru (6/6/2014)
Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:
I've found that, especially in small to midsized companies, 30GB can be a lot of wasted space and can mean the difference between something working or not. I've also found that the adage of "Disk space is cheap" isn't always true. "It Depends". 🙂
That's very true!
June 8, 2014 at 12:18 pm
I can just hear our storage guy shouting "YOU WANT AN EXTRA 30Gb???!!!" 🙂
'Only he who wanders finds new paths'
June 9, 2014 at 2:44 am
Yeah, shrinks suck. We have the fun task of pulling all of our image blob data out to disk for a few clients coming up. For the most recent client this was around 500GB out of their 570GB database. It's kind of pointless for us to do it without shrinking the data file to reclaim the space, and I might give some consideration to simply copying the database out to a new one after reading some of the replies here. It might save us a reasonable amount of time.
While it's not exactly related to shrinking (the shrink took around 8 hours in the scenario above), we have found a way to reclaim the database space used by the blobs without churning large amounts of log or relying on GHOST CLEANUP.
Without posting our scripts, it basically goes like this (a rough sketch follows the list):
- Replicate the source table structure in a new table.
- Insert the contents of the source into the new table (in our case we NULL the blob data going into the second table).
- Drop the original table.
- Rename the new table/keys.
- Create index(es) on the new table.
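Something like this, with made-up table/column names (our real scripts also handle constraints, triggers, permissions, etc.):

-- 1. replicate the structure in a new table
CREATE TABLE dbo.Documents_New
(
    id       int            NOT NULL,
    doc_name nvarchar(260)  NOT NULL,
    doc_blob varbinary(max) NULL
);

-- 2. copy the rows, NULLing the blob column on the way in
INSERT INTO dbo.Documents_New (id, doc_name, doc_blob)
SELECT id, doc_name, NULL
FROM dbo.Documents;

-- 3. drop the original and rename the new table into place
DROP TABLE dbo.Documents;
EXEC sp_rename 'dbo.Documents_New', 'Documents';

-- 4. recreate keys/indexes on the renamed table
ALTER TABLE dbo.Documents
    ADD CONSTRAINT PK_Documents PRIMARY KEY CLUSTERED (id);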
June 9, 2014 at 10:42 am
Now check your indexes for fragmentation.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
June 12, 2014 at 8:37 am
Back up the DB,
Change the DB recovery model to SIMPLE,
Alter the DB to SINGLE_USER mode,
Shrink in 5 GB "chunks" (rough T-SQL below).
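In rough T-SQL (names, sizes, and the backup path are placeholders; remember to set everything back afterwards):

BACKUP DATABASE MyDB TO DISK = N'X:\Backups\MyDB.bak';

ALTER DATABASE MyDB SET RECOVERY SIMPLE;
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- repeat with progressively smaller targets, ~5 GB at a time
DBCC SHRINKFILE (N'MyDB_Data', 250880);   -- target size in MB

ALTER DATABASE MyDB SET MULTI_USER;
ALTER DATABASE MyDB SET RECOVERY FULL;    -- or whatever it was before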
June 12, 2014 at 6:01 pm
TheSQLGuru (6/6/2014)
Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:
I had to fight the storage guys for 2 weeks to get 10GB for my tempdb drive. Ridiculous.
June 12, 2014 at 10:03 pm
For 30 GB, there's no way a shrink should take that long. :cool:
I suspect something is blocking the shrink from completing.
I would suggest checking for anything blocking the shrink with the query below:
-- who is waiting on what, and who is blocking them
SELECT
    t1.resource_type                 AS [lock type],
    DB_NAME(t1.resource_database_id) AS [database],
    t1.resource_associated_entity_id AS [blk object],
    t1.request_mode                  AS [lock req],
    t1.request_session_id            AS [waiter sid],
    t2.wait_duration_ms              AS [wait time],
    (SELECT [text]
     FROM sys.dm_exec_requests AS r
     CROSS APPLY sys.dm_exec_sql_text(r.sql_handle)
     WHERE r.session_id = t1.request_session_id)  AS waiter_batch,
    (SELECT SUBSTRING(qt.[text],
                      (r.statement_start_offset / 2) + 1,
                      ((CASE WHEN r.statement_end_offset = -1
                             THEN LEN(CONVERT(nvarchar(max), qt.[text])) * 2
                             ELSE r.statement_end_offset
                        END) - r.statement_start_offset) / 2)
     FROM sys.dm_exec_requests AS r
     CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS qt
     WHERE r.session_id = t1.request_session_id)  AS waiter_stmt,
    t2.blocking_session_id           AS [blocker sid],
    (SELECT [text]
     FROM sys.sysprocesses AS p   -- legacy view; also catches sleeping blockers
     CROSS APPLY sys.dm_exec_sql_text(p.sql_handle)
     WHERE p.spid = t2.blocking_session_id)       AS blocker_stmt
FROM sys.dm_tran_locks AS t1
JOIN sys.dm_os_waiting_tasks AS t2
    ON t1.lock_owner_address = t2.resource_address;
Also, instead of shrinking at the database level, I would suggest doing it at the file level.
Regards
Hema.,