May 13, 2010 at 6:11 am
We have a 500GB primary data file with over 140GB of free space due to the deletion of archival data. I have heard pros and cons about using SHRINKFILE, so I would like to get opinions on it, and a time estimate for shrinking a file this size if anyone has one.
Thanks,
Keith
May 13, 2010 at 6:42 am
How long is a piece of string?
Seriously, there's no way, just from the size of the DB, to make any educated guess as to the time to shrink. Depends on IO subsystem, amount and distribution of free space, type of pages (LOB, data, row overflow), usage of server, etc.
Before you consider shrinking, is that space likely to be reused in a reasonable amount of time? If so, rather don't shrink. Free space won't make your backups larger or slow down queries, and if you shrink, you'll need to rebuild all your indexes afterwards.
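If you do go ahead anyway, the rough shape is a targeted DBCC SHRINKFILE followed by index rebuilds; the file name, target size and table below are placeholders, not anything specific to your database:

-- Shrink the data file to a target size in MB, leaving headroom for new data
DBCC SHRINKFILE (N'MyDatabase_Data', 400000);

-- The shrink moves pages around and fragments indexes badly, so rebuild afterwards
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;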
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 13, 2010 at 6:53 am
Thanks for the quick response.
We would rather not shrink the file since, as you said, all indexes would have to be rebuilt.
This is an older system, 32 bit Windows.
The reason we were considering it is that the disk it's on will fill up in the near future.
Autogrow is set to 10%. We could reduce this so the file does not grow.
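Something like this would switch it to a fixed increment instead (logical file name and size here are just examples):

-- Replace 10% autogrow with a fixed growth increment
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 512MB);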
One other question: at what point does the data file actually grow? Is there a way to see what percentage of the file is in use when it grows?
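We can see current allocation and used space per file with something like the query below (FILEPROPERTY reports used space in 8KB pages, so dividing by 128 gives MB), but that's just a point-in-time view:

-- Allocated vs. used space and growth settings for each file in the current DB
SELECT name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       growth,
       is_percent_growth
FROM sys.database_files;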
Keith
September 7, 2017 at 2:27 am
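For anyone who finds this later: a query along these lines shows the progress of a running shrink. sys.dm_exec_requests exposes percent_complete for DBCC SHRINKFILE, and the command column typically reads DbccFilesCompact while it runs.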
-- percent_complete is populated for shrink operations, among a few other commands
SELECT R.session_id,
       T.text,
       R.status,
       R.command, -- e.g. DbccFilesCompact for a running shrink
       DatabaseName = DB_NAME(R.database_id),
       R.cpu_time,
       R.total_elapsed_time,
       R.percent_complete,
       DATEADD(ms, R.estimated_completion_time, GETDATE()) AS estimated_end_time
FROM sys.dm_exec_requests R
CROSS APPLY sys.dm_exec_sql_text(R.sql_handle) T;