November 3, 2015 at 2:14 am
After a huge data purge I have to shrink one datafile from 2 TB to 100 GB, but this file contains a table that is constantly accessed (mostly inserts of new records).
What is the best approach to perform this shrink without causing locks on the table? I'm thinking of a step-by-step shrink
(e.g. from 2 TB to 1.8 TB, and so on).
What do you think?
November 3, 2015 at 6:46 am
The table(s) you purged all the data from — wouldn't it help if you dropped and recreated the clustered index(es)?
That way you resize the actual tables you purged, and the other tables being accessed won't be affected, as opposed to doing a database shrink, which would require you to reorganize or rebuild your indexes anyway.
I've never tried it, but I would be interested in what others think.
Hmmm... although, that still won't resize your data file... I'll be interested to hear.
November 3, 2015 at 7:34 am
There is only that table in this datafile.
Anyway, I've done the job:
I resized step by step without causing locks.
I run all the maintenance tasks (rebuild the nonclustered indexes, reorganize the PK (I can't rebuild it offline), and update the table statistics) after the shrink; it seems to work well.
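The step-by-step approach described above could be sketched roughly as below. This is only an illustration, not the poster's actual script: the database name `MyDb`, the logical file name `MyData`, and the table name `dbo.MyBusyTable` are placeholder assumptions, and the sizes mirror the 2 TB → 100 GB example from the thread.

```sql
-- Hypothetical sketch: shrink the file in ~100 GB steps instead of one
-- giant shrink. DBCC SHRINKFILE moves pages in small units, so each
-- step only blocks briefly rather than holding long table locks.
USE MyDb;

DECLARE @target_mb INT = 1800000;        -- first step: 2 TB -> ~1.8 TB
WHILE @target_mb >= 100000               -- final goal: ~100 GB
BEGIN
    DBCC SHRINKFILE (N'MyData', @target_mb);
    SET @target_mb = @target_mb - 100000;  -- next ~100 GB step
END;

-- Shrinking shuffles pages and fragments indexes, so run the
-- maintenance the poster describes afterwards:
ALTER INDEX ALL ON dbo.MyBusyTable REORGANIZE;   -- reorg (stays online)
UPDATE STATISTICS dbo.MyBusyTable;
```

Reorganize (rather than rebuild) is used here because, as noted above, the PK could not be rebuilt offline; `REORGANIZE` is always an online operation.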
November 3, 2015 at 4:26 pm
Was this purge on some form of audit table?
--Jeff Moden
Change is inevitable... Change for the better is not.
Viewing 4 posts - 1 through 4 (of 4 total)