April 27, 2009 at 3:37 pm
I don't shrink files very often, and I typically use the GUI when I need to. But in my current situation I needed to shrink about 15-25GB from each of several 40-90GB files, and the GUI proved unacceptable. Searching for ideas, the only common advice I found was to shrink the files in small chunks iteratively, using a script.
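For reference, the sort of loop those posts suggest looks roughly like this. This is my own reconstruction, not any particular article's code, and the file id, target size, and chunk size are placeholders:

-- Shrink file_id 1 down to a target size in fixed chunks (sizes are examples)
DECLARE @TargetSizeMB int, @ChunkMB int, @CurrentSizeMB int;
SET @TargetSizeMB = 40000;             -- desired final file size
SET @ChunkMB = 100;                    -- shrink increment per iteration
SELECT @CurrentSizeMB = size / 128     -- size is stored in 8 KB pages
FROM sys.database_files
WHERE file_id = 1;
WHILE @CurrentSizeMB > @TargetSizeMB
BEGIN
    SET @CurrentSizeMB = @CurrentSizeMB - @ChunkMB;
    IF @CurrentSizeMB < @TargetSizeMB SET @CurrentSizeMB = @TargetSizeMB;
    DBCC SHRINKFILE (1, @CurrentSizeMB);
END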
So I've tried this. It works. But what I've noticed is that after some initial free space is removed (the low-hanging fruit), it takes just about as long to shrink a file by 10MB or 100MB as it does to shrink it by 10GB. In my case, typically about 6-7 hrs for each shrink operation.
So for me, there is no doubt that the best way is to shrink in very large chunks. If I ever have to do this again, I'm going to attempt to perform just a single shrink per file. Shrinking 100GB from several files, the difference between 100MB and 10GB chunks for me was 1-2 days vs. a few months of iterative shrink operations, enduring long disk queues the entire time.
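In T-SQL terms that just means one statement per file, something like the following, where the logical file name and target size are placeholders, not my actual values:

DBCC SHRINKFILE (MyDB_Data, 40000);    -- shrink straight to the final size, in MB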
I'm running on pretty decent hardware: an EMC SAN, lots of disks...I don't think my experience is specific to any hardware deficiency. This server is reasonably busy most of the time--it's definitely not idle, so perhaps that plays into the decision?
Anyway, I suspect the advice I found recommending shrinking in small chunks was based on the observation that the first couple of small chunks typically complete quite rapidly. But that is shortsighted.
This is not really a question, I'm just posting my observations. Does this jibe or conflict with anyone else's experience?
April 27, 2009 at 4:09 pm
Hi Mike,
Are you running out of disk space?
If not, this can be avoided by allocating sufficient space up front. Calculate the approximate space required for the next quarter or two, based on the percentage or size by which the database grows.
Then allocate ample space, and if possible set the file to grow by a fixed size rather than by a percentage. This is the procedure we have implemented here, and it gives us real control over disk space.
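For example, something like this; the database name, file name, and sizes are placeholders, not our actual values:

-- Pre-allocate ample space, then grow by a fixed amount rather than a percentage
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 60000MB, FILEGROWTH = 512MB);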
Frequent shrinking of files is not advised.
If you do have a disk space issue and this needs to be done regularly: in my experience, I have never reclaimed space in chunks; I reclaimed it in one go, but I did this during periods of minimal or no activity, and in coordination with the application owners.
April 27, 2009 at 6:10 pm
Shrinking in smaller chunks seems almost like a cursor solution. I've never considered it, and I can't think of a situation where I would. Sure, the first couple might run relatively quickly, but as you've found, it's going to bog down as it needs to move more and more used extents with each iteration.
The difference with doing it iteratively is that you're almost certainly going to be moving some extents multiple times, whereas doing it in one chunk should (theoretically) move each used extent no more than once.
April 28, 2009 at 8:41 am
Thanks for the replies. This was not meant to be a discussion of whether or not to shrink, or of the merits of shrinking, just of how best to go about it. In my prior internet search I found maybe 10 different posts/articles all suggesting doing this in small chunks, and none suggesting the contrary. But in this forum/post it is 3 to 0 in favor of big chunks.
April 28, 2009 at 9:01 am
The problem with shrinking in large chunks is that everything is all or nothing. If you have to interrupt it, the file ends up the same size.
I have never seen a 100 MB shrink take 6-7 hours, so I would say something is very unusual about your situation. I have done many shrinks using a script to shrink the database files in small chunks in a loop, and never had the kind of problem you described.
I just ran a shrink last night to shrink 15 GB in 100 MB chunks from each of 10 databases to go from 18 GB down to 3 GB for a total of 150 GB of shrinks, and it finished in 2 hours, including running an index defrag and update statistics on the tables in the 10 databases (about 0.5 GB of data in each).
I usually use something like this script:
Shrink DB File by Increment to Target Free Space
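For anyone who can't follow the link, the idea behind that script is roughly the following. This is my paraphrase, not the original code, and the file id, increment, and free-space target are placeholders:

-- Shrink in increments until the file has no more than the target free space
DECLARE @FileID int, @IncrementMB int, @TargetFreeMB int,
        @SizeMB int, @UsedMB int, @NewSizeMB int;
SET @FileID = 1;
SET @IncrementMB = 100;                -- shrink step
SET @TargetFreeMB = 1024;              -- free space to leave in the file
SELECT @SizeMB = size / 128,           -- sizes are stored in 8 KB pages
       @UsedMB = FILEPROPERTY(name, 'SpaceUsed') / 128
FROM sys.database_files
WHERE file_id = @FileID;
WHILE @SizeMB - @UsedMB > @TargetFreeMB
BEGIN
    SET @NewSizeMB = @SizeMB - @IncrementMB;
    IF @NewSizeMB < @UsedMB + @TargetFreeMB
        SET @NewSizeMB = @UsedMB + @TargetFreeMB;
    DBCC SHRINKFILE (@FileID, @NewSizeMB);
    SELECT @SizeMB = size / 128 FROM sys.database_files WHERE file_id = @FileID;
END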
April 28, 2009 at 10:00 am
Thanks, Michael. That script is the same one upon which I based my routine.
I ran it for a few weeks using 100MB chunks. The very first couple of chunks that I sat and watched ran reasonably quickly--this was the low-hanging fruit. So I let it run. Before long, though, it slowed down. At the rate things were going, my total shrink operation was going to take a couple of months, so I started experimenting with larger chunks. That's when I observed that they took the same time to complete as the smaller ones did. I wound up finishing it all in 2 more days.
My observations are from a 750 GB DB, actively in use, with files on a LUN mounted from a decent SAN, spread across a couple hundred disks. We just deleted 100+ GB of older data from a couple of tables; 600+ GB remains in the DB. When not shrinking files, average disk queue lengths are 0.001 read and 0.05 write; when shrinking, these values are 2-4 orders of magnitude higher.
Maybe the difference between your speed and mine is that I still had 600GB left in this DB while the shrink was being done? Except perhaps for the 6hr shrinkfile duration, all other operations seem to take about the right amount of time in my DB; it performs well.
Before settling on the 100MB chunk size, did you test other sizes?
Also, regarding the "all or nothing" nature of big chunks: it seems to me that we don't lose the progress made during a large shrink operation that is aborted. If you immediately follow a long-running-then-aborted shrinkfile with a shrinkfile truncateonly, it will quickly free up whatever free space had been gathered before the initial attempt was aborted.
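Something like this, run immediately after killing the long shrink (the file id is a placeholder):

DBCC SHRINKFILE (1, TRUNCATEONLY);     -- releases the free space already at the end of the file; moves no pages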
April 29, 2009 at 6:36 am
I tried this on a SQL 2005 database migrated from SQL 2000 and had this exact problem. I wanted to do small shrinks, as the database in question is a heavily used 24x7 production database and I didn't want to start a big shrink of entirely unpredictable duration.
This article explained why in my case: a specific set of circumstances causes it on a database migrated from SQL 2000 to 2005.
Unfortunately I haven't found a way around it. Please let me know if you do!
Tim
April 29, 2009 at 9:20 am
Thanks for the link, Tim...that's a great article! It confirms some of my suspicions (e.g. that progress is not lost when we abort a long-running shrinkfile). If nothing else, just confirming this one fact has made this whole post worthwhile. As the article says, we can schedule a long-running shrinkfile during periods of low activity, killing it when time is up; progress is not lost, and eventually it will finish.
The article does not say that running shrinkfile truncateonly immediately after aborting a long shrinkfile will almost immediately cause the work done to date to be freed to the file system. But I believe this is true, and I now always do this after aborting a shrinkfile.
My DB was migrated from SQL 2000 too, and contains LOBs, so maybe this is why shrinkfile takes so long for me.
Also, for me the percent complete in the DMV is not helpful; it ramps up to a value like 95 or 99% in seconds, then stays there for hours.
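For anyone wondering, the DMV I mean is sys.dm_exec_requests; a running shrink shows up there with a command of DbccFilesCompact, something like:

SELECT session_id, command, percent_complete, total_elapsed_time
FROM sys.dm_exec_requests
WHERE command = 'DbccFilesCompact';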
If you have not done this already, I think you might want to experiment with larger chunk sizes and see whether the duration increases linearly or not. Run shrinkfile with truncateonly first to ensure the tests are fair (sort of like clearing the cache). Then run shrinkfile with a 10X and a 1X chunk size and compare durations...do they vary by 10x?
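A rough test harness would look something like this; the file id and target sizes are placeholders and assume the file is currently about 100000 MB:

DBCC SHRINKFILE (1, TRUNCATEONLY);     -- release already-freed tail space first, so the tests are fair

DECLARE @t datetime;
SET @t = GETDATE();
DBCC SHRINKFILE (1, 99900);            -- 1X: shrink by 100 MB
SELECT DATEDIFF(ss, @t, GETDATE()) AS chunk_1x_seconds;

SET @t = GETDATE();
DBCC SHRINKFILE (1, 98900);            -- 10X: shrink by a further 1 GB (1000 MB)
SELECT DATEDIFF(ss, @t, GETDATE()) AS chunk_10x_seconds;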
April 29, 2009 at 9:35 am
Thanks Mike, I'm glad the link was useful. I agree with your DMV experience; I got just the same effect.
One thing I didn't mention in the previous post: the article implies that the LOB restructuring on 2005 only happens once, and after that things will be fine. This isn't my experience; I suspect it happens every time you run the shrink.
Therefore larger chunks are theoretically better, and from my recollection this seems to be true in practice as well, although it's been a while since I did it.
Good luck
Tim