February 15, 2011 at 4:32 pm
OK, so I'm certainly no expert, and this has me stumped...
One of our SQL Server databases consists of multiple file groups (each containing a single .DB file) spread across multiple drives. One DB file, in particular, is considerably larger than the rest. This is the way it was when I "inherited" the DB, and we've lived with it thus far.
Anyhow, the DB file on this drive was hovering around 600gb in size. During a reindex last weekend, we ran out of space (for various reasons) and the reindex failed. That's fine, we can live without that for a while, but the size of this DB file ballooned to 1.2TB.
The disk usage report shows that indexes are now taking up over 80% of our total space.
So, it seems that the reindex process was using up A LOT of space and, since it failed, it wasn't able to reallocate or "free up" that space.
How can I go about fixing this? Any ideas? Thanks in advance.
February 16, 2011 at 11:11 am
What recovery model is the database in and how frequently do you take transaction log backups?
Is it possible the reindex caused the log to grow, and you're not taking log backups, which means the space in the log cannot be reused?
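A quick way to check both. This is a sketch; the database name is a placeholder:

```sql
-- Recovery model of each database
SELECT name, recovery_model_desc
FROM sys.databases;

-- How full is each transaction log?
DBCC SQLPERF (LOGSPACE);

-- If the log can't be reused, this column says why (e.g. LOG_BACKUP)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';  -- placeholder name
```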
Gethyn Ellis
www.gethynellis.com
February 17, 2011 at 5:14 am
1) Someone added a bunch of indexes? Top suspect there is the Database Tuning Advisor. A BAD thing, that, at least in the hands of the unknowing!!
2) When indexes are rebuilt, they take up extra space in the database for sorting and creation. The sort space does get freed and can be reused by other objects after the fact, but the file growth remains.
3) Someone had the index maintenance use a very low fill factor.
You mention indexes using 80% of space - how did you determine that? If it is indeed true, then I vote for 1 or 3 above.
Unless you are a very skilled DBA you might want to get some help digging into what is going on here.
Search the web (sqlskills.com I think) for sp_helpindex. Also use DMVs to check for index usage. Could be things you can eliminate.
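For the DMV check, something along these lines can surface nonclustered indexes that are written to but rarely read. A sketch only, and note the counters reset when the instance restarts, so interpret with care:

```sql
-- Nonclustered indexes in the current database, least-read first
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
ORDER BY s.user_seeks + s.user_scans + s.user_lookups;
```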
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 17, 2011 at 9:06 am
Er, uh... well, this is embarrassing.
So, looking back through the settings on the index rebuild maintenance task, I see that "Change free space per page percentage to:" is set at 90%.
Of course, that amount of free space in the indexes for a DB this large is certainly going to cause problems. We want a fillfactor of 90%, not 90% of free space per page.
My understanding is that "Change free space per page percentage to" is the exact opposite of fillfactor... so, we were completely off when the index rebuild ran. Understandably, going from a fillfactor of 90% to only 10% is gonna cause some issues - and we ran outta space.
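Right: "free space per page" is 100 minus the fill factor, so 90% free space means FILLFACTOR = 10. The intended setting, expressed directly in T-SQL (table name is a placeholder):

```sql
-- 10% free space per page, i.e. pages filled to 90%
ALTER INDEX ALL ON dbo.YourTable   -- placeholder table
REBUILD WITH (FILLFACTOR = 90);
```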
So, I think I found our problem. The only question now is how to find the cleanest/fastest way to fix this...
Anyone see a problem with just performing a shrinkfile, then dropping the indexes and just rerunning the rebuild index job with the correct setting?
Thanks again.
February 17, 2011 at 10:41 am
sickpup (2/17/2011)
Er, uh... well, this is embarrassing. So, looking back through the settings on the index rebuild maintenance task, I see that "Change free space per page percentage to:" is set at 90%.
Of course, that amount of free space in the indexes for a DB this large is certainly going to cause problems. We want a fillfactor of 90%, not 90% of free space per page.
My understanding is that "Change free space per page percentage to" is the exact opposite of fillfactor... so, we were completely off when the index rebuild ran. Understandably, going from a fillfactor of 90% to only 10% is gonna cause some issues - and we ran outta space.
So, I think I found our problem. The only question now is how to find the cleanest/fastest way to fix this...
Anyone see a problem with just performing a shrinkfile, then dropping the indexes and just rerunning the rebuild index job with the correct setting?
Thanks again.
So my #3 was the winner, eh? :hehe:
If you really do need the space (quite likely) and can stand the downtime, I would drop all NC indexes, shrink the file BUT LEAVE ENOUGH SPACE FOR EVERYTHING INCLUDING GOOD-FILLFACTOR INDEXES AND FUTURE GROWTH. Then rebuild all the clustered indexes, then recreate the NC indexes (both with proper fillfactors).
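In T-SQL terms the sequence looks roughly like this. All object names, the logical file name, and the shrink target are placeholders; script out the real NC index definitions before dropping anything:

```sql
USE YourDatabase;

-- 1) Drop the nonclustered indexes (after scripting their CREATE statements)
DROP INDEX IX_YourTable_Col1 ON dbo.YourTable;

-- 2) Shrink the file, leaving headroom for rebuilt indexes and future growth
--    (target is in MB; the number here is illustrative only)
DBCC SHRINKFILE (YourLogicalFileName, 800000);

-- 3) Rebuild the clustered indexes with a sane fill factor
ALTER INDEX PK_YourTable ON dbo.YourTable
REBUILD WITH (FILLFACTOR = 90);

-- 4) Recreate the nonclustered indexes from the saved scripts
CREATE NONCLUSTERED INDEX IX_YourTable_Col1
ON dbo.YourTable (Col1)
WITH (FILLFACTOR = 90);
```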
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 17, 2011 at 11:19 am
TheSQLGuru (2/17/2011)
So my #3 was the winner, eh? :hehe:If you really do need the space (quite likely) and can stand the downtime, I would drop all NC indexes, shrink the file BUT LEAVE ENOUGH SPACE FOR EVERYTHING INCLUDING GOOD-FILLFACTOR INDEXES AND FUTURE GROWTH. Then rebuild all the clustered indexes, then recreate the NC indexes (both with proper fillfactors).
Yep, spot on. Or, at least that's how it looks right now.
We were able to free up a little space, which should give us enough breathing room to make it to the weekend - when we can have extended downtime.
One other question, while I've got your attention 😉
What's the implication of simply shrinking the DB file down as far as it will go? Since it won't ever shrink to the point of causing data loss, what is preventing me from running, say, DBCC SHRINKFILE (15, 1); (i.e. - telling SQL to shrink the file down to 1mb, knowing that it will only go as small as possible without causing data loss)?
Then, assuming the auto-growth settings are good, we could just let it grow as needed...?
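Before picking a shrink target (or shrinking "as far as it will go"), it can help to see how much of each file is actually in use. A sketch:

```sql
-- Used vs. allocated space per file in the current database
-- (size and SpaceUsed are in 8 KB pages; divide by 128 for MB)
SELECT name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
```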
Thanks!
February 17, 2011 at 12:28 pm
Shrinkfile is a bad option on production data files. It will fragment your indexes to heck and back. See this article.
February 17, 2011 at 1:54 pm
It's best to size your databases accordingly allowing for future growth.
Shrinking databases will cause massive fragmentation in your indexes, which in turn can cause bad performance.
TheSQLGuru's technique of dropping all NC indexes, shrinking appropriately, rebuilding the clustered indexes, then re-creating the NC indexes will allow the shrink but remove the fragmentation afterward. You're unlikely to want to do this regularly, though.
Re autogrow: it is generally best to use that as a backup. Have it turned on just in case, but you should monitor and size your databases accordingly. I wrote a blog on this a while back here http://www.gethynellis.com/2010/12/should-i-have-autogrowth-enabled-on-my.html and Paul Randal over at sqlskills.com has lots to say on the potential downfalls of shrinking databases regularly.
Gethyn Ellis
www.gethynellis.com
February 17, 2011 at 2:48 pm
There are MANY reasons not to use autogrowth to manage your file sizes. A quickie list (NOT all-inclusive!!): delays for file initialization if you aren't using instant file initialization (and note that the tlog is ALWAYS zeroed out), OS disk file fragmentation, internal index fragmentation, a BOOM the first time you do index maintenance again, lots of growths even with a proper fill factor, index defrag jobs that are not as effective, etc.
Just don't do it. I recommend to my clients to size the database to be big enough to contain 12-18 MONTHS of data addition including index sizes. Reevaluate every 3 months and after any exceptional events like "we just doubled our data because we added a new big client".
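Presizing along those lines is a one-liner; database name, logical file name, and the sizes below are placeholders, with autogrow left on only as a safety net:

```sql
-- Size the file for 12-18 months of growth up front;
-- FILEGROWTH remains as a backstop, not the sizing strategy
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourLogicalFileName,
             SIZE = 900GB,
             FILEGROWTH = 10GB);
```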
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 17, 2011 at 4:29 pm
Great info - thanks all. I knew shrinkfile would cause fragmentation... just didn't know how bad it would be.
So, here's the gameplan:
1. Identify the NC indexes on the affected DB file/filegroup and script their creation.
2. Drop the identified NC indexes.
3. Shrink the DB file to the correct size (including space for future growth).
4. Rebuild the associated clustered indexes (with the correct fillfactor).
5. Recreate the associated NC indexes (with the correct fillfactor).
6. Buy everyone a round. 🙂
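For step 1, a metadata query along these lines can find the NC indexes living on the affected filegroup. A sketch; the filegroup name is a placeholder, and partitioned indexes (whose data space is a partition scheme rather than a filegroup) would need extra handling:

```sql
-- Nonclustered indexes placed directly on the named filegroup
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name
FROM sys.indexes AS i
JOIN sys.filegroups AS fg
  ON fg.data_space_id = i.data_space_id
WHERE fg.name = 'AffectedFileGroup'   -- placeholder filegroup name
  AND i.type_desc = 'NONCLUSTERED';
```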
I guess my last question is - I'm targeting the indexes on the offending DB file, but should I drop/recreate/rebuild all the indexes in the DB? The errant Reindex process was targeting them all but, oddly enough, this one particular DB file is the only one that had noticeable growth.
The scheduled reindex job has been corrected, so it'll correct the fillfactor the next time it runs. And, since the other DB files didn't really explode in size, there's no need to worry about space limits or shrinkfile commands...
Just wondering if I'm cool with limiting my vengeance to the single DB file and leaving the others alone.
Thanks.
February 17, 2011 at 7:29 pm
You can check the fragmentation and 'page fullness' using the index physical stats DMV. You might want to do that on the other file groups just to be thorough. Could be that this file group blew up before much was done on the others?
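That DMV check might look like this (SAMPLED mode, since LIMITED doesn't return page fullness). A sketch:

```sql
-- Fragmentation and page fullness per index in the current database
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.page_count > 100;  -- ignore tiny indexes
```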
Good luck with the mulligan! 😉
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 18, 2011 at 7:02 am
One other thing to remember is that the shrink process is single-threaded and incredibly slow, especially for a database at 600GB.
Before running a full shrink, you may want to try a SHRINKFILE with TRUNCATEONLY: DBCC SHRINKFILE ({FileName} [, {FileSize}] [, TRUNCATEONLY])
You can omit the FileSize; if you don't specify it, SHRINKFILE will try to shrink down to the file size set at creation. This option will probably not free up enough space, but it doesn't move data around, so it runs very much faster than a regular SHRINKFILE (without the TRUNCATEONLY option), doesn't fragment data, and doesn't use up your whole weekend while you wait for it to finish.
Note - this TRUNCATEONLY is not the option being deprecated in SQL 2008+; that's the BACKUP LOG ... TRUNCATE_ONLY option.
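Concretely (logical file name is a placeholder):

```sql
USE YourDatabase;
-- Release unused space from the end of the file without moving any pages,
-- so no data movement and no new fragmentation
DBCC SHRINKFILE (YourDataFile, TRUNCATEONLY);
```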
February 21, 2011 at 9:02 am
Just a quick note to say that everything went smoothly over the weekend. I identified and dropped the nc indexes on the bad DB files. Then, ran shrinkfile, the index rebuild and then recreated the nc indexes.
The process freed up about 650gb and the full db backup ran to completion last night (the first in over a week).
Thanks to everyone that offered advice and suggestions.
February 22, 2011 at 6:38 am
sickpup (2/21/2011)
Just a quick note to say that everything went smoothly over the weekend. I identified and dropped the nc indexes on the bad DB files. Then, ran shrinkfile, the index rebuild and then recreated the nc indexes. The process freed up about 650gb and the full db backup ran to completion last night (the first in over a week).
Thanks to everyone that offered advice and suggestions.
GREAT news! It is really rewarding when people post up success stories like this. Let's hope it is a very long time before you need to start another "OMG..." thread! 😎
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 9, 2015 at 9:52 am
I have run into the exact same issue as the original poster. I believe his method to fix this issue would work for me as well but I could use some help with the actual scripts that I would use to complete the steps of his solution. Starting with the Drop Index scripts. Any help would be greatly appreciated. Thank you. -Jeremy
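Not the original poster, but as a starting point: SSMS can script each index before you drop it (right-click the index > Script Index as > CREATE To), and a metadata query can generate the DROP statements. A sketch only; it skips nonclustered primary keys and unique constraints (which can't be dropped this way), and you should review the output before running any of it:

```sql
-- Generate DROP statements for plain nonclustered indexes (review first!)
SELECT 'DROP INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id))
     + '.' + QUOTENAME(o.name) + ';' AS drop_stmt
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE i.type_desc = 'NONCLUSTERED'
  AND o.is_ms_shipped = 0
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0;
```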