October 2, 2009 at 7:29 pm
I just found out that my mdf file is fragmented at the Windows level into 6,000 fragments (I ran defrag.exe and checked the report). This is due to autogrowth, which was set to 5MB for a long time while the DB grew to 200GB; autogrowth is now 10%.
How does this affect the external and internal fragmentation of the data inside the tables and indexes? Does the above situation limit the efficiency of ALTER INDEX REBUILD (and REORGANIZE)?
How can I fix it? I would like to know whether there is a way to fix the file fragmentation, and then go with a proper autogrowth setting to keep the file clean. Once I can do that, I can apply the same approach to the log and tempdb as well.
Thanks for any direction on this
Dan
October 2, 2009 at 8:14 pm
First, for a 200GB file I would not use a percentage for auto growth - I would set it to a fixed size. I would probably set it at 5000MB - but that really depends on your hardware and whether or not you have enabled instant initialization.
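Something along these lines would do it - a minimal sketch, assuming a database named YourDB with a logical data file named YourDB_Data (check sys.database_files for your actual logical file name):

    -- Switch the data file from percentage growth to a fixed 5000MB increment
    -- (YourDB and YourDB_Data are placeholder names)
    USE master;
    GO
    ALTER DATABASE YourDB
    MODIFY FILE (NAME = YourDB_Data, FILEGROWTH = 5000MB);
    GO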
Second, I would not rely on autogrowth to actually grow the file. I would monitor the usage and schedule a growth before it actually got to that point. I would also make sure I always had enough space available in the data file to reindex my largest table and also provide several months of growth.
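A quick sketch of how to check that - allocated vs. used space per file (sizes come back in 8KB pages, so dividing by 128 gives MB); run it in the database you are watching:

    -- Allocated, used, and free space per file in the current database
    SELECT name AS logical_name,
           size / 128 AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files;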
Finally, the most efficient way of defragmenting a database file that I have found is to perform a backup/restore. The restore will actually lay down a brand new file sized at the full size of the database. However, with a 200GB database I am not sure you have the time available to do that. Diskeeper says it will defrag SQL Server database files - but I have never used it myself. Other tools also say they can defrag the database files while they are online - but most people recommend shutting SQL Server down to defrag the files.
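If you do go that route, the basic shape is below - just a sketch, with placeholder names and paths, and note that WITH REPLACE overwrites the existing database, so verify the backup first:

    -- Back up, then restore over the top; the restore lays down fresh files
    BACKUP DATABASE YourDB
    TO DISK = N'X:\Backups\YourDB.bak'
    WITH INIT, CHECKSUM;
    GO
    RESTORE DATABASE YourDB
    FROM DISK = N'X:\Backups\YourDB.bak'
    WITH REPLACE;
    GO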
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 2, 2009 at 9:12 pm
I've never found a good online solution and always shut down the SQL Server service and run the disk defrag utility. As already stated, configure a reasonable autogrowth setting based on a fixed amount, and always try to grow files manually instead of relying on autogrowth.
October 3, 2009 at 10:06 am
Jeff,
Instant file initialization is set up, so I can afford to go with a bigger number for autogrowth. Actually, if I have the space (I have to check on this), maybe I should allocate 300GB now, so the database can grow another 100GB without any autogrowth, right?
How can I know whether I have enough space in the data file to reindex my largest tables? Are you talking about the unused space in the data file? Right now, the data is 110GB, indexes are 71GB, and the rest (<20GB) is unused. The largest table is 26GB.
Regarding the backup/restore suggestion, am I understanding correctly that if I back up this same 200GB database and restore it right over itself, the restore will request 200GB of contiguous space from Windows and lay the file down in one straight piece, instead of the 6,000 broken pieces I have now?
You said, "However, with a 200GB database I am not sure you have the time available to do that." We can take several hours on the weekend to do whatever has to be done to get this right.
If I can afford the time for a backup/restore, perhaps I should first run the defrag utility on the disks (after bringing SQL down), and then run the backup/restore, right?
thank you
Dan
October 3, 2009 at 10:08 am
George, yes, that is the biggest lesson after seeing that the data file is in 6,000 distinct pieces: we will use a sensible autogrowth setting, or just allocate a good chunk of space to begin with.
And if it turns out to be a good idea to run the defrag utility, we will bring SQL down (we can afford the time on weekends). I also like Jeff's idea of a backup/restore.
thank you for your input
Dan
October 3, 2009 at 1:22 pm
Dan, yes you understand correctly. If you have the space available and expect to continue growing, there is no reason not to extend the database to 300GB.
Yes, with the largest table at 26GB, you would need that much space available in the data file to rebuild its indexes. 20GB of available space in the data file is only 10% free - most of the time we want to see at least 20%, or 6 months' growth, or some other reasonable amount of free space available.
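To see which tables need the most headroom, something like this sketch (SQL 2005 and up) lists reserved space per table:

    -- Approximate reserved space per table, largest first (8KB pages -> MB)
    SELECT OBJECT_NAME(object_id) AS table_name,
           SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
    FROM sys.dm_db_partition_stats
    GROUP BY object_id
    ORDER BY reserved_mb DESC;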
And, during a backup/restore - the restore will lay down a new file after removing the existing file for that database. So, yes - it will create the file as contiguously as possible on that volume. There is no reason to defrag the volume unless there are other files on the volume that need to be defragged also.
I would be concerned with how long the process takes, because backing up a 200GB database is going to take some time - as well as restoring it. It is really going to depend on where the backup is located (local/remote) and how fast the drives are. I can't answer that for you, but you should have an idea how long it is going to take.
I would recommend that you look at LiteSpeed, Red Gate's backup utility, or Idera's. Any of those will compress the backup, which speeds up both the backup and the restore. Working on very fast SAN drives, I recently restored a 450GB database in less than 2 hours using LiteSpeed.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 3, 2009 at 3:27 pm
Jeff
Our 200GB DB would take around 1.5 hours to back up and restore, so I will work on scheduling that. The volume it sits on reports 91% fragmentation - do you think a one-time volume defragmentation would help before I do the backup/restore?
And what is the solution for my inability to do efficient re-indexing? (You recommended at least 20% free space, which is at least 40GB for my DB, and I do not have that.) Would increasing the DB's size from 200GB to 300GB (if I can confirm I can get the extra space) fix that problem for me? Would the DB then be 110GB of data, 71GB of indexes, and the remaining ~120GB unused?
Appreciate your input
Dan.
October 3, 2009 at 3:39 pm
What other files reside on that volume? If there are other files - are they other database files or are they something else? If something else, you should really move them somewhere else.
Whether or not running your defrag is going to help depends on which files are fragmented. If it is just the mdf file, then you shouldn't need both a defrag and a restore - one or the other will take care of the mdf/ldf files for that database.
On your final question - yes, if you can get more space for the data file, then you want to allocate as much space as you can. But you also don't want to over-allocate - not because of any performance issue, but because over-allocated space just wastes storage.
If you monitor the amount of space used, you can then determine the average rate of growth. Let's say the average rate of growth on your system is 100MB per day - roughly 2.5GB of growth per month, or 15GB over 6 months. So we would need 15GB + 26GB (your largest table) for a total of 41GB, rounded up to 42000MB (I size my databases in whole MB values).
If you know you are going to just continue growing (no archives/purges, etc...) then add the 100GB if you can get it. If you can't get it now, plan for it in the next six months - and work with your boss to purchase what is going to be needed in the following six months.
Once you have that, you then monitor on a monthly basis and validate your growth estimates. If you are correct and still in range of the target you set, all is good. If not, adjust your estimates up or down and plan accordingly.
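One simple way to capture that trend is a small history table fed by a daily SQL Agent job - a minimal sketch (the table name and schedule are up to you):

    -- One-time setup: history table for daily size snapshots (illustrative name)
    CREATE TABLE dbo.FileSizeHistory (
        capture_date  datetime NOT NULL DEFAULT (GETDATE()),
        database_name sysname  NOT NULL,
        logical_name  sysname  NOT NULL,
        allocated_mb  int      NOT NULL,
        used_mb       int      NOT NULL
    );
    GO
    -- Daily step: snapshot sizes in each database you care about
    INSERT INTO dbo.FileSizeHistory (database_name, logical_name, allocated_mb, used_mb)
    SELECT DB_NAME(), name, size / 128, FILEPROPERTY(name, 'SpaceUsed') / 128
    FROM sys.database_files;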
Good Luck...
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 3, 2009 at 7:17 pm
Jeff
Thanks with your thoughts on Capacity Planning.
The drive holds only the mdf files of the server's several databases. So just doing a backup/restore of all the databases will fix the Windows file fragmentation, right? The log is on another drive, though - I assume the backup/restore fixes that also?
I will try for 100GB extra, but as you estimated, even if I get 41GB I should be OK! The test will be to reindex, and if the fragmentation percentage comes down, I'll know it was a success!
Is there any technical reason to size in whole MB sizes for the databases, or is it just a preference?
One thing I have learned from you today is to set up some kind of job that records the DB size daily, over a few months/years, for capacity planning's sake. Better to start it now.
once again, thank you
Dan
October 3, 2009 at 7:28 pm
Yes, if you back up and restore all the databases, the files associated with those databases will be defragmented.
No technical reason for sizing in MB - just preference. For me, it is simply easier to size as:
500MB
5000MB
50000MB
Instead of:
512MB = .5GB
1024MB = 1GB
5120MB = 5GB
The sizing works better because the tools all size in MB - not GB (well, a few things come back as KB).
Beware that not all indexes will end up 100% defragmented - that depends on their size, and any index of 1000 pages or fewer won't see any real benefit from re-indexing anyway.
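Here is a sketch of how to check fragmentation only on indexes big enough to matter - the 1000-page cutoff matches the rule of thumb above:

    -- Fragmentation for indexes over 1000 pages in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id = ips.index_id
    WHERE ips.page_count > 1000
    ORDER BY ips.avg_fragmentation_in_percent DESC;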
And finally, you can review the backup size history to see how large the databases are for capacity planning. It's not exact, but then again - we don't need to be exact for planning. All we need to do is be close enough to trend the growth rate.
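For example, a sketch like this pulls full-backup sizes from msdb (backup_size is in bytes; YourDB is a placeholder name):

    -- Full-backup sizes over time for one database, newest first
    SELECT database_name,
           backup_finish_date,
           backup_size / 1024 / 1024 AS backup_size_mb
    FROM msdb.dbo.backupset
    WHERE database_name = N'YourDB'
      AND type = 'D'   -- full backups only
    ORDER BY backup_finish_date DESC;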
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 3, 2009 at 8:44 pm
Jeff
Is the formula for the free file space needed [at least 20% and/or 6 months' growth] from your experience, or is it a formula from MS?
Also, if even 80% of the data/indexes get defragmented, I am more than happy - it does not have to be close to 100% all the time.
Thanks
Dan
October 3, 2009 at 9:20 pm
The amount of free space is from experience only. Nothing specific from Microsoft - but 10 years of practical experience has shown that being able to give management 6 months' notice on what we need to purchase is enough time.
It all depends on the system and how much storage you have available. It also depends on how often you actually want to grow the file, and how often you want to review the situation.
Also, you have to be aware of anything new happening - and be prepared to monitor more often when that happens. For example, this weekend we are implementing a new module on one of the systems I support. For the next couple weeks/months I will have to monitor this system much closer and see how this new activity is affecting our growth rate.
You cannot always follow a 20%/6 month rule - sometimes it's much less, sometimes more. The rule is to trend the growth and make sure you account for it without allowing autogrowth to kick in. If autogrowth does kick in - make it large enough so it doesn't happen a lot and gives you time to recover.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 3, 2009 at 9:48 pm
Jeff
Do you always beat autogrowth before it kicks in? (If your monitoring suggests it is going to happen in the next two weeks, do you manually change the database size with the ALTER DATABASE command, extending it for a few months or as much as you can?)
Dan
October 4, 2009 at 5:47 am
If the data and indexes are in separate files (mdf and ndf), does it matter in which file the free space is needed?
October 4, 2009 at 9:01 am
repent_kog_is_near (10/3/2009)
Jeff, do you always beat autogrowth before it kicks in? (If your monitoring suggests it is going to happen in the next two weeks, do you manually change the database size with the ALTER DATABASE command, extending it for a few months or as much as you can?)
Dan
Yes - the autogrowth is only set in case I can't get to it in time.
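As a sketch, that manual pre-growth looks like this - note that SIZE is the new total file size, not an increment, and the names are placeholders:

    -- Pre-grow the data file ahead of demand (new size must exceed current size)
    ALTER DATABASE YourDB
    MODIFY FILE (NAME = YourDB_Data, SIZE = 300000MB);
    GO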
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs