Disk Defragmentation - Priorities?

  • Hi,

    I am planning physical disk defragmentation on my production server. The server has 3 drives (G: for MDFs, H: for t-logs and I: for NDFs). Unfortunately, the approved downtime is only sufficient for me to defragment one physical disk, which means I have to pick the disk that will contribute most to performance. I'm planning to start with my G: drive first as it contains the mdf files.

    Is that the best approach, or should I pick the most fragmented one? Please advise.

    Thank you

    Below are the statistics reported from Disk Analyser:

    Volume Disk_G (G:)

    Volume size = 209 GB

    Cluster size = 4 KB

    Used space = 59,694 MB

    Free space = 151 GB

    Percent free space = 72 %

    Volume fragmentation

    Total fragmentation = 29 %

    File fragmentation = 58 %

    Free space fragmentation = 0 %

    File fragmentation

    Total files = 379

    Average file size = 193 MB

    Total fragmented files = 38

    Total excess fragments = 596

    Average fragments per file = 2.57

    Pagefile fragmentation

    Pagefile size = 0 bytes

    Total fragments = 0

    Directory fragmentation

    Total directories = 148

    Fragmented directories = 4

    Excess directory fragments = 5

    Master File Table (MFT) fragmentation

    Total MFT size = 628 KB

    MFT record count = 546

    Percent MFT in use = 86 %

    Total MFT fragments = 2

    --------------------------------------------------------------------------------

    Volume Disk_H (H:)

    Volume size = 74,959 MB

    Cluster size = 4 KB

    Used space = 8,957 MB

    Free space = 66,002 MB

    Percent free space = 88 %

    Volume fragmentation

    Total fragmentation = 44 %

    File fragmentation = 88 %

    Free space fragmentation = 0 %

    File fragmentation

    Total files = 22

    Average file size = 522 MB

    Total fragmented files = 6

    Total excess fragments = 36

    Average fragments per file = 2.63

    Pagefile fragmentation

    Pagefile size = 0 bytes

    Total fragments = 0

    Directory fragmentation

    Total directories = 8

    Fragmented directories = 1

    Excess directory fragments = 0

    Master File Table (MFT) fragmentation

    Total MFT size = 54 KB

    MFT record count = 40

    Percent MFT in use = 74 %

    Total MFT fragments = 2

    --------------------------------------------------------------------------------

    Volume Disk_I (I:)

    Volume size = 61,436 MB

    Cluster size = 4 KB

    Used space = 20,110 MB

    Free space = 41,325 MB

    Percent free space = 67 %

    Volume fragmentation

    Total fragmentation = 31 %

    File fragmentation = 63 %

    Free space fragmentation = 0 %

    File fragmentation

    Total files = 24

    Average file size = 1,113 MB

    Total fragmented files = 3

    Total excess fragments = 9

    Average fragments per file = 1.37

    Pagefile fragmentation

    Pagefile size = 0 bytes

    Total fragments = 0

    Directory fragmentation

    Total directories = 9

    Fragmented directories = 1

    Excess directory fragments = 0

    Master File Table (MFT) fragmentation

    Total MFT size = 57 KB

    MFT record count = 48

    Percent MFT in use = 84 %

    Total MFT fragments = 2

  • If it were me then I'd probably go with the mdf files first; that's where the majority of the data is going to be.

    That's fine for OS-level file fragmentation, but can I just check that you are also dealing with internal and external index fragmentation (or, to put it in English, are you rebuilding your indexes regularly)?

    If not, then you'd be better off rebuilding the indexes rather than defragmenting the disk.
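
    For example, a minimal rebuild might look something like this (the table name below is just a placeholder, and which syntax applies depends on your SQL Server version):

    -- Rebuild every index on one table; the fill factor is only an example value
    -- SQL Server 2005 and later:
    ALTER INDEX ALL ON dbo.YourTable REBUILD WITH (FILLFACTOR = 90)
    -- SQL Server 2000:
    DBCC DBREINDEX ('dbo.YourTable', '', 90)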

  • Mike,

    Yes I do. I have set up a maintenance job to reindex all DBs regularly. One question: even after a reindex, the external fragmentation usually still remains. Will disk defragmentation solve this? Does disk defragmentation really contribute to performance?

    Anyway, thanks for your input.

  • One question: even after a reindex, the external fragmentation usually still remains. Will disk defragmentation solve this?

    No. External fragmentation means that the logical chain of pages requires unnecessary extent switches. That fragmentation lives inside the database file itself, so defragmenting the file at the disk level will not remove it.

    Does disk defragmentation really contribute to performance?

    Yes, at least somewhat. However, since the database needs to be offline to do it, it is seldom done, as the performance gain is normally not that big. If care is taken when setting up the database, it should not become a problem.

  • I'd agree with Chris; care in setting up the database in the first place can significantly reduce disk fragmentation.

    I always defrag the disk on a new server before creating the database, and always try to ensure up front that there is enough empty space to last 6-12 months.
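
    As a rough sketch of what that looks like at creation time (all the names, paths and sizes below are made-up examples, not a recommendation for this particular server):

    -- Pre-allocate the files in large contiguous chunks on the freshly
    -- defragmented drives, and grow in fixed chunks rather than by 10%
    CREATE DATABASE Sales
    ON PRIMARY
        (NAME = Sales_Data, FILENAME = 'G:\Data\Sales.mdf', SIZE = 20480MB, FILEGROWTH = 512MB)
    LOG ON
        (NAME = Sales_Log, FILENAME = 'H:\Logs\Sales.ldf', SIZE = 4096MB, FILEGROWTH = 512MB)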

  • Chris,

    Can you please elaborate more on this:

    [External fragmentation means that the logical chain of pages requires unnecessary extent switches.]

    If external fragmentation is really high, how do I fix it?

  • Do you have multiple files in the filegroup where the table/index is stored? If so, then don't pay too much attention to the external fragmentation value in the DBCC SHOWCONTIG output, since the algorithm apparently does not handle multiple files very well. Look at Scan Density instead to see if you have external fragmentation; 100% is the best possible value here. You remove both logical and external fragmentation by using one of the four possible ways to defrag indexes. See my article Fragmentation 101 for more info.
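
    For example, a quick check could look something like this (the names are placeholders; DBCC INDEXDEFRAG is just one of the defrag options, handy because the index stays online):

    -- In the output, Scan Density [Best Count:Actual Count] close to 100%
    -- means little or no external fragmentation
    DBCC SHOWCONTIG ('dbo.YourTable')

    -- One way to remove fragmentation without taking the index offline
    DBCC INDEXDEFRAG (YourDatabase, 'dbo.YourTable', IX_YourIndex)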

  • Are you planning on using the built-in Windows defragmenter? It may or may not handle your large DB files. It wouldn't defrag the ones here and we had to get a third-party program. It seems to get confused when there isn't much contiguous free space, even if there is a decent amount of unused capacity. I recommend you have a contingency plan ready in case the Windows defragmenter doesn't do any noticeable defragmenting.

  • Were the databases initially created with small sizes, and is the growth factor still at the default 10%?

    If yes, then you should change that to some larger value, like 50 or 100 MB or some other appropriate 'chunk'. As mentioned earlier, sizing 'properly' at the creation stage and setting the growth factor to some 'large-enough' value will in many cases render file-level fragmentation a non-issue.

    Also, do not enable autoshrink; it also contributes to unnecessary file fragmentation.
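
    Something like this, for example (the database and logical file names are only placeholders; pick a growth chunk that suits your own workload):

    -- Change the default 10% growth to a fixed chunk and make sure autoshrink is off
    ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 100MB)

    ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF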

    /Kenneth

  • If you want to defragment the physical DB MDF or LDF files, use the Sysinternals CONTIG program: http://www.sysinternals.com/Utilities/Contig.html

    I usually schedule a SQL job for this during slow times, but frankly I've run it during the day and it's pretty quick, and I've heard no performance complaints. Example code:

    USE master
    GO
    -- Defragment every mdf and ldf file in the given folders (-v = verbose output)
    EXEC xp_cmdshell 'c:\contig\contig -v e:\database\*.mdf'
    GO
    EXEC xp_cmdshell 'c:\contig\contig -v d:\logs\*.ldf'
    GO

    This physically defragments all the mdf and ldf files. Reindexing is more relevant to performance, but a highly fragmented physical file can also contribute to performance issues.

    Francis

  • Thanks Chris for the link, it's really helpful.

    And yes, I do have multiple files in different filegroups, so I guess I can just focus on Scan Density.

  • Kenneth,

    Nope. The database was created with a sufficient amount of space, which should probably last us for half a year.

  • fhanlon,

    cool.. I have been looking for this kind of tool... thanks!!

    I'll give that a try during the coming scheduled downtime.

    Hope to see good results there...

  • Thanks for the valuable input guys...

    Really appreciate it

  • BlueIvy, did that program work? I've got a similar issue myself and the Windows defragger can't defrag the files...

    SQLGeordie | Web: Jarrin Consultancy | Blog: www.chrisjarrintaylor.co.uk | Twitter: @SQLGeordie
