Windows defragmentation

  • Hi everyone,

    How does Windows defragmentation work, and does it help SQL Server performance?

    Has anyone used a good defragmentation tool that not only did the job right, but also gave SQL Server a noticeable boost?

  • Defragmenting Windows is a lot like gluing a vase back together: a lot of work, and if it's done right nobody notices the hole in the back (until they add water)...

    Seriously, when you defrag Windows it physically moves the files it can around on disk and makes them contiguous (all in one piece). This makes the OS perform better because it spends less seek time gathering all the parts of a file.

    Now whether or not this will even WORK for SQL Server is another matter. Honestly, I don't think you will notice much difference with your data files... I don't think SQL Server will let the defragmenter push the MDF/LDF files around while they are open...
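
    For what it's worth, a quick way to see which physical files the service is holding open (and a defragmenter therefore can't move) is a query like the sketch below. It assumes sys.master_files, which needs SQL Server 2005 or later; on SQL 2000 the same information is in master..sysaltfiles.

        -- List the physical MDF/LDF paths SQL Server holds open;
        -- a disk defragmenter cannot move these while the service is running.
        SELECT DB_NAME(database_id) AS database_name,
               type_desc,                  -- ROWS (MDF/NDF) or LOG (LDF)
               physical_name,
               size * 8 / 1024 AS size_mb  -- size is stored in 8 KB pages
        FROM sys.master_files
        ORDER BY database_name, type_desc;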

    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • If you have the opportunity, I can't see where it would hurt.

    But make a backup first, and you have to stop SQL Server, or at least take the database offline. If you stop SQL Server the defragmenter will probably do a better job, as it can't touch open files.

    But that's what I meant by IF you have the opportunity; our maintenance window is limited, and very off-hours.
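
    If stopping the whole instance isn't an option, taking just the one database offline for the defrag window is usually enough to release its files. A rough sketch, where MyDB is only a placeholder name:

        -- Release the database's MDF/LDF so the defragmenter can move them.
        ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

        -- ... run the disk defragmenter against that drive here ...

        -- Bring the database back once the pass is finished.
        ALTER DATABASE MyDB SET ONLINE;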


    KlK

  • I think your best bet would be to rebuild your indexes after you perform the defrag.
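
    In case it helps, a minimal sketch of that rebuild, with dbo.MyTable as a placeholder; ALTER INDEX needs SQL Server 2005 or later, and DBCC DBREINDEX does the same job on SQL 2000:

        -- Rebuild every index on the table after the disk defrag pass.
        ALTER INDEX ALL ON dbo.MyTable REBUILD;

        -- Equivalent on SQL Server 2000:
        -- DBCC DBREINDEX ('dbo.MyTable');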

  • Thank you everyone for your input. I am buying a third-party defragmentation tool. I defragged one of my servers the other day and my users reported an increase in performance, so hopefully a cutting-edge tool will do an even better job and give the same kind of result.

  • As long as the MDF and LDF files are contiguous, SQL Server will perform somewhat better.

    They may not be, due to autogrow/autoshrink.
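
    Auto-shrink is the usual culprit, because the files get fragmented again every time they shrink and grow back. A quick check and fix, with MyDB as a placeholder:

        -- 1 means auto-shrink is on and the files are repeatedly shrinking and regrowing.
        SELECT DATABASEPROPERTYEX('MyDB', 'IsAutoShrink') AS is_auto_shrink;

        -- Turn it off so the files stop bouncing between shrink and regrow.
        ALTER DATABASE MyDB SET AUTO_SHRINK OFF;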

  • Autogrow in small increments will cause major fragmentation and interleaving of your SQL files if you have multiple DBs growing. Shrinking log files and letting them grow back routinely will cause issues also. I have found that separating the backup files onto their own disk is one of the best things you can do to stop fragmentation on the data drives, if you routinely keep backups on the same drive.
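
    A rough sketch of bumping the growth increment so the files grow in a few large chunks instead of many small interleaved ones; MyDB and the logical file names are placeholders (sp_helpfile lists the real ones):

        -- Grow in larger fixed chunks instead of many tiny increments.
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 1024MB);
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Log,  FILEGROWTH = 512MB);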

    The most dramatic experience I had with fragmentation was a problem that just couldn't be explained: one server that was having IO issues and running slow, CPU doing nothing, but IO so slow you could have read it as it went by (if you could visualize it)... It did major updates daily - roughly 50 GB of data that was read from a warehouse, manipulated, and then sent on to another server for use. The drive with the databases was so fragmented it literally had not one contiguous file, and the free space was so broken up that it had, I think, about 20 GB free and the largest chunk was about 150 MB.

    I used a third-party tool (the Windows defragger wouldn't touch it) to defrag the drive (it took days and several passes the first time) and performance would jump, but within days it was horrible again. I had to defrag every day and we couldn't understand why. Finally we had a hardware breakthrough: a drive (or array - I'm not a hardware guy and can't remember) finally went bad enough to be seen and replaced. After that the server was fine.

    The lesson I took away was that when the drives were pushed, defragmentation was the only thing that kept them limping along. If the drives weren't in perfect shape they couldn't handle the IO fast enough (presumably due to the looming hardware problem and having to reread). So I think defragmentation of the data files can and does have an impact on performance. It might be slight or profound, depending on the circumstances.

  • Defragging does gain an increase in SQL Server performance, but it is a pain to accomplish with Windows. Another way to increase performance is to place your indexes in their own filegroup (except for the clustered index, which should be in the PRIMARY filegroup). Also, determine what size your database really needs to be. Setting the correct size when the database is new will keep fragmentation away from the DB. If it must grow, set it to grow by a decent size, as the DB uses resources to grow (I set mine to 5 GB increments, but I have lots of disk space). Another thing to remember is to not exceed 80% capacity on your disk drive.
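
    A rough sketch of the filegroup placement; the filegroup, file path, table, and index names are all placeholders:

        -- Add a filegroup just for nonclustered indexes, with its file on its own drive.
        ALTER DATABASE MyDB ADD FILEGROUP IndexFG;
        ALTER DATABASE MyDB
        ADD FILE (NAME = MyDB_Indexes, FILENAME = 'I:\SQLData\MyDB_Indexes.ndf', SIZE = 2048MB)
        TO FILEGROUP IndexFG;

        -- The nonclustered index goes on the index filegroup; the clustered index
        -- (the data itself) stays in PRIMARY.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        ON IndexFG;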

    Don't forget to rebuild your indexes from time to time, including the clustered ones. I have found that dropping all non-clustered indexes, then rebuilding the clustered indexes, and finally recreating the non-clustered indexes keeps the DBs running very smoothly.
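
    A minimal sketch of that rebuild order, with placeholder table and index names (the clustered index is assumed to be the primary key, PK_Orders):

        -- 1. Drop the nonclustered indexes first so the clustered rebuild doesn't rebuild them twice.
        DROP INDEX IX_Orders_CustomerID ON dbo.Orders;   -- SQL 2000 syntax: DROP INDEX Orders.IX_Orders_CustomerID

        -- 2. Rebuild the clustered index, which rebuilds the table's data pages.
        DBCC DBREINDEX ('dbo.Orders', 'PK_Orders');

        -- 3. Recreate the nonclustered indexes against the freshly rebuilt table.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);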

    Hope this helps


    Marvin

    Marvin Dillard
    Senior Consultant
    Claraview Inc

  • Best to do both: that is, defrag your indexes and also do a disk defrag. But to get the most out of your disk defrag, you will need to take SQL Server offline so that the MDF is most efficiently defragged. I have used Diskeeper and it's a good product, though probably not really needed for a DB server - it also doesn't work any more efficiently than the one that comes bundled with Windows. I think Diskeeper provides Windows with that utility as well.
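
    One way to tell whether the index side needs attention, before or after the disk pass, is to check logical fragmentation; dbo.Orders is a placeholder, and the DMV form needs SQL Server 2005 or later (DBCC SHOWCONTIG is the SQL 2000 way):

        -- Logical fragmentation per index in the current database (2005+).
        SELECT OBJECT_NAME(object_id) AS table_name,
               index_id,
               avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');

        -- SQL 2000 equivalent for a single table:
        -- DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES;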

    Diskeeper is good because it lets you set a priority and run on its own whenever the server is least busy, or you can schedule it. So in my opinion, using the Windows defrag for a database server is good enough, as you should take the DB offline during off-peak times and defrag the disks anyway. Also, I am told that defragmentation over Terminal Services takes longer - I am not convinced of this, but that's what some people have told me.

  • PerfectDisk is the one I used, and it did defrag what Windows would not. But the disks I was working on were impossibly fragmented. I probably should have copied the files off and back onto the disk to defrag them the first time, but we couldn't stand the complete downtime. I'm not advertising - I haven't used any other Windows defragger except PerfectDisk, so I can't compare.
