The Real World: Fragmented Disk and High PAGEIOLATCH Waits

  • I don't post on here much, but I've been following different articles on this site for years. I'm a sysadmin who converted to the DBA role. After enough times of the boss telling you to install and support SQL because it "runs on the server", you are forced to learn a thing or two...

    I agree with everyone who has posted about pre-allocating and all the preventative maintenance items. Those are all great things and should be done. Unfortunately, sometimes you are the one who walks into a new employer or client and finds a big mess looking like the red screenshot from this article. You know the previous guys didn't do it right, but you have to clean it up...

    The other posters had some good suggestions for managing fragmentation of the physical files on the volume. Another option is contig, which is a free download, but you'd have to set up a quick script to run it. I like to use smaller files in the same filegroup, which makes things easier if you have to defrag; so does maintaining 40% free space on that volume, or making sure that the free space is a bit larger than the size of the largest file.

    I always start by checking partition alignment and sector size. Someone asked for the commands; they are documented in multiple articles online and Google/Bing should find them easily. If you have trouble, try this in a command prompt on 2003 or an admin prompt on 2008+:

    fsutil fsinfo ntfsinfo <driveletter:>

    If you see 4096 displayed for "Bytes Per Cluster", the volume is formatted with the default Windows settings. Ideally you would see 65536 (64 KB) in this field. Security best practices say don't enable xp_cmdshell, but you could run this command using that feature if you wanted to. Perhaps on the flavors of 2008 that support PowerShell SQL Agent jobs you could investigate running it there.
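    If you were automating this check across many servers, the "Bytes Per Cluster" line can be picked out of the fsutil output with a few lines of script. A minimal sketch, assuming you've captured the command output as text - the sample output below is illustrative, not from a real server:

```python
# Sketch: pick the "Bytes Per Cluster" value out of "fsutil fsinfo ntfsinfo"
# output. The sample text is illustrative, not captured from a real server.
def cluster_size(fsutil_output):
    for line in fsutil_output.splitlines():
        if "Bytes Per Cluster" in line:
            return int(line.split(":")[1].strip())
    return None  # line not found

sample = """Bytes Per Sector  :                512
Bytes Per Cluster :                65536
Bytes Per FileRecord Segment    :  1024"""

size = cluster_size(sample)
print(size)  # 65536 -> formatted with the 64 KB allocation unit you want
print("default 4 KB format" if size == 4096 else "non-default allocation unit")
```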

    For partition alignment it's the same as above - Google/Bing work - or run diskpart:

    diskpart

    list disk

    select disk (number of disk you are checking)

    list partition

    This will display all partitions on the disk along with their offsets. You will see 1024 KB on 2008 servers, while on 2003 the default is 32 KB. If you ran the command create partition primary align=64 (in diskpart) when you created the partition, then you will see 64 KB. I've been formatting my new volumes on 2003 servers scheduled for upgrade to SQL 2008 with a 1024 KB offset so they match what Windows 2008 does by default. The upgrade is mostly migration, moving from older Windows 2003/SQL 2000 to 2008 R2/SQL 2008 R2.
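    The alignment rule behind those numbers is just modular arithmetic: an offset is aligned when it divides evenly by the boundary you care about. A quick sketch checking the offsets mentioned above against a 64 KB boundary:

```python
# Sketch: an offset is "aligned" when it divides evenly by the stripe/cluster
# boundary you care about (64 KB here).
KB = 1024

def is_aligned(offset_bytes, boundary=64 * KB):
    return offset_bytes % boundary == 0

print(is_aligned(63 * 512))   # False - Windows 2003 default (63 sectors = 32256 bytes)
print(is_aligned(64 * KB))    # True  - "create partition primary align=64"
print(is_aligned(1024 * KB))  # True  - Windows 2008 default 1 MB offset
```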

    So far so good - 51 of 75 SQL 2000 boxes to upgrade remain...

    -R

  • I just wonder, are 400 fragments for a 300GB database file really such a bad thing for performance? It means that the average fragment size is around 768 MB. Isn't that big enough? Is it worth running disk defragmentation in this case?

    I would be much more concerned about logical fragmentation of indexes (internal and external) rather than physical file fragmentation.
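    For what it's worth, the average-fragment figure is easy to sanity-check (binary units assumed):

```python
# 300 GB file split into 400 fragments -> average fragment size in MB
GB = 1024 ** 3
MB = 1024 ** 2

avg_mb = (300 * GB) / 400 / MB
print(avg_mb)  # 768.0 - each fragment averages 768 MB
```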


    Alex Suprun

  • Alexander Suprun (4/21/2012)


    I just wonder, are 400 fragments for a 300GB database file really such a bad thing for performance? It means that the average fragment size is around 768 MB. Isn't that big enough? Is it worth running disk defragmentation in this case?

    I would be much more concerned about logical fragmentation of indexes (internal and external) rather than physical file fragmentation.

    Well Alex, I believe it did have an impact, because a single table in this particular database was about 170 GB at the time. Fetching records from this huge table could pose a problem if the pages are on non-contiguous sectors. We did need to do some index defragmentation too anyway. What do you think?

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • You might want to consider using Sysinternals (now part of Microsoft) contig to defrag. This is a free command-line tool that allows you to specify a single file to defrag, giving you much more control over the defrag process. It also defrags the MFT if needed, and can produce a list of fragmentation.

  • Once again I have to recommend Sysinternals' free contig command-line tool. If your SQL files are in or below e.g. E:\SQL_DATA you can use the command

    contig -a -s E:\SQL_DATA

    to get a list of the file fragmentation - it will look a bit like

    Contig v1.6 - Makes files contiguous

    Copyright (C) 1998-2010 Mark Russinovich

    Sysinternals - http://www.sysinternals.com

    E:\SQL_DATA\DB is defragmented

    E:\SQL_DATA\DB\DB1.MDF is in 6 fragments

    E:\SQL_DATA\DB\DB2.mdf is defragmented

    E:\SQL_DATA\DB\DB3.mdf is in 2 fragments

    E:\SQL_DATA\DB\DB4.mdf is defragmented

    E:\SQL_DATA\DB\DB5.mdf is in 4 fragments

    E:\SQL_DATA\DB\DB6.mdf is in 4 fragments

    E:\SQL_DATA\DB\DB7.MDF is defragmented

    E:\SQL_DATA\DB\DB8.mdf is defragmented

    E:\SQL_DATA\DB\DB9.mdf is in 9 fragments

    E:\SQL_DATA\DB\DB10.mdf is defragmented

    Summary:

    Number of files processed : 11

    Average fragmentation : 2.81818 frags/file

    The above file names are anonymised for the post but taken from real life. Here, e.g., DB9.mdf is a bit fragmented - fix it with

    contig -v E:\SQL_DATA\DB\DB9.mdf

    after that it was listed as defragmented.

  • Whether 400 fragments for a 300GB file is important really depends on your particular access patterns: they might otherwise need only very short, infrequent drive head movements, but the fragmentation could be forcing longer head movements. Perhaps only a small area at the beginning or end has most of the fragments, and that's where most of your I/O is. Perhaps not.

    In general, when I see a DB file in more than a handful of fragments, I try to defrag it whenever the server can handle some performance degradation.

    For commercial products, Perfect Disk and Diskeeper are both popular.

    For other products, Sysinternals Contig is useful, as are Piriform Defraggler, UltraDefrag, and others.

    FYI: even the SSDs I've tested show substantially better sequential performance than random performance.

    ETA:

    wmic partition

    or

    wmic /node:"server" partition get /all

    will show the partition's starting offset in bytes, so you can see what it's aligned to.

    wmic volume

    or

    wmic /node:"server" volume get /all

    will show the volume block size.
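    If you want to interpret that StartingOffset value, a small sketch (the offsets below are illustrative, matching the defaults mentioned earlier in the thread) finds the largest power-of-two boundary an offset is aligned on:

```python
# Sketch: largest power-of-two boundary (in bytes) that evenly divides a
# partition's StartingOffset, i.e. what the partition is aligned to.
def alignment_boundary(offset_bytes):
    boundary = 1
    while offset_bytes % (boundary * 2) == 0:
        boundary *= 2
    return boundary

print(alignment_boundary(1024 * 1024))  # 1048576 -> aligned on a 1 MB boundary
print(alignment_boundary(63 * 512))     # 512     -> only sector-aligned
```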

  • James Horsley (4/21/2012)


    You might want to consider using Sysinternals (now part of Microsoft) contig to defrag. This is a free command-line tool that allows you to specify a single file to defrag, giving you much more control over the defrag process. It also defrags the MFT if needed, and can produce a list of fragmentation.

    Does Sysinternals require much free disk space?

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • Does Sysinternals require much free disk space?

    At the end of the day, fully defragmenting a file is going to need quite a lot of free space whatever tool you use, because the tool has to find a contiguous block of disk to write the file to. Contig is mainly a file-by-file defragger, so it does not use multiple passes to try to get the optimum layout, but it does allow you to target just the specific files that would benefit most from a defrag. One of the best tools I have found for a more controllable defrag strategy is MyDefrag, the successor to JKDefrag (http://www.mydefrag.com/index.html). It allows you to specify the style of defragmentation you want - for example, you can ask it to consolidate free space. It also supports a special defrag scripting language; it is quite complex, so you will need to read the info on the site carefully.

  • For a good overview of how fragmentation affects SQL Server performance, check out Brad McGehee's article on the subject: http://www.bradmcgehee.com/2011/06/do-you-ever-physically-defragment-your-sql-server-mdf-ldf-files/

    On a (somewhat) related topic, also read Paul Randal's blog on why you should avoid shrinking databases: http://www.sqlskills.com/BLOGS/PAUL/post/Why-you-should-not-shrink-your-data-files.aspx The only reason I link this is that we've been discussing growths, but many times people don't realize the damage they're doing when using the shrink task.

    =============================================================
    /* Backups are worthless, Restores are priceless */

    Get your learn on at SQL University!
    Follow me on Twitter | Connect on LinkedIn
    My blog: http://sqlchicken.com
    My book: Pro Server 2008 Policy-Based Management

  • As it has moved the database to the newly extended part of the volume, will this be slower (assuming a single large disk, or even a number of RAID disks) due to the new space being on the slow part of the disk?

    On a new drive, Windows will allocate new files to the outside of the disk - the fastest part - so as the system has moved the files to the newly extended part of the disk, won't this also be the slowest part of the disk?

    The way to remedy this would be to format the disk or use disk software designed to speed up disk access.

    Also, have you considered Diskeeper, which is SAN-aware (if it's on a SAN) and proactively reduces fragmentation so that you don't have to run long defrag cycles?

  • FYI for folks running older versions (2010) of Diskeeper, there was a major issue found with IntelliWrite and SQL Server: http://sqlblog.com/blogs/jonathan_kehayias/archive/2009/12/29/diskeeper-2010-update.aspx

    For those of you that don't have a defrag product with specialty features, I highly recommend you do NOT try to defrag using native utilities, as you could tank the performance of your SQL Server data files due to the way they restructure everything on disk.

    =============================================================
    /* Backups are worthless, Restores are priceless */

    Get your learn on at SQL University!
    Follow me on Twitter | Connect on LinkedIn
    My blog: http://sqlchicken.com
    My book: Pro Server 2008 Policy-Based Management

  • Just to clarify, there are different operating systems:

    http://www.whylinuxisbetter.net/items/defragment/index.php?lang=

  • jswong05 (4/23/2012)


    Just to clarify, there are different operating systems:

    http://www.whylinuxisbetter.net/items/defragment/index.php?lang=

    1. Your link is not related to the topic discussed at all.

    2. It doesn't explain what will happen if there is no big enough chunk of free space. Will Linux magically create one?

    3. It doesn't explain what will happen if the file needs to grow. Will it be moved to another place on the disk? Ha, nice strategy.


    Alex Suprun

  • jswong05 (4/23/2012)


    Just to clarify, there are different operating systems:

    http://www.whylinuxisbetter.net/items/defragment/index.php?lang=

    I am wondering what you are doing on SSC. Have you found a way to run SQL Server on Linux perhaps?

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • kennethigiri (4/24/2012)


    jswong05 (4/23/2012)


    Just to clarify, there are different operating systems:

    http://www.whylinuxisbetter.net/items/defragment/index.php?lang=

    I am wondering what you are doing on SSC. Have you found a way to run SQL Server on Linux perhaps?

    Actually - while not defending the rather lame post - now that you can officially mount databases from SMB shares in SQL Server (since SQL 2008 R2), it is possible to have your database files on a Linux disk...

