Dropping and restoring DB to fix file fragmentation

  • Jeffrey Williams 3188 (12/2/2011)


    You can reduce your backup time by using one of the third-party tools that compresses backups.

    Redgate's SQL Backup

Quest's LiteSpeed

    Idera's SQL Safe

All of the above are good options, and each one comes with a fully functional trial edition. I can back up a 1.5TB database in about 2 hours using LiteSpeed, and my 300GB database gets backed up in less than 30 minutes.

We are using native SQL backup and are able to back up 2TB in 2 hours, using compression plus multiple files.
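A minimal sketch of the native approach described above (database name and paths are placeholders; WITH COMPRESSION requires SQL Server 2008 Enterprise, or Standard from 2008 R2 on):

```sql
-- Striping the backup across multiple files lets SQL Server write them
-- in parallel, and COMPRESSION reduces both backup size and I/O time.
-- [BigReportingDB] and the drive paths are hypothetical examples.
BACKUP DATABASE [BigReportingDB]
TO DISK = N'E:\Backups\BigReportingDB_1.bak',
   DISK = N'F:\Backups\BigReportingDB_2.bak',
   DISK = N'G:\Backups\BigReportingDB_3.bak',
   DISK = N'H:\Backups\BigReportingDB_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```

Spreading the stripes across separate physical spindles (or LUNs) is what buys the parallel write throughput; striping to one disk mostly just splits the file.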

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • There is a good chance that you will not see any performance improvement from physically defragging the database files.

If you still want to, you can use Diskeeper to defrag the files while they are in use. They have a 30-day free trial, so that might be enough for you if you only want to do it one time.

    http://www.diskeeper.com/disk-defrag/sql-server-performance/

    I have also heard you can use the free sysinternals CONTIG.EXE to defrag database files while they are open.

  • Jeff Moden (12/2/2011)


    Ugh! Be careful folks. On a "crowded" hard disk, dropping and restoring a DB may not be as effective for defragging as you think. There may be little contiguous room on the hard drive.

That is a very good point. In this case there were just the DB files on the disk (117MB of unfragmented files and 0% fragmented free space on a 1.5TB drive) once I cleared off all the files I could.

They ended up giving me the server for the weekend, so I had time to do anything I needed, and the verdict is... it may help, but it may not. One DB restored contiguous; the other restored with even more fragments than it had before. It didn't matter which order I started the restores in: the DB that restored contiguous always restored contiguous, and the DB that restored fragmented always restored fragmented, even though both were fragmented before.

During the restore, the LUN backing the logical disk filled up, causing it to go offline. The SAN guy resolved that and I started the restores again. Both DB files were now laid out fragmented, but with only two or four file fragments each, regardless of which order I started the restores in.

A factor in this is that the LUN isn't dedicated to this server (apparently), so that may be affecting the results. I was also running the restores at the same time, letting the first fully allocate its file before starting the second restore, but that shouldn't make a difference.

  • Michael Valentine Jones (12/3/2011)


    There is a good chance that you will not see any performance improvement from physically defragging the database files.

Since this is a reporting DB with lots of sequential writes and reads, there's a good chance we will from that perspective. The fact that it's on a SAN with a configuration unknown to me (at least right now) may mean that defragging won't make a performance difference.

What prompted the defrag wasn't performance concerns. It was a CHECKDB issue likely related to sparse file allocation caused by fragmentation:

    http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx

I'll keep those tools in mind for the future, but since I have the server until Monday, having it down for the defrag that I am going to have to do isn't a concern.

If you need to keep the database online and you have another available LUN on your SAN, you could move all your clustered indexes, then your non-clustered indexes, off to another filegroup on a different LUN with CREATE CLUSTERED INDEX ... WITH (DROP_EXISTING = ON, ONLINE = ON) ON [YOUR-NEW-FILE-GROUP].

Depending on your version of SQL Server, this MIGHT help you move data off and then back on again without taking the database offline.
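A sketch of the move Eric describes (table, index, and filegroup names are hypothetical; ONLINE = ON requires Enterprise Edition, and the target filegroup must already exist with a file on the other LUN):

```sql
-- DROP_EXISTING rebuilds the index in one step instead of a separate
-- DROP and CREATE. Because this is the clustered index, rebuilding it
-- ON [FG_NewLun] moves the table's data to that filegroup as well.
CREATE CLUSTERED INDEX [CIX_Orders]
ON dbo.Orders (OrderID)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG_NewLun];
```

Running the same statement again with ON [PRIMARY] (or the original filegroup) would move the data back. Note the index definition must match what DROP_EXISTING allows; constraints backing the index add restrictions, as discussed below in the thread.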

    -Eric

  • Eric Niemiec (12/3/2011)


    If you need to keep the database online and you have another available LUN on your SAN, you could move all your clustered indexes, then your non-clustered indexes off to another file group on a different LUN with the CREATE CLUSTERED INDEX...WITH (DROP_EXISTING=ON, ONLINE=ON) ON [YOUR-NEW-FILE-GROUP].

Depending on your version of SQL Server, this MIGHT help you move data off and then back on again without taking the database offline.

That could end up being a huge effort and really slow down an online database, because many people's clustered index on a table is also the table's PK. If DRI has (hopefully) been instantiated, then you'll also need to drop a wad of FKs and reinstantiate those as well.

As a sidebar and a bit of a reminder, rebuilding clustered indexes helps database-level fragmentation... not disk-level fragmentation. Also as a reminder, moving the clustered indexes also moves the data itself.
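To make the distinction concrete, database-level (logical) fragmentation can be measured and fixed entirely inside the data file; the file's physical layout on disk is untouched. A sketch (the dbo.Orders table is a hypothetical example):

```sql
-- Measure logical fragmentation per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild reorders pages within the data file only; it does nothing
-- about how the file's extents are scattered across the disk.
ALTER INDEX ALL ON dbo.Orders REBUILD;
```

Disk-level fragmentation of the file itself is only addressed by the OS-level tools discussed earlier in the thread (Diskeeper, CONTIG.EXE) or by restoring onto contiguous free space.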

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

I am aware that moving the clustered indexes moves the data itself, and I guess that was my point: move the data off to a different filegroup on a different set of disks, where it will be written out sequentially during the move, and then move it back when and if desired, again written sequentially to a filegroup on the original drives, thus reducing disk-level fragmentation. (I assume my misunderstanding is that disk-level fragmentation could only be corrected this way if you were writing back to a newly created file every time? Not sure.)

What I was not aware of is that you would need to drop the FKs during an online rebuild with move (create with online & drop existing). I thought the whole purpose of online rebuilds in Enterprise Edition was to avoid this kind of extra effort and downtime. I understand the performance degradation that would take place during these operations, but it could be batched into smaller operations during times of less use; my thought was that at least it keeps the entire database online during the process.

    Any more information or details on the above topic would be great to help me understand it all better.

    Thanks for the reply.

    -Eric

  • My apologies. I'm not sure if the "DROP EXISTING" will cause the problem with FKs. I mistook you to mean that you were doing explicit drops. Using "DROP EXISTING" may not require the FK drops that I spoke of.

    --Jeff Moden

