Indexdefrag log swelling

  • Hi all,

    We have had an ongoing issue with index defragmentation taking long periods of time to complete, and I wanted to see if you knew a cleverer way of doing things. The requirement is that we defragment indices on SS2K tables that are log ship destinations. We are constrained to use Indexdefrag, as we may not take servers down UNLESS there is a breakage that needs to be fixed, thus no DBReindex.

    The problems are, firstly, that the work must be done over the weekend (I have no authority to dictate otherwise), and from previous experience the compaction phase alone will often last several hours, which exacerbates complaints about the overtime bill (as a footloose and single man I want my free time much more than the extra cash, so it's as painful for me to do it as for the company to pay for it!). Secondly, log sizes swell dramatically; I've seen them bloat to 80-90 GB on a 250 GB T/Log drive, so I've had to monitor constantly and cap the bloat before it became critical and disrupted the Indexdefrag's progress. I know that alternating between Backup Log/Shrinkfile will keep the log size down and won't set Indexdefrag back significantly, but that still means constant monitoring, and I would far rather get the job done with a process that might well run over the whole weekend but needs no major input from me other than at the start and the end.

    Is there perhaps a script out there that automates as much of the defrag as possible, includes a threshold that triggers a Backup Log/Shrinkfile whenever the log hits a certain size, and moves on to the next index once the previous one is fully defragged? Luckily we have plenty of room on another drive to back up the T/Log.
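
    For reference, the manual routine I currently alternate by hand is roughly this (the database name, logical log file name, and backup path below are just placeholders):

        BACKUP LOG MyDB TO DISK = 'F:\LogBackups\MyDB.trn'
        -- once the backup has cleared the log, shrink the file back towards its normal size
        DBCC SHRINKFILE (MyDB_log, 10000)   -- target size in MB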

    Thanks all,

    BLL

  • Working backwards...

    TLog backups - I've found in the past that it can help to increase the frequency of TLog backups during heavy index maintenance from, say, hourly to every minute.
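
    Something like this in an Agent job set to run every minute or two does the trick - just a sketch, with the database name and path as placeholders; the timestamp in the file name keeps each backup separate:

        DECLARE @file varchar(300)
        SET @file = 'F:\LogBackups\MyDB_'
                  + CONVERT(varchar(8), GETDATE(), 112) + '_'
                  + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '') + '.trn'
        BACKUP LOG MyDB TO DISK = @file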

    As for the reindexing itself...

    (1) You say that you're doing this on the Log Ship Destination? Is that right?

    (2) You're running SQL Server 2000. The bulk of the index defrag tools out there (e.g. those by Ola Hallengren or Michelle Ufford) are aimed at SQL2005 and better - let's face it, SQL2000 is way past its use-by date now.

    However, there are those of us who are still stuck with it (including me), and I did lash together a script that has a stab at the job - see http://thelonedba.wordpress.com/2011/07/29/index-maintenance-for-sql2k/. Obviously it's not as good as those above (and it nicks a few of their ideas), and it shouldn't be used in production without testing it out first... There's also no support associated with it, but it works for me on some large production databases. Given that you can't use DBCC DBREINDEX, you'll need to tweak the script a bit.
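
    For what it's worth, the core of that sort of script is nothing more than a cursor over sysindexes feeding DBCC INDEXDEFRAG. A stripped-down sketch of the general shape (not the script from the blog post, and untested as pasted here; a proper version would also check DBCC SHOWCONTIG first and skip indexes below a fragmentation threshold):

        DECLARE @objid int, @indid smallint, @tbl sysname, @idx sysname

        DECLARE defrag_cur CURSOR FAST_FORWARD FOR
            SELECT o.id, i.indid, o.name, i.name
            FROM sysobjects o
            JOIN sysindexes i ON i.id = o.id
            WHERE o.xtype = 'U'                       -- user tables only
              AND i.indid BETWEEN 1 AND 254           -- skip heaps and text/image entries
              AND i.name IS NOT NULL
              AND INDEXPROPERTY(o.id, i.name, 'IsStatistics') = 0
              AND INDEXPROPERTY(o.id, i.name, 'IsHypothetical') = 0

        OPEN defrag_cur
        FETCH NEXT FROM defrag_cur INTO @objid, @indid, @tbl, @idx
        WHILE @@FETCH_STATUS = 0
        BEGIN
            PRINT 'Defragmenting ' + @tbl + '.' + @idx
            DBCC INDEXDEFRAG (0, @objid, @indid)      -- 0 = current database
            FETCH NEXT FROM defrag_cur INTO @objid, @indid, @tbl, @idx
        END
        CLOSE defrag_cur
        DEALLOCATE defrag_cur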

    Thomas Rushton
    blog: https://thelonedba.wordpress.com

  • (1) You say that you're doing this on the Log Ship Destination? Is that right?

    I caught that as well. That does not sound right - a log shipping secondary sits in NORECOVERY or STANDBY, so you can't run index maintenance against it; the defrag needs to happen on the source database and flow across with the log backups.

    As for the transaction logs swelling during your maintenance period, you could do the following:

    - maintenance window starts

    - take full or differential backups of your databases that you are going to perform maintenance on

    - change the recovery model of the databases in question from full to simple

    - perform index maintenance

    - change the recovery model back to full

    - perform another backup

    While in simple recovery the transaction log is truncated at each checkpoint, so even though the index maintenance is still logged, the log will not grow like crazy.
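
    A rough sketch of that sequence, with the database name and backup paths as placeholders (bear in mind that switching out of full recovery breaks the log backup chain, which is why the backup at the end matters):

        -- maintenance window starts: take a baseline backup
        BACKUP DATABASE MyDB TO DISK = 'F:\Backups\MyDB_pre_maint.bak' WITH INIT

        -- let the log truncate at each checkpoint during the maintenance
        ALTER DATABASE MyDB SET RECOVERY SIMPLE

        -- ... run the index maintenance here ...

        -- back to full recovery, then re-establish the backup chain
        ALTER DATABASE MyDB SET RECOVERY FULL
        BACKUP DATABASE MyDB TO DISK = 'F:\Backups\MyDB_post_maint.bak' WITH INIT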

    Joie Andrew
    "Since 1982"

    Here are several things I have done in the past to hold transaction log growth to a minimum:

    1. Only defragment indexes that need it, say more than 30% fragmentation.

    2. Adjust the fill factor of indexes that get heavily fragmented. Reducing the fill factor of an index from, say, 90% to 70% can greatly reduce both the fragmentation and the defrag time.

    3. Schedule the transaction log backups at very frequent intervals while the defrag is running, say 5 minutes or less.

    4. Introduce short wait periods in between each defragmentation step. For example, if an index takes 5 minutes to defragment, wait 2 minutes after the step to give the log time to clear. I have also done this by putting in a wait loop that checks the percentage of space used in the transaction log, waits for one minute if it is above a certain threshold, and then checks again (there is a sketch of that loop after this list).

    5. Do not try to defragment all tables every week. Split them into several groups that get defragmented every 2, 3, or 4 weeks. Or only defragment the small tables every week, and split the larger tables into groups that get defragmented every 2, 3, or 4 weeks. In most applications, defragmenting once per month is enough to maintain good performance.
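
    For item 4, the wait loop can be as simple as polling DBCC SQLPERF(LOGSPACE) between defrag steps. A rough sketch, assuming the database is called MyDB and 60% log space used is the threshold:

        -- holds the output of DBCC SQLPERF(LOGSPACE): one row per database
        CREATE TABLE #logspace
        (
            DbName          sysname,
            LogSizeMB       float,
            LogSpaceUsedPct float,
            Status          int
        )

        DECLARE @pct float
        SET @pct = 100.0

        WHILE @pct > 60.0                         -- threshold is a placeholder
        BEGIN
            DELETE #logspace
            INSERT #logspace EXEC ('DBCC SQLPERF(LOGSPACE)')

            SELECT @pct = LogSpaceUsedPct FROM #logspace WHERE DbName = 'MyDB'

            IF @pct > 60.0
                WAITFOR DELAY '00:01:00'          -- give the log backups a chance to clear space
        END

        DROP TABLE #logspace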

    I would stay away from changing the recovery model from full to simple. It breaks the log backup chain (and with it any log shipping), so it is likely to cause more problems than it solves.

    I would also stay away from shrinking the log file. It will just grow again, and you can end up with a massive number of VLFs, which hurts performance. 80 to 90 GB isn't really that excessive on a 250 GB log drive.
