Weekly Maintenance

  • I know this topic has been discussed several times, but I just can't find the answers that I am looking for.

    I have created a stored procedure that does the following...

    1. Integrity Checks (DBCC CHECKDB)

    2. Index Defrag / Rebuild

    Fragmentation Guidelines

    Index smaller than 100 pages = do nothing

    Logical fragmentation < 10% = do nothing

    Logical fragmentation >= 10% AND < 30% = DBCC INDEXDEFRAG

    Logical fragmentation >= 30% = DBCC DBREINDEX

    3. Update Statistics

    4. Full Database Backup

    5. Transaction Log Backup

    Full/Bulk Logged: Transaction Log Backup

    Simple: Bypassed, cannot perform Transaction Log backup.
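
    For reference, the step-2 decision logic above could be sketched for a single index like this. This is only a sketch: the table name is hypothetical, sys.dm_db_index_physical_stats requires SQL Server 2005 or later, and index_id 1 here means the clustered index.

    ```sql
    -- Sketch of the step-2 thresholds for one (hypothetical) table's clustered index.
    DECLARE @frag FLOAT, @pages INT;

    SELECT @frag  = avg_fragmentation_in_percent,
           @pages = page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), 1, NULL, 'LIMITED');

    IF @pages >= 100
    BEGIN
        IF @frag >= 30
            DBCC DBREINDEX ('dbo.MyTable');          -- rebuild
        ELSE IF @frag >= 10
            DBCC INDEXDEFRAG (0, 'dbo.MyTable', 1);  -- 0 = current database; defrag in place
        -- fragmentation under 10% (or index under 100 pages): do nothing
    END
    ```

    In a real version of the procedure you would loop this over all indexes in the database rather than hard-coding one table.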

    Does anyone see anything wrong with what I am going to do? Or anything that I should add? My average database size is around 20 GB, and the reason it is in a stored procedure is that I have lots of databases to maintain, so a script makes it easy to deploy to new databases. I also have a very limited maintenance window every week: it is a 6-hour window, but I have to bring systems back up as soon as they finish.

    Also, on a few of my production databases we run in bulk-logged recovery mode, and my ldf files are almost as big as my mdf files. What is the best practice for keeping the ldf file under control? I have read articles that say to shrink, lots of articles that say not to shrink, and others that say to do it only when you need to. I run a t-log backup every hour; my t-logs average 5MB and peak at 30MB. I understand the ldf file grows with the database as it is used, but is it normal to have ldf files this big?

    Thanks in advance.

  • I will answer your second paragraph.

    To keep the ldf files down, increase the frequency of your log backups. That is best practice.

    Shrinking is not good practice; it is normally a last resort for one of two reasons: either you have run out of disk space, or you are not backing up the log frequently enough.

    On top of that, shrinking has a performance cost: shrinking and regrowing increase fragmentation, and you can get into a cycle of shrinking, growing, and having to rebuild the indexes and statistics.
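
    As a sketch of what more frequent log backups look like in practice, a job step scheduled every 15 minutes might be along these lines. The database name and backup path are assumptions, not anything from the original post.

    ```sql
    -- Hypothetical job step, run every 15 minutes so less log space stays active
    -- between backups. Database name and UNC path are placeholders.
    DECLARE @file NVARCHAR(260);
    SET @file = N'\\backupserver\logs\MyDb_'
              + REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(19), GETDATE(), 120),
                        '-', ''), ':', ''), ' ', '_')
              + N'.trn';

    BACKUP LOG MyDb TO DISK = @file;
    ```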

    --------------------------------------------------------------------------------------
    Recommended articles on how to help us help you and solve commonly asked questions

    Forum Etiquette: How to post data/code on a forum to get the best help by Jeff Moden
    Managing Transaction Logs by Gail Shaw
    How to post Performance problems by Gail Shaw
    Help, my database is corrupt. Now what? by Gail Shaw

    One thing I forgot to mention: backing up the log will not shrink the physical size of the log file; all it does is increase the amount of free space inside the log file. To physically alter the size of the log file, if it is too big, you would need to shrink it.
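
    If a one-off shrink really is warranted (say, after an unusual bulk load that ballooned the log), it is a manual DBCC SHRINKFILE against the log's logical file name, not something scheduled. The database and logical file names below are assumptions for illustration:

    ```sql
    USE MyDb;
    -- Find the log's logical name first:
    --   SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
    DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB (~1 GB)
    ```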


  • Looks like a good plan overall. I agree with Silverfox, don't shrink.

  • In addition to what SilverFox has stated:

    Step 3 is only necessary for indexes that have not been rebuilt (a rebuild already updates statistics with a full scan). If an index was only reorganized, then you do need to update statistics. If you modify your process to use sp_updatestats instead, only the statistics that need updating will be updated. And if you call sp_updatestats with @resample = 'RESAMPLE', the previous sampling rate will be used instead of the default sampling rate.
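
    For example, a step 3 along those lines could be as simple as the following (the database name is a placeholder):

    ```sql
    USE MyDb;
    -- sp_updatestats skips statistics whose underlying data has not changed;
    -- RESAMPLE keeps each statistic's most recent sampling rate.
    EXEC sp_updatestats @resample = 'RESAMPLE';
    ```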

    Step 5 is also not necessary, because you should have a separate job that performs transaction log backups. If this is the only time you backup your transaction log - then yes, your transaction logs could get quite large.

    And finally, I don't see any steps in here to remove old backup files. Are you keeping your backups forever, or do you have a separate process for cleaning that up?

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • sorry guys for a late response to all of your answers...BUSY monday morning 🙁

    For step 3 I am running sp_updatestats.

    I wasn't too sure about step 5 myself. I already have a procedure that backs up the transaction log and archives it by day; I keep 7 days on hand.

    I do have a procedure to remove the old backups. I didn't include that in my original post; it is part of my step 4.

    Do all of you run custom maintenance scripts, or do you modify the Maintenance Plan jobs to fit your databases? I am a somewhat new DBA and have been following practices set by somebody who has been with the company for several years. We have also migrated into a 24x7 shop, and uptime has become very important.
