Handling Transaction Log Growth after Optimizations Job

  • There is so much information out there about the correct way to shrink the transaction log file that my head is spinning. I have tried to read all of the discussions on this board, in BOL, and on various other sites about the correct way to deal with this. I think I've narrowed it down to what I need to do; however, I wanted to post the solution I've come up with and see what the gurus think.

     

    System Information:

    SQL 2000 - 8.00.760 - SP3, Enterprise Edition

    Cluster Environment

    Running Continuous Merge replication, with one subscriber.

    24*7 web application

    Database activity is at its lowest:  Between Midnight and 6am

     

    Database Size Currently:  8 Gig (and growing)

    Recovery mode set for database:  Full

    Backup Database: Every 12 hours

    Backup the Transaction Log:  Every hour on the hour (except the hours when Full Backups run)

     

    Issue:

    Every Tuesday and Saturday night I run the Maintenance Plan Optimizations job with these options:

    - Reorganize data and index pages

    - Reorganize pages with the original amount of free space

     

    Once this has run my Transaction Log goes from being 1000MB to 4495MB.
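
    From what I can tell, the "Reorganize data and index pages" option runs a DBCC DBREINDEX against every table, which is what drives the log growth while the database is in full recovery. A rough sketch of the equivalent command (the table name is just an example, and I believe the 0 keeps the original fill factor, matching the "original amount of free space" option):

    DBCC DBREINDEX ('dbo.Orders', '', 0)
    -- rebuilds all indexes on the table; the maintenance plan does this for each table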

     

    I need to shrink the log file but I also need to be able to restore to a point in time.

     

    Possible solution: I scheduled a job with 4 steps to do the following:

    Step 1

    BACKUP DATABASE MYDB

    TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MYDB\MYDB_Post_Opt.bak'

    WITH DESCRIPTION = 'Post Optimization Full Backup', RETAINDAYS = 5, NOINIT

     

     

    Step 2

    BACKUP LOG MYDB

    TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MYDB\MYDB_Post_Opt.TRN'

    WITH DESCRIPTION = 'Post Optimization Transaction Log Backup', RETAINDAYS = 5, NOINIT

     

    Step 3

    BACKUP LOG MYDB WITH TRUNCATE_ONLY

     

     

    Step 4

    DBCC SHRINKFILE ('MYDB', 1000)
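
    To check whether the shrink worked I was planning to look at the log size and used space with something like the following (and sp_helpfile to confirm the logical file name to pass to DBCC SHRINKFILE):

    EXEC sp_helpfile
    -- lists the logical and physical file names for the current database

    DBCC SQLPERF (LOGSPACE)
    -- reports Log Size (MB) and Log Space Used (%) for every database,
    -- so I can compare MYDB before and after the shrink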

     

    So my question is: is this the correct way to do it? I want to make sure I am not INVALIDATING my backups; as I'm sure it is for the rest of you, data protection is key here.

     

    Should I be doing another full backup after the DBCC SHRINKFILE? Should I skip the LOG backup WITH TRUNCATE_ONLY?

     

    This works in our test environment and the log shrinks to the desired size, but is there another alternative or a better way of doing this that I'm not thinking of?

     

    Thank you for taking the time to read this lengthy post,

    Barbara

  • Hi,

    Yes, you should do another full backup after the shrink file, because if you have to restore from the earlier backup you will have to do the DBCC SHRINKFILE again...

    And yes, you should skip the backup log with truncate_only, because any transaction occurring between step 2 and step 3 will be lost in case of a recovery, meaning you lose the ability to do a point-in-time recovery!

    One thing to do would be to run a log backup every 15 minutes during the index optimization; that might help in keeping the log size down...
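
    Something like this, scheduled every 15 minutes during the optimization window, would do it (the backup folder is just an example; adjust the path to your own setup):

    DECLARE @FileName varchar(255)
    SET @FileName = 'C:\SQLBackups\MYDB_Log_'
        + CONVERT(varchar(8), GETDATE(), 112)
        + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '') + '.TRN'

    BACKUP LOG MYDB
    TO DISK = @FileName
    WITH DESCRIPTION = 'Log backup during optimization', NOINIT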

    regards karl

  • As it seems that you expose yourself to a risk of losing data entered or modified during the optimization job by not taking log backups at that time, you might choose a slightly different approach anyway: either switch to the simple or the bulk_logged recovery model, and probably not deal with log shrinking at all. Also, performing two full backups one after another seems a little inconvenient to me. Though, it depends on your needs.
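
    For example, switching models is just one statement each way (a sketch only; test it in your environment, and check how it fits with your merge replication first):

    ALTER DATABASE MYDB SET RECOVERY BULK_LOGGED
    -- run the optimization job here, then switch back:
    ALTER DATABASE MYDB SET RECOVERY FULL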

  • Karl and Martin,

    Thank you both very much for responding.

    I tried setting up additional TL backup jobs but it appears that the Optimization job actually makes the TL Backup job take over an hour to complete.

    I'm going to look into switching to Simple recovery mode (testing it now) to see if this is an option for us.

    Thank you again!! 

    Barbara

  • I don't think Simple recovery is advisable in a production environment. Then you'll only be able to recover to a specific backup, not to any point in between backups, because you will not have any transaction logs.

    Do you need to reorganize so frequently ?

  • You've got it right, homebrew01: not simple, but bulk_logged is the way to go...

    karl

  • I wouldn't recommend Bulk-logged either in a production environment.

    Recovery model: Simple
    Benefits: Permits high-performance bulk copy operations. Reclaims log space to keep space requirements small.
    Work loss exposure: Changes since the most recent database or differential backup must be redone.
    Recover to point in time? Can recover to the end of any backup. Then changes must be redone.

    Recovery model: Full
    Benefits: No work is lost due to a lost or damaged data file. Can recover to an arbitrary point in time (for example, prior to application or user error).
    Work loss exposure: Normally none. If the log is damaged, changes since the most recent log backup must be redone.
    Recover to point in time? Can recover to any point in time.

    Recovery model: Bulk-Logged
    Benefits: Permits high-performance bulk copy operations. Minimal log space is used by bulk operations.
    Work loss exposure: If the log is damaged, or bulk operations occurred since the most recent log backup, changes since that last backup must be redone. Otherwise, no work is lost.
    Recover to point in time? Can recover to the end of any backup. Then changes must be redone.
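
    If it helps, you can check which model a database is currently using with something like:

    SELECT DATABASEPROPERTYEX ('MYDB', 'Recovery')
    -- returns SIMPLE, FULL, or BULK_LOGGED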
  • Thank you all for the additional information. I know how sometimes things can come across not the way you intended in a posting, so I want to say that I'm not questioning the additional comments that were made; I'm just trying to understand them better. I'm really confused now.

    Homebrew01: 

     

    Is reorganizing twice a week too much?

     

    If I did it only once a week would it take longer to run since the time in-between running it has increased?

     

    Even if I did it only once a week how would I keep the transaction log from growing to 4Gig, which is the main issue?

     

     

     

    I think I may be getting my Simple and Bulk-Logged models confused.  Which one can you not perform TL backups with?

     

    My plan was to use either the Simple or Bulk-Logged models and do the following:

     

    Full Backup once a week

    Differential every day (or maybe even twice a day)

    Transaction Logs every hour

     

    I was under the impression that if I had this in place I would at most lose one hour's worth of work.
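
    For the differential I was thinking of something along these lines (the path is just a placeholder):

    BACKUP DATABASE MYDB
    TO DISK = 'C:\SQLBackups\MYDB_Diff.bak'
    WITH DIFFERENTIAL, DESCRIPTION = 'Daily differential backup', RETAINDAYS = 5, NOINIT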

     

    Now I’m really confused.   All this because my TL grows to 4Gig twice a week from optimizations.

    Does anyone else run into this when they run Optimizations? 

     

    Thank you again for your help,

     

    Barbara

  • If you don't use "backup log with truncate_only" then your plans will work.

    If the TL gets to 4 GB twice a week, I would simply not mind...

    As long as the disk space is available, why go to all the hassle?

    regards karl

  • Simple Recovery (common in development or static environments) - You cannot back up transaction logs. If you have a hardware failure, or data corruption, you will have to revert to your last full backup & LOSE everything since then.

    Full Recovery (recommended in Production) - You can recover to a point in time. Restore full backup, then apply transaction logs forward either to a specified point in time, or right up to the failure.
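
    For example, a point-in-time restore looks roughly like this (the file names and the STOPAT time are made up):

    RESTORE DATABASE MYDB
    FROM DISK = 'C:\SQLBackups\MYDB_Full.bak'
    WITH NORECOVERY

    RESTORE LOG MYDB
    FROM DISK = 'C:\SQLBackups\MYDB_Log.TRN'
    WITH STOPAT = '2004-06-15 14:30:00', RECOVERY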

    Books Online help has many pages that go into the detail of the 3 recovery types, and the best methods for backing up & restoring databases.

    It's hard to say if you're optimizing too frequently. Are you doing it for a particular reason, or "for the heck of it"? I'm sure BOL or Microsoft has some best practices on that.

  • I thought that if I ran it twice a week it would help with performance, so I guess you could say I’m doing it for the heck of it. 

     

    Ideally I'd like to have the database in Full Recovery mode, but how do I do this and keep the log from growing to 4 Gig every time I run the optimizations? When you run optimizations, does your log grow significantly?

     

    I wish I could just let the log stay at 4 Gig, but then it increases the run time for the TL backups by about 20 minutes, and since I back up the TL every hour it'd eat up a lot of disk space.

     

    Thank you both very much for taking the time to discuss this with me; I really appreciate the insights.

     

    Barbara

  • Perhaps you should run your optimization jobs at a slow period once a week, right before a full backup. Then after the full backup, your t-log should be truncated back to the minimum size.

  • Just a couple of thoughts of mine.

    A bigger physical size of the transaction log file shouldn't mean a longer transaction log backup time, because only the actual transaction activity is stored in the backup - so it doesn't matter whether the size of the file is 1 or 10 GB, the size of the backup file will be the same; only the transaction records count.
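
    You can see this in the backup history in msdb - the log backup sizes track the activity, not the file size. For example:

    SELECT backup_finish_date,
           backup_size / 1048576.0 AS backup_size_mb
    FROM msdb.dbo.backupset
    WHERE database_name = 'MYDB'
      AND type = 'L'            -- 'L' = transaction log backup
    ORDER BY backup_finish_date DESC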

    Although I agree that the simple recovery model should generally be avoided in production, there are times when it's handy. In my shop I use these steps (a rough sketch of the whole sequence follows below):

    0. Make sure that all changes made before switching recovery to simple are captured in a log backup.

    1. Switch off transaction log backups.

    2. Switch to simple, which will even be faster than bulk-logged.

    3. Run the index rebuilds.

    4. Switch to full recovery mode.

    5. Run a full backup.

    6. Start the transaction log backups.

    After the full backup in step 5 your switch back to the full recovery model is complete, and you regain point-in-time recovery after the first transaction log backup. You expose yourself to the loss of transactions entered in the window between the end of the last transaction log backup (step 0) and the completion of the full backup (step 5). I don't know if this is acceptable for you, but when using the full and bulk_logged recovery models you must take transaction log backups during the optimization job; otherwise you lose the transactions just the same in the event of a failure.
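
    Put together, the whole sequence looks roughly like this (the paths and the table name are only placeholders, and in practice step 3 loops over all of your tables):

    -- steps 0 and 1: take a final log backup, then disable the hourly log backup job
    BACKUP LOG MYDB
    TO DISK = 'C:\SQLBackups\MYDB_PreOpt.TRN' WITH NOINIT

    -- step 2: switch to simple recovery
    ALTER DATABASE MYDB SET RECOVERY SIMPLE

    -- step 3: rebuild the indexes (repeat for each table)
    DBCC DBREINDEX ('dbo.Orders', '', 0)

    -- step 4: switch back to full recovery
    ALTER DATABASE MYDB SET RECOVERY FULL

    -- step 5: full backup to restart the log backup chain
    BACKUP DATABASE MYDB
    TO DISK = 'C:\SQLBackups\MYDB_PostOpt.bak' WITH NOINIT

    -- step 6: re-enable the hourly log backup job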

    As to how often you should run optimization jobs - you can analyze how fragmented your database gets over time. The DBCC SHOWCONTIG command is a good tool for that.
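
    For example (the table name is only an illustration):

    USE MYDB
    DBCC SHOWCONTIG ('Orders') WITH ALL_INDEXES
    -- look at Scan Density and Logical Scan Fragmentation to decide
    -- whether a rebuild is actually worth it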

     

     

     

  • Martin .... good points.  I do the same thing. I run an analysis of index fragmentation every couple of months, and then on a Sunday, I switch to Simple recovery, reindex, then switch back to Full recovery.   It might be tougher in a 24/7 environment.

  • Martin - thank you very much for your input.

    You've all given me a lot of great advice and I will definitely put it to use in coming up with a plan to take care of this issue.

    Thanks again for all the support - you've all been a great resource for me!

    Barbara
