Log backup job running a long time

  • Hi,

    We have a log backup job that runs 5 times daily. One run takes 4-5 times longer than the other runs. The backup size is the same for all runs. How can we identify why that particular run takes so long? We have a DBCC check job scheduled 2 hours before that log backup run, but there is no difference in the log backup size.

  • Any process that runs prior to that 'long' transaction log backup and changes a lot of data will generate a lot of transaction log records, and all of that has to be backed up.

    The actual problem here is that you are only running a transaction log backup 4-5 times a day.  This means you are willing to lose hours of data in the event of a disaster.  If you really don't care about point-in-time recovery, then it might be a better option to just switch the database to the simple recovery model.

    Assuming that isn't the case, then increase the frequency of your transaction log backups to at least every 15 minutes.  And make sure you are not backing up to a single device: each log backup needs to go to a new, timestamped file.
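
    The timestamped-file approach above can be sketched in T-SQL like this; the database name and UNC path are placeholders you'd replace with your own:

    ```sql
    -- Build a unique, timestamped destination for each log backup run.
    -- [MyDatabase] and the \\BackupServer path are example names.
    DECLARE @path nvarchar(260) =
        N'\\BackupServer\SQLBackups\MyDatabase_log_'
        + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss') + N'.trn';

    BACKUP LOG [MyDatabase]
        TO DISK = @path
        WITH COMPRESSION, CHECKSUM;
    ```

    Because every run writes a new file, no backup is ever overwritten and restores can pick an exact point in the log chain.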

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • ramana3327 wrote:

    Hi,

    We have a log backup job that runs 5 times daily. One run takes 4-5 times longer than the other runs. The backup size is the same for all runs. How can we identify why that particular run takes so long? We have a DBCC check job scheduled 2 hours before that log backup run, but there is no difference in the log backup size.

    The obvious "problem" is that your DBCC check job is using a bunch of IO and CPU.  I agree with Jeffrey Williams that you should be backing up more often, but that may not help your overall situation.
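
    One way to confirm that the slow run lines up with the DBCC window is to query the backup history SQL Server keeps in msdb; a sketch, assuming the default history retention:

    ```sql
    -- Compare log backup durations with their start times over the last week.
    -- If the long run's start time follows the DBCC job, the overlap is your answer.
    SELECT  bs.database_name,
            bs.backup_start_date,
            DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
            bs.backup_size / 1048576.0 AS size_mb
    FROM    msdb.dbo.backupset AS bs
    WHERE   bs.type = 'L'                                   -- log backups only
      AND   bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
    ORDER BY bs.backup_start_date;
    ```

    Since the sizes are roughly equal but one duration is much larger, the bottleneck is almost certainly contention (IO/CPU) rather than the amount of log being backed up.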

    How many CPU cores do you have, are they hyper-threaded, and what is your system-level MAXDOP set to?
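
    Both of those can be read straight from the instance; a quick sketch:

    ```sql
    -- CPU count and hyper-threading ratio as seen by SQL Server.
    SELECT cpu_count, hyperthread_ratio
    FROM   sys.dm_os_sys_info;

    -- Instance-level max degree of parallelism currently in effect.
    SELECT value_in_use AS maxdop
    FROM   sys.configurations
    WHERE  name = N'max degree of parallelism';
    ```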

    Also, where are your backups being stored?  Hopefully not on the same machine that hosts the databases being backed up.  That's a terribly wrong thing to do for many reasons.

    Also, are your backups using compression, and do your backup procs make use of BUFFERCOUNT and MAXTRANSFERSIZE?  If not, you should set the default for backup compression to ON, and start the other two settings at 17 and 1048576, at least as "starter" settings (which is what I'm currently using... first educated guess for me paid off big time!).
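
    Put together, those suggestions look roughly like this; the database name and path are examples, and 17 / 1048576 are the starter values mentioned above, not tuned numbers:

    ```sql
    -- Make backup compression the instance-wide default.
    EXEC sys.sp_configure N'backup compression default', 1;
    RECONFIGURE;

    -- A log backup using the suggested starter buffer settings.
    BACKUP LOG [MyDatabase]
        TO DISK = N'\\BackupServer\SQLBackups\MyDatabase_log.trn'
        WITH COMPRESSION,
             BUFFERCOUNT = 17,           -- number of IO buffers
             MAXTRANSFERSIZE = 1048576;  -- 1 MB per transfer
    ```

    Larger buffers and transfers usually shorten backup time, but the right values depend on your IO subsystem, so treat these as a starting point to measure against.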

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
