Transaction log full due to LOG_BACKUP

  • One of my production databases has a full transaction log because of a backup failure that occurred overnight. This caused the log to grow until it used up all available log space. Our backup admin has manually kick-started the backup; it is running at the moment, but it's taking a very long time because the transaction log file is 2TB in size.

    I am thinking of adding another T-log file to the database, as there is still plenty of physical space on the drive. This would allow the database to function normally while the backup is running. Is this recommended? I can't extend the current log file, as it's already at its 2TB maximum.
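
    For reference, a quick way to confirm why the log can't truncate (it shows LOG_BACKUP in this case) and how full it is; the database name below is a placeholder:

    -- Shows why each database's log can't be truncated; LOG_BACKUP means a log backup is needed.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyBigDb';  -- placeholder name

    -- Log size and percent used for every database on the instance.
    DBCC SQLPERF (LOGSPACE);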

  • Sorry... posted to the wrong question and removed this post. See the next post down for my answer to the problem posted here.

    --Jeff Moden



  • On the subject of the backup failure...

    IMHO (and experience, but YMMV), it's a complete waste of time to back up a 2TB log file (this happened to me on the 2nd day of my current employment, though that log was only 250GB).

    Take another full backup, set the recovery model to SIMPLE, do a CHECKPOINT or two, set the database back to the FULL recovery model, and take a DIF backup (or another FULL, if you prefer) to restart the log backup chain.
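
    In rough T-SQL, that sequence looks something like the sketch below; the database name and backup paths are placeholders, and this assumes you accept that it breaks the existing log chain:

    -- 1. Take a fresh full backup (placeholder name and path).
    BACKUP DATABASE [MyBigDb] TO DISK = N'X:\Backups\MyBigDb_full.bak' WITH INIT;

    -- 2. Flip to SIMPLE and checkpoint so the inactive log can be cleared.
    ALTER DATABASE [MyBigDb] SET RECOVERY SIMPLE;
    USE [MyBigDb];
    CHECKPOINT;
    CHECKPOINT;
    USE [master];

    -- 3. Flip back to FULL and restart the log chain with a differential (or another full).
    ALTER DATABASE [MyBigDb] SET RECOVERY FULL;
    BACKUP DATABASE [MyBigDb] TO DISK = N'X:\Backups\MyBigDb_diff.bak' WITH DIFFERENTIAL;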

    Then, make sure you get alerts if the logfile backup fails in the future.
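
    If the log backup runs as a SQL Agent job, one sketch of wiring up a failure notification; this assumes Database Mail and an operator already exist, and the job and operator names are placeholders:

    EXEC msdb.dbo.sp_update_job
        @job_name = N'LogBackup - MyBigDb',        -- placeholder job name
        @notify_level_email = 2,                   -- 2 = notify on failure
        @notify_email_operator_name = N'DBA Team'; -- placeholder operator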

    As a bit of a sidebar, I'd look into the index maintenance routine and eliminate the use of REORGANIZE.  It's not the "lower resource, log file friendly" tool that people say it is.  As I tell people (and have proven), "It's better to do no index maintenance than it is to do it wrong... and, if you're generically using REORGANIZE, you're doing it wrong".

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden wrote:

    On the subject of the backup failure...

    IMHO (and experience, but YMMV), it's a complete waste of time to back up a 2TB log file (this happened to me on the 2nd day of my current employment, though that log was only 250GB).

    Take another full backup, set the recovery model to SIMPLE, do a CHECKPOINT or two, set the database back to the FULL recovery model, and take a DIF backup (or another FULL, if you prefer) to restart the log backup chain.

    Then, make sure you get alerts if the logfile backup fails in the future.

    As a bit of a sidebar, I'd look into the index maintenance routine and eliminate the use of REORGANIZE.  It's not the "lower resource, log file friendly" tool that people say it is.  As I tell people (and have proven), "It's better to do no index maintenance than it is to do it wrong... and, if you're generically using REORGANIZE, you're doing it wrong".

    Thanks Jeff for the reply.

    After some further investigation, I found that the transaction log grew so large because of index rebuild jobs that run three times a week. On those days each t-log backup was more than 200GB, compared to less than 10GB on normal days, which also slows the backups down. Apparently, when the rebuild job rebuilds some of the very large indexes, a lot of t-log is generated because we're using the full recovery model. For now I've excluded several of these large indexes from the index rebuild job, and I may schedule a downtime to rebuild them under the simple recovery model.
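
    For reference, a rough query to spot the largest indexes (run it in the database in question); these are the likely candidates to exclude from blind rebuilds:

    -- Top 10 indexes by allocated space, in megabytes (8KB pages).
    SELECT TOP (10)
           OBJECT_NAME(ps.object_id) AS table_name,
           i.name                    AS index_name,
           SUM(ps.used_page_count) * 8 / 1024 AS used_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id  = ps.index_id
    GROUP BY ps.object_id, i.name
    ORDER BY used_mb DESC;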

  • I really should have a sticky on Jeff Moden's post about the (non)sense of generically rebuilding/reorganizing indexes.

    A lot of resources are consumed during the reindex with little or no gain.

    The statistics updates still matter.
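
    A minimal sketch of keeping statistics fresh without a rebuild; the table name is a placeholder:

    -- Update out-of-date statistics across the current database.
    EXEC sys.sp_updatestats;

    -- Or target a specific table with a full scan (placeholder table name).
    UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;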

     

  • Good information from Jeff.   I would add: improve your disk system/LUNs, and your network speed and reliability (good network topology is important).   Little things can help.  I have had similar experiences in the data center arena.  An efficient anti-virus/firewall setup can help as well.

     

    DBASupport
