Perf hit for increasing tran log backup frequency

  • we are currently doing a 1 hour tran log backup for a system, I'd like to push that up to 1 minute. Is there a performance hit to be concerned about with doing this?

    I've gotten some pushback on performance concerns about doing tran log backups this frequently. My view is that each hit will be much faster, just more frequent.

    What is the actual hit to SQL for a tran log backup?  locking, waits?

    Server is 2019 CU8 Standard Edition, 4 cores, 32 GB, on a virtual Server 2019 machine.

    All data/log files are on consolidated SSD storage.

    Edit: I should also mention that each inserted record is about 8 KB, with up to 1,000 records/day. Web app users will be pulling these records up and printing/downloading them, and their activity will be logged.

     

  • 1 min is just way too often, especially for only 1,000 rows per day. That's a minuscule load for SQL Server.

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • A lot of places do tran log backups every 15 minutes.


  • TangoVictor wrote:

    we are currently doing a 1 hour tran log backup for a system, I'd like to push that up to 1 minute. Is there a performance hit to be concerned about with doing this?

    I've gotten some pushback on performance concerns about doing tran log backups this frequently. My view is that each hit will be much faster, just more frequent.

    What is the actual hit to SQL for a tran log backup?  locking, waits?

    Server is 2019 CU8 Standard Edition, 4 cores, 32 GB, on a virtual Server 2019 machine.

    All data/log files are on consolidated SSD storage.

    Edit: I should also mention that each inserted record is about 8 KB, with up to 1,000 records/day. Web app users will be pulling these records up and printing/downloading them, and their activity will be logged.

    I agree that once per minute is probably a bit much, but the cool part is that if a backup job (and I'm assuming the use of SQL Agent for the backup job scheduling) is already in progress and exceeds one minute, the job won't fire again until the current run completes.

    Part of the reason people object to things like once-per-minute transaction log backups is the number of transaction log backup files you'd have to use to do a RESTORE. Some people work around that by doing multiple DIFF (differential) backups each day, but even that can be a complication that's totally unnecessary.

    To explain with a simile: when you do the laundry, which clothes do you wash... the clean ones, the dirty ones, or a mix of both? Obviously, washing the clean ones is counterproductive and a waste of resources, which means doing a mix of both falls into the same category, but that's what happens when most people do log file backups. They never separate the proverbial clean clothes from the dirty ones and end up doing the backup laundry over and over, even when there's no reason to do a log file backup because nothing changed. And THAT is frequently going to be the case for you if you go to 1-minute backups...

    ... unless...

    ... you separate the clean laundry from the dirty laundry and that IS EASILY possible to do in SQL Server.  All you need to do is to check ALL your databases by using the following code to build your list of databases that have actually recorded something in the log file.  If a database doesn't show up, then no changes in the database have been made and you can skip doing the log file backup.  This will allow you to check if a log file backup needs to be made once a minute without having as many as 60 log file backups per hour to restore.

    SELECT  DBName = name
           ,database_id
           ,log_reuse_wait_desc
    FROM    sys.databases
    WHERE   state_desc = 'ONLINE'
      AND   recovery_model_desc <> 'SIMPLE'
      AND   log_reuse_wait_desc = 'LOG_BACKUP' --Might need to tweak this based on your server setup
    ;

    My log file backup job runs every 15 minutes and I use code very similar to the above (I actually store the results in a temp table to control the backups from), because doing a backup even once every 15 minutes (or 30 minutes, or whatever) is still counterproductive if there's nothing in the database that needs a backup.

    And, no... I still don't recommend a backup job running once every minute but, if you're hell-bent on doing so for one reason or another, this will make restores much less painful, with or without the addition of doing DIFFs now and then.
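    The "temp table driving the backups" pattern Jeff describes could be sketched roughly like this. To be clear, this is a hypothetical sketch, not his actual job: the backup path (`X:\LogBackups\`), the file-naming convention, and the cursor wiring are placeholders you'd adapt to your own server.

    ```sql
    -- Step 1: capture only the databases whose log actually has something to back up.
    IF OBJECT_ID('tempdb..#LogBackupList') IS NOT NULL
        DROP TABLE #LogBackupList;

    SELECT  DBName = name
    INTO    #LogBackupList
    FROM    sys.databases
    WHERE   state_desc = 'ONLINE'
      AND   recovery_model_desc <> 'SIMPLE'
      AND   log_reuse_wait_desc = 'LOG_BACKUP';  -- may need tweaking per server

    -- Step 2: back up the log of each database on the list; the rest are skipped.
    DECLARE @DBName sysname
           ,@SQL    nvarchar(4000);

    DECLARE BackupCur CURSOR LOCAL FAST_FORWARD FOR
        SELECT DBName FROM #LogBackupList;

    OPEN BackupCur;
    FETCH NEXT FROM BackupCur INTO @DBName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @SQL = N'BACKUP LOG ' + QUOTENAME(@DBName)
                 + N' TO DISK = N''X:\LogBackups\' + @DBName + N'_'
                 + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss') + N'.trn'';';
        EXEC sys.sp_executesql @SQL;
        FETCH NEXT FROM BackupCur INTO @DBName;
    END;

    CLOSE BackupCur;
    DEALLOCATE BackupCur;
    ```

    Scheduled from a SQL Agent job every minute (or 15), this only ever issues BACKUP LOG for databases that have recorded changes since the last log backup.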

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks Jeff, and no not hell bent, I liked your explanation and that query.

    We do native SQL backups but they are triggered by a separate system that manages all backups across the organization.

    The nice part on the restore is you only need to pick a time, the system manages all the files and handles the restore.

    Will probably settle on 15-minute backups. That's 4x more frequent than the current schedule; should be fine.

  • Jeff Moden wrote:

    TangoVictor wrote:

    we are currently doing a 1 hour tran log backup for a system, I'd like to push that up to 1 minute. Is there a performance hit to be concerned about with doing this?

    ...

    Edit: I should also mention that each inserted record is about 8 KB, with up to 1,000 records/day. Web app users will be pulling these records up and printing/downloading them, and their activity will be logged.

    ...

    you separate the clean laundry from the dirty laundry and that IS EASILY possible to do in SQL Server.  All you need to do is to check ALL your databases by using the following code to build your list of databases that have actually recorded something in the log file.  If a database doesn't show up, then no changes in the database have been made and you can skip doing the log file backup.  This will allow you to check if a log file backup needs to be made once a minute without having as many as 60 log file backups per hour to restore.

    SELECT  DBName = name
           ,database_id
           ,log_reuse_wait_desc
    FROM    sys.databases
    WHERE   state_desc = 'ONLINE'
      AND   recovery_model_desc <> 'SIMPLE'
      AND   log_reuse_wait_desc = 'LOG_BACKUP' --Might need to tweak this based on your server setup
    ;

    My log file backup job runs every 15 minutes and I use code very similar to the above (I actually store the results in a temp table to control the backups from), because doing a backup even once every 15 minutes (or 30 minutes, or whatever) is still counterproductive if there's nothing in the database that needs a backup.

    And, no... I still don't recommend a backup job running once every minute but, if you're hell-bent on doing so for one reason or another, this will make restores much less painful, with or without the addition of doing DIFFs now and then.

    A check for one specific condition makes me very nervous:

    AND log_reuse_wait_desc = 'LOG_BACKUP'

    If, for example, it's 'ACTIVE_TRANSACTION' or 'CHECKPOINT', might I not still need to back up the log? I don't know that I do, but even more critically, I don't know for sure that I can safely skip it. Moreover, how do I keep up with future changes to that desc to make sure I don't falsely skip log backups later, too?

    I guess my big objection is to the EASILY claim.  I don't see this as being all that "easy" to do accurately.


  • To answer the question, you could instead change the criteria you cited to...

    AND log_reuse_wait_desc <> 'NOTHING'

    Like I said in the comment in the code I posted, the setting to choose may need to be tweaked based on esoteric knowledge of the server setup, which should be construed to include workload patterns.

    I understand your angst about the term "easily".  For the OP's workload, it is easy and that's who the comment was directed to.  The comment about the "tweaking" is for everyone else that may read the post.
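    Concretely, that inverted test would make the filter query look something like the sketch below: instead of backing up only databases reporting 'LOG_BACKUP', you skip only databases whose log reports nothing pending at all. (Whether 'NOTHING' alone is a safe skip condition for a given server is exactly the judgment call being debated here.)

    ```sql
    SELECT  DBName = name
           ,database_id
           ,log_reuse_wait_desc
    FROM    sys.databases
    WHERE   state_desc = 'ONLINE'
      AND   recovery_model_desc <> 'SIMPLE'
      AND   log_reuse_wait_desc <> 'NOTHING'  -- back up any log that isn't fully idle
    ;
    ```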

     


  • Jeff Moden wrote:

    To answer the question, you could instead change the criteria you cited to...

    AND log_reuse_wait_desc <> 'NOTHING'

    Like I said in the comment in the code I posted, the setting to choose may need to be tweaked based on esoteric knowledge of the server setup, which should be construed to include workload patterns.

    I understand your angst about the term "easily".  For the OP's workload, it is easy and that's who the comment was directed to.  The comment about the "tweaking" is for everyone else that may read the post.

    Actually, you specified the criteria in your original code and stated that they were EASY to check. Also, values other than 'LOG_BACKUP' are not "esoteric"; they're in fact very common.

    My main point is that this is an area where you want 100% accuracy. You don't want to skip a log backup to potentially save a few KB of disk, only to find out later that you permanently lost production data because you skipped that backup!


