Transaction Log Growth

  • I can't believe the great suggestions all of you are offering up--thanks so much for your continued help with this! I definitely have some follow-up questions/comments, but before I get into those, I should let you know that we've figured out what was causing the growth of the full, diff, and oddly the transaction log backups...

    We have an application that has its own transaction logging table in the database. A process within the application that runs every two hours had gone awry. It was making a slight update to several thousand records in one table, which in turn created several thousand records in the corresponding transaction logging table. Of course this had been increasing the size of our backups at a much more rapid rate than it should have, leading to my issues and this post in general. We corrected the process, reverted the updates that shouldn't have been happening, and archived a bunch of records from that transaction logging table to bring it back to a more normal size.

    I can't thank you all enough for your recommendations and suggestions.

    pietlinden, thanks for the comment about creating a table to log activity. I'm planning to do that with sp_WhoIsActive very soon.
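    In case it's useful to anyone else, here is a minimal sketch of that pattern, assuming Adam Machanic's sp_WhoIsActive is installed; the table name dbo.WhoIsActiveLog is just a placeholder I picked:

    -- Generate a logging table that matches sp_WhoIsActive's output columns
    DECLARE @schema NVARCHAR(MAX);
    EXEC dbo.sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
    SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog');
    EXEC (@schema);

    -- Then, e.g. from an Agent job every few minutes, append a snapshot of current activity
    EXEC dbo.sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveLog';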

    Jeff, great information about all of the indexes. I never knew there were so many gotchas regarding rebuilding indexes, especially with online index rebuilds. Great information for the future!

    Chris, love that trace query. It actually helped me understand that it was something within this application that was causing the issue and not something within SQL Server itself. Really appreciate it.

    HanShi, thanks for this information. It helped reaffirm that what I was seeing was truly some odd behavior, or at least not typical for a database that has had a full load, where log backups are taken regularly, and whose log file isn't being shrunk. So again, thank you.

    The above results show that the log file of database [mydatabase] is growing roughly every other day. This is not normal behavior unless the database is new and has not had a full load yet. In a normal situation the log file will grow to a certain size (over a few days or maybe a month) and stay at that size. Each time a log backup is taken, all logging for finished transactions is flushed from the file and that space will be re-used by future transactions.
    So can you make sure:
    * are log backups taken on this database?
    * when was the last log backup taken?
    * is the log file being shrunk periodically? --> if so: turn it off!!
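    A minimal sketch of how those first two checks can be run against msdb (the database name below is just a placeholder):

    -- Most recent log backups recorded in msdb for one database
    SELECT TOP (10)
           bs.database_name,
           bs.type,                -- 'L' = log, 'D' = full, 'I' = differential
           bs.backup_finish_date
    FROM   msdb.dbo.backupset AS bs
    WHERE  bs.database_name = N'mydatabase'
      AND  bs.type = 'L'
    ORDER BY bs.backup_finish_date DESC;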

    Lynn, thank you for these points:

    Two things. One, it appears that your log growth is set to a percentage rather than a fixed size. Second, you may want to look at how many virtual log files (VLFs) you have. I don't have the link handy for this second one, but I am sure someone out there will provide it.

    Are you suggesting that log growth shouldn't be set to a percentage? Regarding the vlfs, I ran the query from the link Chris posted, and it shows 67 for the database in question. However, this is after the problems have been corrected, so I'm not sure if that's a "normal" number or not. I'm not sure what it would have been had I run it yesterday.
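    For anyone checking the same thing, here's a quick way to count VLFs; sys.dm_db_log_info needs SQL Server 2016 SP2 or later (on older versions DBCC LOGINFO returns one row per VLF instead), and the database name is a placeholder:

    SELECT COUNT(*) AS vlf_count
    FROM   sys.dm_db_log_info(DB_ID(N'mydatabase'));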

    Jason, thanks for this suggestion and interesting approach! I'm going to review it in a little more detail to see if that makes sense for us.

    First, make sure your transaction log is reasonably sized; unless it's filling your disks, you don't want to be constantly resizing it down. A good starting value is ~25% of the data size if it's a large database, and make sure you are growing in MB, not %.

    To resolve this problem you have a couple options:

    #1 Add a secondary schedule to your log backup job that runs every ~10 minutes and disable it. Next, add two steps to your index maintenance job: one right before Ola's step that enables the 10-minute log backup schedule, and one right after Ola's step that disables it (call sp_update_schedule and set @enabled to 0 to disable and 1 to enable). This will cause you to back up logs every 10 minutes during index maintenance and every hour when you aren't doing index maintenance, which will help keep your log from filling up and having to grow.

    #2 To decrease transaction log usage, you could switch to BULK_LOGGED during index maintenance. It's a similar approach to #1 above: add a step right before Ola's step that sets recovery to BULK_LOGGED and a step right after that sets it back to FULL. As soon as you go back to FULL you want to run a log backup, which you can do by adding another step that calls sp_start_job to start your transaction log backup job.

    You technically can do both #1 and #2 if you need to; I'd start with #1. A rough sketch of the job steps for both options follows.
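    Here is a rough T-SQL sketch of those job steps; the schedule name, job name, and database name below are placeholders, not anything specific to this thread:

    -- Option #1: step added just before Ola's index maintenance step
    EXEC msdb.dbo.sp_update_schedule @name = N'LogBackup - every 10 minutes', @enabled = 1;
    -- ...and the matching step added just after it
    EXEC msdb.dbo.sp_update_schedule @name = N'LogBackup - every 10 minutes', @enabled = 0;

    -- Option #2: step before index maintenance switches the recovery model
    ALTER DATABASE [mydatabase] SET RECOVERY BULK_LOGGED;
    -- step after index maintenance goes back to FULL and immediately starts a log backup
    ALTER DATABASE [mydatabase] SET RECOVERY FULL;
    EXEC msdb.dbo.sp_start_job @job_name = N'DatabaseBackup - USER_DATABASES - LOG';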

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Yes, I am suggesting that your log growth (or database growth for that matter) should NOT be set to a percentage.  If it is set to a percentage, each time it grows the size of the growth increases.  With the log file this can be problematic as the additional space in the log file has to be initialized before it can be used.
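    For example, switching the log file to a fixed growth increment could look something like this (the logical file name and increment are placeholders; pick a size that suits the database):

    ALTER DATABASE [mydatabase]
    MODIFY FILE (NAME = N'mydatabase_log', FILEGROWTH = 512MB);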

  • Mike Scalise - Wednesday, February 21, 2018 9:01 AM

    Jeff, great information about all of the indexes. I never knew there were so many gotchas regarding rebuilding indexes, especially with online index rebuilds. Great information for the future!

    To be honest, rebuilding or reorganizing indexes is generally a total waste of time and a really expensive way to rebuild stats.  Yeah, I know... now you think I'm totally whacked, but I've not done general maintenance of indexes on any of my production boxes since the 17th of January, 2016 (more than 2 years ago).  Performance got better during the first 3 months and hasn't decreased since then.  I do some occasional and very surgical maintenance, but I didn't even do that for over 2 years.

    If you run the following query for your top 4 (or more) largest tables (by size, not by rowcount) and provide the results in a spreadsheet, I'll be happy to give you a few more ideas that will 1) prevent log file blowout and 2) prevent MDF file blowout.


    SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('tablenamehere1'), NULL, NULL, 'SAMPLED') UNION ALL
    SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('tablenamehere2'), NULL, NULL, 'SAMPLED') UNION ALL
    SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('tablenamehere3'), NULL, NULL, 'SAMPLED') UNION ALL
    SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('tablenamehere4'), NULL, NULL, 'SAMPLED');

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Back to the original question as to what's going on... good luck:

    http://searchsqlserver.techtarget.com/tip/Determining-the-source-of-full-transaction-logs-in-SQL-Server

  • Thanks, goher2000!

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Jeff Moden - Wednesday, February 21, 2018 12:44 PM


    Jeff,

    That's super nice of you. I may actually take you up on this offer in the near future. Again, thank you.

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com
