VLFs in Log Files

  • Tom.Thomson (1/7/2011)


    Nice question, but: why would anyone create a 50GB log file in one fell swoop, rather than, say, making a log file of 7142 MB and then extending it by that amount 6 times?

    That's the very point that makes this a good QOTD. Many people are not the genii we obviously are 🙂 and would think they were doing the best thing by pre-allocating the log file to the size they expect to need, not realising the effect on VLFs. Now we have all been educated.

    So what's the best size for autogrowth on a reasonably sized log - 1GB, to avoid autogrowth using too many resources and to keep the VLF size reasonable at 128MB?

    And before anyone starts on about avoiding unplanned growth: you should still have a fallback in place, just in case!
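
    For what it's worth, this is the sort of fallback setting I mean; a rough sketch only, with made-up database and logical file names:

    -- set a fixed 1GB autogrowth increment on the log file (fallback only;
    -- growth should normally be done manually during quiet periods)
    ALTER DATABASE MyBigDB
        MODIFY FILE (NAME = MyBigDB_log, FILEGROWTH = 1GB);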

    ---------------------------------------------------------------------

  • I had to go back to Kimberly Tripp's excellent article for the answer to this. Well worth a read for those who haven't encountered it.

    http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx

  • Mrs. Tripp recommends 512MB segments, so growth around 8GB at a time.

    I would guess that depending on your volume, you might get better/worse performance with that size. You'd have to test.

  • Steve Jones - SSC Editor (1/7/2011)


    Mrs. Tripp recommends 512MB segments, so growth around 8GB at a time.

    I would guess that depending on your volume, you might get better/worse performance with that size. You'd have to test.

    For a planned growth it would appear 8GB is a good figure, but is that best for autogrowth, which might happen when the server is busy? I am sure I would get SQL timeout error messages using that size, along with a recommendation to reduce the file growth setting.

    ---------------------------------------------------------------------

  • cengland0 (1/7/2011)


    Tom.Thomson (1/7/2011)


    Nice question, but: why would anyone create a 50GB log file in one fell swoop, rather than, say, making a log file of 7142 MB and then extending it by that amount 6 times?

    I would do it. I have databases that get tables updated daily and are around 115 GB. I have to back up the database and shrink the logs several times a day or they get out of hand.

    The log files get very large, so why not start them out large so the space is allocated at the beginning without having to autogrow later?

    Read Kimberly Tripp's blog posting:

    http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx

    and the links it contains, and I believe you might learn why you

    "have to back up the database and shrink the logs several times a day or they get out of hand,"

    as she discusses that very same problem and recommends a solution for it.

    If everything seems to be going well, you have obviously overlooked something.

    Ron

    Please help us, help you - before posting a question please read
    Before posting a performance problem please read

  • I am disappointed that the correct answer is not "It Depends." 😉

    True, the answer is correct if I create a 50GB transaction log file in a single growth (or as a single chunk). However, if I am following best practices I already know not to do that and would have more VLFs due to multiple growths.

    Furthermore, if one did not follow best practices and then has to defragment the log file, one will find that shrinking it as small as possible and then regrowing it out to 50GB will also change the number of VLFs in the log file.
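
    A rough sketch of that shrink-and-regrow sequence, with placeholder names and sizes, to be run in a maintenance window:

    USE MyDB;
    -- shrink the log as small as it will currently go
    DBCC SHRINKFILE (MyDB_log, 1);
    -- then regrow it in controlled chunks rather than one 50GB jump
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16GB);
    -- ...repeat in steps until the 50GB target is reached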

    Otherwise, interpreting the question as a single-chunk creation of a 50GB log file, I think this was a simple, straightforward question.

    Thanks Steve

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • george sibbald (1/7/2011)


    Steve Jones - SSC Editor (1/7/2011)


    Mrs. Tripp recommends 512MB segments, so growth around 8GB at a time.

    I would guess that depending on your volume, you might get better/worse performance with that size. You'd have to test.

    For a planned growth it would appear 8GB is a good figure, but is that best for autogrowth, which might happen when the server is busy? I am sure I would get SQL timeout error messages using that size, along with a recommendation to reduce the file growth setting.

    No, I would have alerts set up so that autogrowth only ever happens in an emergency. You want to grow this manually, at a time when the server is not busy. However, on a busy server you might also want smaller growth increments, say 1GB, to prevent a runaway process from just eating all the disk space and to give yourself time to respond.

  • bitbucket-25253 (1/7/2011)


    cengland0 (1/7/2011)


    Tom.Thomson (1/7/2011)


    Nice question, but: why would anyone create a 50GB log file in one fell swoop, rather than, say, making a log file of 7142 MB and then extending it by that amount 6 times?

    I would do it. I have databases that get tables updated daily and are around 115 GB. I have to back up the database and shrink the logs several times a day or they get out of hand.

    The log files get very large, so why not start them out large so the space is allocated at the beginning without having to autogrow later?

    Read Kimberly Tripp's blog posting:

    http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx

    and the links it contains, and I believe you might learn why you

    "have to back up the database and shrink the logs several times a day or they get out of hand,"

    as she discusses that very same problem and recommends a solution for it.

    Creating a 64GB log file in one go will create VLFs of 4GB each. This is bad for the reasons outlined by Kimberly in the article Steve linked to in the answer to the QOTD. Kimberly recommended creating a 64GB log file in 8GB chunks, restricting the 16 VLFs created per 8GB chunk to units of 512MB in size.
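
    In script form that chunked approach would look roughly like this (a sketch only; the database and logical log file name are invented):

    -- size the log to 8GB first, then extend it in 8GB steps up to 64GB;
    -- each 8GB step creates 16 VLFs of 512MB
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16384MB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 24576MB);
    -- ...and so on, finishing at SIZE = 65536MB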

  • cengland0 (1/7/2011)


    I have databases that get tables updated daily and are around 115 GB. I have to back up the database and shrink the logs several times a day or they get out of hand.

    The log files get very large, so why not start them out large so the space is allocated at the beginning without having to autogrow later?

    Have you ever thought that the two of these are related? When a VLF is too large, it is harder for SQL Server to ever get that VLF completely empty after a t-log backup, so that it can be reused without having to grow the log file.

    If you have a properly sized t-log, and are taking regular t-log backups, you should rarely, if ever, have to worry about shrinking it or having it grow. If you have to manually shrink the t-log several times a day, you have something seriously wrong - this is NOT normal.
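
    A minimal example of that kind of regular t-log backup (database name and path are placeholders; in practice this would be a scheduled job):

    -- back up the transaction log so inactive VLFs can be marked reusable
    BACKUP LOG MyDB
        TO DISK = N'X:\SQLBackups\MyDB_log.trn';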

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • WayneS (1/7/2011)


    Have you ever thought that the two of these are related? When a VLF is too large, it is harder for SQL Server to ever get that VLF completely empty after a t-log backup, so that it can be reused without having to grow the log file.

    If you have a properly sized t-log, and are taking regular t-log backups, you should rarely, if ever, have to worry about shrinking it or having it grow. If you have to manually shrink the t-log several times a day, you have something seriously wrong - this is NOT normal.

    I'm going to have to read that article a third time then. I don't really manually shrink the logs, but have an SSIS package that does it. The SSIS package imports a lot of data and, throughout the process, after several of the large tables are loaded, it does that transaction log backup and shrink.

    So are you saying that if the log file was created properly, I would only have to back up the transaction log and not shrink it? I am not sure how to test this in a safe way, because this is a production server and I'd be afraid to turn off that shrink, run out of HD space, and then have an issue with not having enough space to back up the log either. That has happened before, so I'm very careful about what I do with these large log files now.

    Both the production and backup servers use very expensive SAN drives and we cannot get any more space allocated.

    If the log needs the space, you're going to run out of space either way. A log backup allows the log space to be reused and overwritten.

    Feeling like the log space is wasted during the week is a little silly. Are you using that space for something else during the times when the SSIS package doesn't run? Or do you just think it's cleaner? Hint: it's not.

    You might read this: http://www.sqlservercentral.com/articles/64582/

    An understanding of how the log works is important so that you administer the server properly. If you have more questions, please start a new thread in the admin forum for your version.
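
    It can also help to check what, if anything, is currently preventing log reuse; something along these lines works on 2005 and later:

    -- recovery model and the current reason each log cannot be truncated
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;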

  • cengland0 (1/7/2011)


    So are you saying that if the log file was created properly, I would only have to back up the transaction log and not shrink it? I am not sure how to test this in a safe way, because this is a production server and I'd be afraid to turn off that shrink, run out of HD space, and then have an issue with not having enough space to back up the log either. That has happened before, so I'm very careful about what I do with these large log files now.

    Both the production and backup servers use very expensive SAN drives and we cannot get any more space allocated.

    Exactly. Doing a t-log backup will clear the VLFs (if possible) so that they can be reused. The t-log grows when it runs out of reusable VLFs. You should be able to run with a relatively static t-log file.

    It won't hurt to run the DBCC command in Kimberly's blog to see how many VLFs you have, etc.
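
    If memory serves, the command in her post is DBCC LOGINFO; it returns one row per VLF, so the row count is your VLF count:

    USE MyDB;       -- your database name here
    DBCC LOGINFO;   -- one row per VLF in the current database's log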

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • WayneS (1/7/2011)


    Exactly. Doing a t-log backup will clear the VLFs (if possible) so that they can be reused. The t-log grows when it runs out of reusable VLFs. You should be able to run with a relatively static t-log file.

    It won't hurt to run the DBCC command in Kimberly's blog to see how many VLFs you have, etc.

    And how would you test a major change like this on a production server? Our failover server is the one we back up the production server to, and we do not have a third server to back up the failover to for testing.

    We do have several other servers lying around with smaller amounts of data, but we don't have to shrink those logs because we don't dump as much data into those tables. This is the only server we have with this problem.

  • Good question and a good topic.

  • cengland0 (1/7/2011)


    WayneS (1/7/2011)


    So are you saying that if the log file was created properly, I would only have to back up the transaction log and not shrink it? I am not sure how to test this in a safe way, because this is a production server and I'd be afraid to turn off that shrink, run out of HD space, and then have an issue with not having enough space to back up the log either. That has happened before, so I'm very careful about what I do with these large log files now.

    Both the production and backup servers use very expensive SAN drives and we cannot get any more space allocated.

    That's exactly what Wayne is saying.

    When a transaction log backup runs (mine run every 15 minutes), committed VLFs that are no longer in use are dumped to disk in the .trn backup file, and those VLFs are then cleared out for reuse. My actual .mdf database files will grow (preferably planned: I monitor usage percentages and resize them in off hours, doing the best I can to minimize file fragmentation), but my .ldf files won't grow, or will only grow when the volume of transactions has increased to the point where the .ldf files I currently have can't handle 15 minutes of database activity.
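
    For the usage-percentage monitoring I mentioned, one simple way to get those numbers is this (it reports log size and percent of log space used for every database):

    DBCC SQLPERF(LOGSPACE);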



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
    For help with Performance click this link
    For tips on how to post your problems
