Database file size question

  • Hi all,

    I am working with a database from a third-party vendor that seems a bit odd to me. I just wanted to get your opinion so that I know I'm not going crazy.

    Just some facts about the database:

    • SQL Server 2016 compatibility level
    • Total size of the entire database is about 100GB (10GB data, 40GB indexes, 50GB log file)
    • All three files set to 10% autogrowth / unlimited.
    • T-log backups run every 30 minutes, with the vast majority 2-10MB in size. However, at two points during the day, they are 400-800MB in size (see the backup-history query sketched just after this list).
    • The maintenance plan on the server is solid: backups (full, diff, t-log) and index maintenance are in place, and backups are restored daily to ensure they're sound.
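
    For anyone curious, here's roughly how I'm watching those log backup sizes (a minimal sketch against msdb's backup history; 'YourDb' is a placeholder for the actual database name):

        -- Transaction log backup sizes for the last 7 days, newest first
        SELECT  backup_start_date,
                backup_size / 1048576.0 AS backup_size_mb
        FROM    msdb.dbo.backupset
        WHERE   database_name = 'YourDb'        -- placeholder name
          AND   type = 'L'                      -- 'L' = transaction log backup
          AND   backup_start_date >= DATEADD(DAY, -7, GETDATE())
        ORDER BY backup_start_date DESC;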

    Given the information above, I'm inclined to think the following:

    1a. The default autogrowth settings were accepted in SQL Server 2014 (the first version of SQL Server to host this database) and then carried through to 2016.

    1b. The autogrowth settings on all of these files should be changed to an absolute number that reflects how frequently and by how much the database grows.

    2a. The log file is bigger than the data and index files, which seems odd to me. I think that, before proper log backups were taken, the file grew and grew, and it now stays at that large size.

    2b. Using the amount of space needed to rebuild my largest table as a rough estimate for the appropriate size of the transaction log (the sizing query I used is sketched below), it looks like it should be about 2GB. So, would you shrink it if you were me (I know, not generally recommended, but just to get it where it should be)?
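
    In case it helps, this is roughly how I got that estimate (a sketch using sys.dm_db_partition_stats, run in the database itself; used pages are 8KB each, and a rebuild logs roughly the size of the object being rebuilt):

        -- Used space per table (data + indexes), largest first
        SELECT TOP (10)
                OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
                OBJECT_NAME(ps.object_id)        AS table_name,
                SUM(ps.used_page_count) * 8 / 1024.0 AS used_mb
        FROM    sys.dm_db_partition_stats AS ps
        GROUP BY ps.object_id
        ORDER BY used_mb DESC;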

    Do you agree with my analysis? Are there things I'm missing (or on the flip side, things that I'm concerned about but shouldn't be)?

    Honestly, there's not a problem per se, and I feel good about how everything is currently running, but I'm wondering if I should revisit and correct some not-so-ideal settings from a while back.

    What do you think?

    Thanks,

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • IMHO, your analysis is spot on, Mike.  You only need to decide on what the growth of each file should be in MB.

    Since the log file grew by 10% each time, it took 73 growths to go from 1MB to 1GB. I strongly recommend that when you shrink the log file, you shrink it to 0 and then regrow it. This will cut down on the fragmented growth it went through and the related VLFs, and it will make restores of the database a lot faster, as well.
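
    In T-SQL, that looks something like the following (a sketch; 'YourDb' and 'YourDb_log' are placeholders, so check sys.database_files for your actual logical file names first, and regrow to the ~2GB you estimated):

        USE YourDb;

        -- Confirm the logical file names and current sizes (size is in 8KB pages)
        SELECT name, type_desc, size * 8 / 1024 AS size_mb
        FROM   sys.database_files;

        -- Shrink the log as far as it will go, then regrow it in one step
        DBCC SHRINKFILE (YourDb_log, 0);
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, SIZE = 2048MB);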

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks, Jeff!

    I completely understand your rationale for shrinking the log file to 0 for the reasons you mentioned, but let me ask: in your experience, how resource-intensive is the growth? It'll obviously be pretty frequent initially, and I don't want users to be significantly impacted if I can help it.

    Thanks again!

    Mike Scalise, PMP
    https://www.michaelscalise.com

    Jeff is correct, as are you, but avoid 10% growth at all costs; it gets painful when you're in petabyte territory.

    MVDBA

  • Ouch! Ok, sounds good. Thank you!

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Mike Scalise wrote:

    Thanks, Jeff!

    I completely understand your rationale for shrinking the log file to 0 for the reasons you mentioned, but let me ask: in your experience, how resource-intensive is the growth? It'll obviously be pretty frequent initially, and I don't want users to be significantly impacted if I can help it.

    Thanks again!

    I think you missed the part where I said "shrink it to 0 and then regrow it". You want to shrink it to zero to clear out the tons of VLFs and "fragmentation" that the more than 73 growths by percentage have caused, and then regrow to your proposed size of 2GB. That shouldn't take very long.

    Also, MVDBA emphasized what I meant when I said "You only need to decide on what the growth of each file should be in MB." You must change the growth rates to MB and NOT percentage. For the size of your database and your largest table, I agree that the log file should probably start at the 2GB you said and then use a 1 or 2GB (2,000MB) growth increment after that. I'd avoid 4GB growth because Microsoft has had some pretty nasty problems with that in the past. They claim they've fixed the problem, but I don't trust so easily. For the growth on the MDF file, 1 or 2GB growth seems fine. I would also make sure "Instant File Initialization" has been enabled unless you have some serious security reasons not to. Most don't.
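
    Expressed in T-SQL, changing the growth increments and checking IFI looks roughly like this (a sketch with the same placeholder names; the instant_file_initialization_enabled column requires SQL Server 2016 SP1 or later):

        -- Fixed autogrowth in MB instead of a percentage (placeholder file names)
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_data, FILEGROWTH = 1024MB);
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log,  FILEGROWTH = 1024MB);

        -- Check whether Instant File Initialization is enabled for the service account
        SELECT servicename, instant_file_initialization_enabled
        FROM   sys.dm_server_services;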


    --Jeff Moden



    I'm with Jeff on your analysis, but not on his suggestion for tackling the transaction log (not completely, at least!)

    I'd still go with shrinking the size to 0 (not that you'll get it that small) and setting the growth to an absolute value, but I'd skip letting it grow initially and just set the new size to something reasonable-sounding, say ~2-4GB.

    Where I'm in disagreement with Jeff: MS changed the algorithm for how many VLFs get created based on the size and growth rate, as documented here: https://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/

    So, based on that, I'd maybe start with a 1GB log, then manually grow it a couple of times up to 4GB (or 2GB) and see how things go.
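
    Roughly like this (a sketch with the same placeholder names; DBCC LOGINFO returns one row per VLF, so you can check the count before and after):

        -- One row per VLF; a small number of reasonably sized VLFs is the goal
        DBCC LOGINFO;

        -- Grow the log manually in steps rather than one big jump
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, SIZE = 1024MB);
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, SIZE = 2048MB);
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, SIZE = 4096MB);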

    • This reply was modified 4 years, 11 months ago by jasona.work. Reason: Saw Jeff's second post, now I'm with him
  • jasona.work wrote:

    I'm with Jeff on your analysis, but not on his suggestion for tackling the transaction log (not completely, at least!)

    I'd still go with shrinking the size to 0 (not that you'll get it that small) and setting the growth to an absolute value, but I'd skip letting it grow initially and just set the new size to something reasonable-sounding, say ~2-4GB.

    Where I'm in disagreement with Jeff: MS changed the algorithm for how many VLFs get created based on the size and growth rate, as documented here: https://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/

    So, based on that, I'd maybe start with a 1GB log, then manually grow it a couple of times up to 4GB (or 2GB) and see how things go.

    To be clear, please reread. I've NOT suggested letting it automatically grow from zero anywhere in my recommendations. I explicitly stated that for this database, I'd regrow it to 2GB and then see what happens with growth. I expect it not to grow much from there at all, if any. 😉 If it does grow a lot, resetting to zero and growing to the needed size in no more than 8GB steps would be the "rule of thumb" method. If it grows back to 50GB, then someone needs to find the junk code doing that and fix it.

    --Jeff Moden



  • Jeff Moden wrote:

    To be clear, please reread. I've NOT suggested letting it automatically grow from zero anywhere in my recommendations. I explicitly stated that for this database, I'd regrow it to 2GB and then see what happens with growth. I expect it not to grow much from there at all, if any. 😉 If it does grow a lot, resetting to zero and growing to the needed size in no more than 8GB steps would be the "rule of thumb" method. If it grows back to 50GB, then someone needs to find the junk code doing that and fix it.

    I think I was posting while you were posting and didn't see your second reply!

    I take back my disagreeing with Jeff!  😉

  • Heh... like I said, I even said to immediately regrow it to 2GB in my first post on the subject. 😀


    --Jeff Moden



  • Jeff Moden wrote:

    Mike Scalise wrote:

    Thanks, Jeff!

    I completely understand your rationale for shrinking the log file to 0 for the reasons you mentioned, but let me ask: in your experience, how resource-intensive is the growth? It'll obviously be pretty frequent initially, and I don't want users to be significantly impacted if I can help it.

    Thanks again!

    I think you missed the part where I said "shrink it to 0 and then regrow it". You want to shrink it to zero to clear out the tons of VLFs and "fragmentation" that the more than 73 growths by percentage have caused, and then regrow to your proposed size of 2GB. That shouldn't take very long.

    Also, MVDBA emphasized what I meant when I said "You only need to decide on what the growth of each file should be in MB." You must change the growth rates to MB and NOT percentage. For the size of your database and your largest table, I agree that the log file should probably start at the 2GB you said and then use a 1 or 2GB (2,000MB) growth increment after that. I'd avoid 4GB growth because Microsoft has had some pretty nasty problems with that in the past. They claim they've fixed the problem, but I don't trust so easily. For the growth on the MDF file, 1 or 2GB growth seems fine. I would also make sure "Instant File Initialization" has been enabled unless you have some serious security reasons not to. Most don't.


    Jeff, I did miss that second part, so my apologies. So you're suggesting shrinking it to 0 to clear up any fragmentation issues, then resizing it to the 2GB I was planning on, and then seeing how things look?

    Also, regarding autogrowth (not initial size), did you have a recommendation there for the three files? I am thinking 256MB for the primary and secondary data files, and 1GB for the transaction log, so that we get away from percentages (especially at this size of 50+GB)...

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Mike Scalise wrote:

    Jeff, I did miss that second part, so my apologies. So you're suggesting shrinking it to 0 to clear up any fragmentation issues, then resizing it to the 2GB I was planning on, and then seeing how things look?

    Also, regarding autogrowth (not initial size), did you have a recommendation there for the three files? I am thinking 256MB for the primary and secondary data files, and 1GB for the transaction log, so that we get away from percentages (especially at this size of 50+GB)...

    Mike

    I think that's what he's suggesting, yes.

    As for the growth rates, I might go with 1GB across the board. Some of it would depend on your storage and server settings (do you have Instant File Initialization enabled?). IFI will help with the data files but not the log, so if you have it enabled, a bigger growth rate for the data files has less of an impact.

  • jasona.work wrote:

    Mike Scalise wrote:

    Jeff, I did miss that second part, so my apologies. So you're suggesting shrinking it to 0 to clear up any fragmentation issues, then resizing it to the 2GB I was planning on, and then seeing how things look?

    Also, regarding autogrowth (not initial size), did you have a recommendation there for the three files? I am thinking 256MB for the primary and secondary data files, and 1GB for the transaction log, so that we get away from percentages (especially at this size of 50+GB)...

    Mike

    I think that's what he's suggesting, yes.

    As for the growth rates, I might go with 1GB across the board. Some of it would depend on your storage and server settings (do you have Instant File Initialization enabled?). IFI will help with the data files but not the log, so if you have it enabled, a bigger growth rate for the data files has less of an impact.

    Correct on what I was suggesting, and yes on the 1GB growth across the board. At least until we know what the log file is going to do. Might have to go through one more resizing drill there.


    --Jeff Moden



    So, to address the IFI question: yes, I do have IFI enabled.

    The 1GB autogrowth on the data file doesn't concern me as much, since IFI should make it a fairly quick operation that doesn't take a ton of time/resources. However, since log files don't leverage IFI, I'm not sure how that'll go. Then again, given my current t-log activity, perhaps it won't even go beyond the 2GB size...at least unless there are other issues causing bloated logs...if I'm understanding/thinking about it the right way.
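
    One thing I'll probably do is keep an eye on autogrow events so I know if/when the log actually grows past 2GB. A minimal sketch using the default trace (event classes 92 and 93 are Data/Log File Auto Grow; this assumes the default trace is still enabled, which it is out of the box):

        -- Autogrow events captured by the default trace
        DECLARE @trace_path NVARCHAR(260);

        SELECT @trace_path = path
        FROM   sys.traces
        WHERE  is_default = 1;

        SELECT t.DatabaseName,
               t.FileName,
               t.StartTime,
               t.Duration / 1000 AS duration_ms,   -- trace Duration is in microseconds
               CASE t.EventClass WHEN 92 THEN 'Data File Auto Grow'
                                 WHEN 93 THEN 'Log File Auto Grow' END AS event_name
        FROM   sys.fn_trace_gettable(@trace_path, DEFAULT) AS t
        WHERE  t.EventClass IN (92, 93)
        ORDER BY t.StartTime DESC;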

    Also, thanks again to each of you for your input as I work through this. I appreciate it.

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Mike Scalise wrote:

    Then again, given my current t-log activity, perhaps it won't even go beyond the 2GB size...

    That, good Sir, is precisely what I'm thinking.  If it does grow, 1GB growth spurts aren't going to be real obnoxious.

    In a month, you'll know what size it really needs to be and you might want to do the "shrink to zero and regrow to the right size" one more time (possibly in rapid successive increments) to get the VLFs just right, especially for the effect they have on restores for DR.

    --Jeff Moden


