Estimating LDF file size

  • Folks, I'm in a bit of confusion here. We have about 30 databases in the FULL recovery model in our PROD & STG environments, with a FULL backup taken every night; just before it, a shrink job runs to shrink the data and log files. There are no transaction log backups. This setup was put in place long ago, and I've been asked to revisit and improve it.

    My manager has asked me to compare the size of the ldf file before and after the nightly backup and, if I see the ldf file growing from 10 MB up to around 150 MB, to increase the initial size of the ldf file to 150 MB instead. He also asked me to review whether the job that shrinks the log and data files is still needed afterwards.

    Is it a good idea to set the size of the ldf file to the maximum it can reach? What are the pros and cons of doing this? Does this have something to do with fragmentation? Also, is shrinking the data and log files every day before the full backup a good idea?
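
    For reference, the kind of resize my manager is suggesting would be something like the following (this is only a sketch: the database and logical log file names are placeholders, and the real logical name should be looked up in sys.database_files first):

        -- Placeholder names: [MyDatabase] / MyDatabase_log.
        -- Find the real logical log file name and current size first.
        SELECT name, type_desc, size / 128 AS size_mb
        FROM MyDatabase.sys.database_files;

        -- Pre-size the log to the observed peak (150 MB) with a fixed growth step,
        -- so it doesn't have to auto-grow in small increments every night.
        ALTER DATABASE [MyDatabase]
        MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 150MB, FILEGROWTH = 64MB);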

  • I executed DBCC LOGINFO(@dbname) on all 30 databases and I see lots of VLFs with Status = 2 (in use); a sketch of scripting that check across all the databases is further down this post, after the questions. Here are the questions that I have:

    1) Out of the 30 databases, 6 are involved in data warehousing. Nightly data load jobs run after the full backup, and they are currently set to the FULL recovery model. What is the advised recovery model for DW databases? I was thinking of setting them to SIMPLE, since they have no user transactions and no direct user interaction apart from the SSIS packages that run nightly with huge data dumps. But if some failure happens in the middle of the DW job and I need to recover, what would the impact be then?

    2) For the transactional databases, I've thought of increasing the ldf file size based on the analysis of ldf growth that I've been doing for the past 3 days. If I set the ldf file large enough up front, it would not create many VLFs, which I understand from reading some sites is not good, and instead of shrinking the databases nightly I would run transaction log backups at a frequent interval. The other question I have is: if I've already enabled transaction log backups every hour, maybe I don't have to increase the ldf file size, since the ldf file will not grow as much?
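
    Here is a rough sketch of how that check can be scripted across all the databases (not exactly what I ran; it assumes the 7-column DBCC LOGINFO output of SQL Server 2005/2008, and SQL Server 2012 adds a RecoveryUnitId column, so the temp table would need adjusting):

        -- Count VLFs (and how many are in use) for every online database.
        -- Temp table matches the 7-column DBCC LOGINFO output of SQL Server 2005/2008.
        CREATE TABLE #loginfo
        (
            FileId      int,
            FileSize    bigint,
            StartOffset bigint,
            FSeqNo      int,
            Status      int,
            Parity      tinyint,
            CreateLSN   numeric(38, 0)
        );

        CREATE TABLE #vlfcount (DatabaseName sysname, VlfCount int, VlfInUse int);

        DECLARE @db sysname;
        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            INSERT INTO #loginfo
            EXEC ('DBCC LOGINFO (''' + @db + ''')');

            INSERT INTO #vlfcount (DatabaseName, VlfCount, VlfInUse)
            SELECT @db, COUNT(*), SUM(CASE WHEN Status = 2 THEN 1 ELSE 0 END)
            FROM #loginfo;

            TRUNCATE TABLE #loginfo;
            FETCH NEXT FROM dbs INTO @db;
        END

        CLOSE dbs;
        DEALLOCATE dbs;

        SELECT DatabaseName, VlfCount, VlfInUse
        FROM #vlfcount
        ORDER BY VlfCount DESC;

        DROP TABLE #loginfo;
        DROP TABLE #vlfcount;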

    Kindly advise

  • For databases in full or bulk-logged mode, you need to run log backups. That is the only way to back up the log records and reuse the space.

    The log size you need is the log size required for the amount of transactions between log backups. The only way to determine this is to set up a schedule of log backups, find the largest one, and size the log to that. The total amount of log backed up per day is the same regardless of schedule, but more frequent log backups mean that you can potentially have a smaller log file. Test and set this, with some padding. You should not be shrinking log files regularly, only when you have an event that causes unusual growth.
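
    Once a log backup schedule has been running for a while, one way to find that largest log backup is to query the backup history in msdb; a rough sketch (sizes converted to MB):

        -- Largest and average transaction log backup per database over the last 14 days.
        SELECT  bs.database_name,
                COUNT(*)                          AS log_backups,
                MAX(bs.backup_size) / 1048576.0   AS max_log_backup_mb,
                AVG(bs.backup_size) / 1048576.0   AS avg_log_backup_mb
        FROM    msdb.dbo.backupset AS bs
        WHERE   bs.type = 'L'   -- 'L' = transaction log backup
          AND   bs.backup_start_date >= DATEADD(DAY, -14, GETDATE())
        GROUP BY bs.database_name
        ORDER BY max_log_backup_mb DESC;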

    For data files, do not shrink them. There should be free space in the files to allow for normal data growth. I typically set them to enough space for 3-4 months, and check on space every month. Shrinking causes fragmentation. It shouldn't be done unless, again, there is a rare event (like deleting lots of data).
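
    A quick way to see how much headroom the data files in a given database currently have (run in that database; just a sketch):

        -- Allocated size vs. space actually used, per data file in the current database.
        SELECT  name                                           AS logical_file_name,
                size / 128                                     AS allocated_mb,
                FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
                (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
        FROM    sys.database_files
        WHERE   type_desc = 'ROWS';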

    If your DW database, or any database, does not need point-in-time recovery, then simple mode is fine. No need for log backups then, but set the log to the largest size you need, based on a single large transaction (often from the loads).
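
    Switching a database over is a one-line change (the database name below is a placeholder; check the current setting first):

        -- Check current recovery models.
        SELECT name, recovery_model_desc FROM sys.databases;

        -- Switch a truncate-and-reload warehouse database to SIMPLE (placeholder name).
        ALTER DATABASE [DWDatabase] SET RECOVERY SIMPLE;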

    In terms of the loads, if you have an issue, you recover to the full backup, then re-run the load. You could potentially recover to the middle of a load in full or bulk mode, but can your packages restart the load in the middle? If not, then this might not be worth worrying about.

  • Steve Jones - SSC Editor (8/10/2011)


    For databases in full or bulk-logged mode, you need to run log backups. That is the only way to back up the log records and reuse the space.

    The log size you need is the log size required for the amount of transactions between log backups. The only way to determine this is to set up a schedule of log backups, find the largest one, and size the log to that. The total amount of log backed up per day is the same regardless of schedule, but more frequent log backups mean that you can potentially have a smaller log file. Test and set this, with some padding. You should not be shrinking log files regularly, only when you have an event that causes unusual growth.

    For data files, do not shrink them. There should be free space in the files to allow for normal data growth. I typically set them to enough space for 3-4 months, and check on space every month. Shrinking causes fragmentation. It shouldn't be done unless, again, there is a rare event (like deleting lots of data).

    If your DW database, or any database, does not need point-in-time recovery, then simple mode is fine. No need for log backups then, but set the log to the largest size you need, based on a single large transaction (often from the loads).

    In terms of the loads, if you have an issue, you recover to the full backup, then re-run the load. You could potentially recover to the middle of a load in full or bulk mode, but can your packages restart the load in the middle? If not, then this might not be worth worrying about.

    Hi Steve, thanks for your response. I checked with our DW team and, according to them, the DW databases get truncated and reloaded every day due to the business logic, and in the event of a failure all they need to do is restart the DW load jobs. So I guess SIMPLE should be good enough.

    As far as predicting the log file size goes: in Management Studio, right-click the database > Reports > Standard Reports > Disk Usage. I have been analysing the log file growth right before the full backup takes place and how it changes after the full backup. Is this the same as configuring transaction log backups and monitoring their size?

  • No.

    Please go and read the article that I linked for you over at SQL Team.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/10/2011)


    No.

    Please go and read the article that I linked for you over at SQL Team.

    I don't see the link; could you repost it, via PM I guess?

    Wow, need a break!

  • Hi Gail / Steve,

    I've started on the transaction log backups and will keep you posted if I have any more issues.
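
    Roughly, the job step will be something along these lines (the database name and backup path are placeholders):

        -- Hourly job step: back up the transaction log to a timestamped file.
        -- [MyDatabase] and D:\SQLBackups\ are placeholders.
        DECLARE @file nvarchar(260);
        SET @file = N'D:\SQLBackups\MyDatabase_log_'
                  + CONVERT(nvarchar(8), GETDATE(), 112) + N'_'             -- yyyymmdd
                  + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '')  -- hhmmss
                  + N'.trn';

        BACKUP LOG [MyDatabase]
        TO DISK = @file
        WITH INIT, CHECKSUM;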

    I wanted more clarification, as my manager has been insisting that you can also monitor the log file size without doing transaction log backups, using Management Studio: right-click the database > Reports > Standard Reports > Disk Usage.

    I wanted to confirm that. Thanks.

    Here is the article that Gail proposed reading: http://www.sqlservercentral.com/articles/64582/

  • That report is fine, but how do you intend to scale that to 30+ databases if you don't have a script to run?

    DBCC SQLPERF(LOGSPACE) will give you that. Of course, it's just a best guess, and it needs to run before every log backup to keep an "accurate" history.
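
    A sketch of capturing that output into a history table you can trend over time (the table name is a placeholder):

        -- One-time setup: a small history table (placeholder name).
        CREATE TABLE dbo.LogSpaceHistory
        (
            CaptureTime     datetime       NOT NULL DEFAULT GETDATE(),
            DatabaseName    sysname        NOT NULL,
            LogSizeMB       decimal(18, 2) NOT NULL,
            LogSpaceUsedPct decimal(5, 2)  NOT NULL,
            Status          int            NOT NULL
        );

        -- Scheduled step (e.g. just before each log backup): snapshot all databases.
        INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
        EXEC ('DBCC SQLPERF(LOGSPACE)');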

  • You can use any method; just set a peak size and leave it at that.
