Transaction log file growth estimation

  • Is there a way to estimate transaction log growth under the different recovery models? We know the daily OLTP estimates, but so far we have been unsuccessful in correlating the file-stats numbers between data and log writes to disk.

    thanks in advance.

    cheers!

  • G33kKahuna (10/25/2009)


    Is there a way to estimate transaction log growth under the different recovery models? We know the daily OLTP estimates, but so far we have been unsuccessful in correlating the file-stats numbers between data and log writes to disk.

    thanks in advance.

    cheers!

    I'm sorry, but your question really doesn't make a lot of sense. You generally don't track the usage this way for transaction logs.

    What you need to do is identify how much data loss is acceptable to the organization, then set transaction log backups for that increment. Once you have done that, the transaction log will grow as large as it needs to be to handle the number of transactions in that time frame.
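    As a sketch of what that looks like, assuming the business can tolerate at most 15 minutes of data loss (the database name and backup path are placeholders, and you would schedule this through SQL Server Agent):

        -- Back up the transaction log on the agreed increment (every 15 minutes
        -- here); each log backup marks the inactive portion of the log reusable.
        BACKUP LOG YourDatabase
            TO DISK = N'X:\LogBackups\YourDatabase_log.trn';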

    Most people monitor that usage over a full business cycle - then manually grow the file out a bit larger and leave it alone.
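    A simple way to take those observations, run on a schedule across the cycle (this is the standard DBCC report, nothing environment-specific):

        -- Report current log file size and percent used for every database;
        -- capture the output periodically to find the peak over a business cycle.
        DBCC SQLPERF(LOGSPACE);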

    If you do not need the full recovery model, then use the simple recovery model; again, the log file will grow as large as needed to handle the largest transaction.
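    For completeness, switching models is a one-line change (the database name is a placeholder):

        -- Switch to the simple recovery model; the log then clears itself at
        -- checkpoints instead of waiting on log backups.
        ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;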

    You can review the link in my signature to better understand how to manage your transaction logs.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeff,

    Thanks for your response. Actually, this question is very common when you are an ISV, and it comes with a lot of merit. As part of any ISV implementation, every customer needs to budget and plan for their DB disks, and I don't think a guidance answer will cut it. Also, if I can forecast the data file size, it's only fair to be able to do the same with the transaction log, especially if software-generated transactions are bloating the file.

    Our guidance is to assume a log the same size as the data when operating under the simple recovery model; this is based on our lab observations. The problem is that it never falls in the ballpark. For example, 5 GB of data writes to disk translated to 500 MB of log writes, whereas 2 GB of data file writes translated to 10 GB of log writes. It doesn't stay the same and varies by the day. This seems more erratic and random than logical; I cannot imagine there is no scientific estimation process.
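    For reference, here is a minimal sketch of the kind of per-file numbers we have been comparing, using the sys.dm_io_virtual_file_stats DMV (the database name is a placeholder):

        -- Cumulative bytes written per file since instance startup; sample this
        -- twice and difference the results to get writes over an interval.
        SELECT  DB_NAME(vfs.database_id)   AS database_name,
                mf.type_desc               AS file_type,   -- ROWS (data) vs LOG
                vfs.num_of_bytes_written
        FROM    sys.dm_io_virtual_file_stats(DB_ID(N'YourDatabase'), NULL) AS vfs
                JOIN sys.master_files AS mf
                  ON  mf.database_id = vfs.database_id
                  AND mf.file_id     = vfs.file_id;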

    Thanks in advance

  • I don't think trying to determine how large the transaction log is going to get will work for you. It all comes down to the pattern of the transactions being performed.

    You can have thousands of insert transactions, each one no more than 100 KB, with transaction log backups running every fifteen minutes (full recovery model). Or you can have hundreds of updates, each one averaging 10 KB, or just a couple of large inserts/updates where a million rows are inserted or updated in a single transaction.

    Without understanding how the system is actually going to be used (the number and type of transactions), there is no way to figure out how much space is going to be used. And even if you could get a solid number, what happens when the business takes off and the number of transactions triples? What happens when they expand their business and the data they decide to capture increases?

    It's not a simple matter of saying that 100 transactions per second will result in xx amount of space usage in the transaction log. It is more along the lines of the number of transactions, the type of transaction (updates generally use more log than inserts, and deletes more than updates), the size of each transaction, the recovery model, how often you back up the transaction log, etc.
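    If you want to put numbers on a given transaction type, one rough sketch is to measure a representative statement inside a throwaway transaction (YourTable is a placeholder for your own workload):

        -- Measure how many log bytes one representative transaction generates.
        BEGIN TRANSACTION;

        INSERT INTO YourTable (col1) VALUES (N'sample row');

        SELECT database_transaction_log_bytes_used
        FROM   sys.dm_tran_database_transactions
        WHERE  database_id = DB_ID();

        ROLLBACK TRANSACTION;   -- throw the test work away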

    Basically, it comes down to monitoring the size based upon the usage and requirements.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • As Jeffrey mentioned, log size is completely dependent on the level and rate of activity. If your users are running under the simple recovery model, the peak may be smaller than under the other models, but the rate of transactions between checkpoints still matters.

    In the other recovery models, the backup frequency has a large impact on how large the log needs to be.

    In any case, it's observation that allows a DBA to determine how large to make the log. Outside of that, you're making a wild guess, unless you have a similar application to observe.

    For an ISV, I would think you'd need to collect some data, from customers or in the lab, on the different ways the application is used, and then provide a few sizing scenarios.

    I might also suggest that you provide some guidelines for re-sizing the log after a week or month of activity.
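    A minimal sketch of that re-size, once observation has shown the working size (the names and the 8 GB figure are placeholders):

        -- Grow the log to its observed peak plus headroom in a single step,
        -- rather than letting autogrow fragment it into many small increments.
        ALTER DATABASE YourDatabase
            MODIFY FILE (NAME = YourDatabase_log, SIZE = 8192MB);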
