TLOG file: best percentage size of DATAFILE

  • hi all,

    i would like to know if there is an ideal percentage for sizing the tlog file relative to the total size of the data files.

    thanks

  • There is no simple answer. The right log file size depends on your application's workload - how much it reads, how much it writes, and so on.

  • SQL_Oracle is right. There is no single answer.

    The default works well, but it depends. The log grows because of changes in your data. If you have mostly static data, like a read-only DB, then a 1MB log might work. If you have a highly transactional system with lots of changes, like a warehouse site, where the number of rows might not grow much but the rows themselves change constantly, you might have a 10GB log for a 5GB data file.

    The thing I'd watch is the growth of your log file; use that for sizing if you have no other guide. The other thing that matters is how often you back up the log. If I back the log up every 15 minutes and the backup is 10MB, I might get by with a 20MB log file, which gives me some padding. If I only backed it up every hour, the backup might be closer to 40MB, so I'd need a 45-50MB log file.
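
    If you already take log backups, the backup history in msdb is a quick way to see how much log you generate between backups. A minimal sketch - 'MyDB' is just a placeholder for your database name:

     -- size of recent transaction log backups (backup_size is in bytes)
     select backup_start_date,
            backup_size / 1048576.0 as backup_size_mb
     from msdb.dbo.backupset
     where database_name = 'MyDB'
       and type = 'L'              -- 'L' = transaction log backup
     order by backup_start_date desc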

    The thing you want to prevent in SQL 2000 is file growths. They pause the database while they occur, so be sure you allow enough space in the log to cover your transactions during a busy period.

  • There truly is no 'best practice'. The only answer is #1 on the Top Ten List of answers you do not want to hear from your DBA - It depends. What I can offer are some questions, explanations and suggestions to get you started.

    • Have you modified the 'default' size parameters of your database?
      • Space Allocated
      • Auto Growth
      • Growth Percent or Mb
      • Growth restrictions
    • How big is the data portion of your database initially going to be?
    • How big is the data portion projected to grow in your database?
    When you first create a database you get the 'defaults'. If you are going to have a very small database, less than say 100 Mb, or a powerful server with little activity, you are probably OK. If not, then you may want to modify the default parameters as they relate to growth. As Steve mentions, automatic growth 'pauses' database activity until the extra space is added. This pause occurs whether the growth happens in the data portion or the transaction log portion. Aside from the fact that your users may experience a 'temporary' delay in processing or a slowdown, there is another adverse effect: over time this 'autogrowth' will fragment your disks, causing the OS to work harder and possibly slowing down the entire server as well.
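
    To see what you are starting from, a quick check of the current file settings works. A sketch - 'MyDB' is a placeholder for your database name:

     exec sp_helpdb 'MyDB'         -- overall size plus file list with size, maxsize and growth
     go

     use MyDB
     go
     exec sp_helpfile              -- per-file size, maxsize and growth for the current database
     go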

    Now, in order to get rid of the 'defaults' you have to have some idea of how large the data portion of the database is going to be initially. For this example let's say the database data will occupy 1000 Mb initially. Now that we have an initial data portion size, we have to ask what type of application processing this database is supporting. Will it be OLTP, batch, DSS (Decision Support System) or DW (Data Warehouse)? The answer determines the initial starting size of your transaction log. I say 'initial' because it is just a guesstimate, someplace to begin; actual monitoring of the database and application will let you make the final adjustments. Now back to the initial size: the transaction log's starting size is a percentage of the initial data size, and that percentage depends on the application type. So, based on 1000 Mb of initial data, your transaction log's initial size could be:

    • OLTP - 100-200 Mb
    • batch - 200-300 Mb
    • DSS - 300-500 Mb
    • DW - 1000 Mb

    Again, these are starting points, because no two databases or servers perform alike. You'll need to schedule regular transaction log backups; I'd start at every 30 minutes initially. While this is occurring you can either check the transaction log backup file size or run DBCC SQLPERF(LOGSPACE) just prior to the transaction log backup to gauge how much log is being backed up. With this information you can then adjust the size of the transaction log and the frequency of the backups. Another performance hint: you never want to repeatedly shrink your transaction log. If it grows to a certain size during the busy part of the day with frequent backups and is little used the remainder of the day, leave it alone, otherwise you will have a daily slowdown. You should only shrink it initially, to get it to a 'reasonable' size based on your observations. Remember our enemy, fragmentation?
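
    A minimal sketch of that check-then-backup step, using the My_OLTP_DB example from the script further down (the backup path and the 200 Mb shrink target are placeholders):

     -- gauge log usage just before the scheduled log backup
     dbcc sqlperf(logspace)        -- reports log size and percent used per database
     go

     backup log My_OLTP_DB
        to disk = 'H:\MSSQL\BACKUP\My_OLTP_DB_log.trn'
     go

     -- one-time shrink to a 'reasonable' size (200 Mb here), not a recurring job
     use My_OLTP_DB
     go
     dbcc shrinkfile (My_OLTP_DB_log, 200)
     go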

    Now let's address database growth. This will most probably have an impact on the size and frequency of your transaction log maintenance. For our example we will say that over the next 12 months the database data portion will grow by 500 Mb. Now we have to go back to the question we answered about what type of application your database supports (a resize sketch follows the list below).

    • OLTP - no change - 100-200 Mb - online transactions are usually short and small. A larger database just means more data is stored, not more data manipulated per transaction. However, a large increase in your user community can tip this.
    • batch - maybe an increase to 400-500 Mb - batch processes will load and manipulate more data, so your logs will be larger.
    • DSS - maybe an increase, maybe not - 300-500? Mb - Decision Support Systems usually encompass larger and larger amounts of information to generate reports. The results will grow, but not nearly as much as the underlying data that supports them.
    • DW - 1000 Mb - probably no growth - the ETL processes involved in DWs generally have well-defined feeds of consistent size, so unless your ETL processing changes you should be set.
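
    When monitoring shows the log really does need to be bigger, resize it deliberately in a quiet window rather than letting autogrowth pause your users mid-day. A sketch for the batch case above - My_Batch_DB, its log name and the sizes are only placeholders for illustration:

     -- raise the cap first, then grow the log file in one planned step
     alter database My_Batch_DB
        modify file (name = My_Batch_DB_log, maxsize = 501Mb)
     go
     alter database My_Batch_DB
        modify file (name = My_Batch_DB_log, size = 400Mb)
     go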

    Another reminder, these are starting points. Only monitoring the behavior of your database, its transaction log and the application gives you the 'true' figures that you will need to size things reasonably.  

    Now let's put together a database creation script for just one database. Let's use the OLTP one as an example:

     use master
     go

     -- data file: 1000 Mb initial, capped at 2001 Mb (initial size + projected growth + headroom)
     -- log file: 100 Mb initial per the OLTP guideline above, capped at 201 Mb
     create database My_OLTP_DB on (
      name=My_OLTP_DB,
      filename='G:\MSSQL\DATA\My_OLTP_DB.mdf',
      size=1000Mb,
      maxsize=2001Mb,
      filegrowth=500Mb
     )
     log on (
      name='My_OLTP_DB_log',
      filename='H:\MSSQL\DATA\My_OLTP_DB_log.ldf',
      size=100Mb,
      maxsize=201Mb,
      filegrowth=50Mb
     )
     go

     alter database My_OLTP_DB set recovery full
     go

     exec My_OLTP_DB..sp_changedbowner 'sa'
     go

     checkpoint
     go

    One final point. You will notice that I exceeded the 1000 Mb + 500 Mb for the database data portion by another 500 Mb. You never want to run a database at full capacity (would you run your PC with the C: drive that way?); there would be no room to perform preventative maintenance like index rebuilds and table alterations should they be needed. Likewise, the transaction log has been padded by an appropriate percentage as well. Many people believe in allowing 'unlimited' growth - but are your disks unlimited? If you know your application, its behavior and your users' usage of the application, you know how your database will behave (grow). I do not believe in 'unlimited' growth, and neither should you, now that you are armed with the knowledge you need.
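
    A quick way to keep an eye on that headroom once the database is in use - a sketch run in the context of the example database:

     use My_OLTP_DB
     go
     exec sp_spaceused             -- database_size vs. unallocated space = your remaining headroom
     go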

    Regards,
    Rudy Komacsar
    Senior Database Administrator

    "Ave Caesar! - Morituri te salutamus."
