Transaction log file issue

  • Dear All,

    There have been issues in the past where the transaction log file grew so large that it filled its drive. I would like to know the answers to the following, please:

    1. To resolve the space issue, is the correct approach to first take a backup of the transaction log and then shrink the transaction log file?

    2. What would be the recommended autogrowth size, for example, if I have a DB which is 1060 GB?

    3. At the moment, the transaction log backup is taken every hour, but I'm not sure if it should be taken more frequently?

    4. How often should the update stats job run, please?

    Thank you in advance!

  • 1. Or, if you have space on another drive, add a second log file. While this won't free up space on the full drive, it would get you back up and running quickly (see the sketch after this list). Otherwise, yes, back up the log, then shrink the file.

    2. No way to really answer, as it depends on your environment. Ideally, you want to set the log to be as big as it needs to be, and leave it.

    3. Depends on your SLAs and recovery time objectives. More frequent *might* help with the log growing to fill the drive if what's causing it to fill is a lot of short transactions in that time span. If it's one big transaction that runs for a long time, then more frequent won't help.

    4. Again, it depends on your environment and how much "churn" there is in your indexes/tables. In my environment, once a week is reasonable (not a lot of churn, and I could probably even go to bi-weekly). I've seen posts from others who have to run it daily and would do so more often if they could...
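    For point 1, a minimal sketch of adding a second log file on another drive (YourDB, the file name, and the path are placeholders; size it for your environment):

    -- Add a second log file on a drive that still has space, so the database
    -- can resume logging while you sort out the full drive
    ALTER DATABASE YourDB
    ADD LOG FILE (NAME = N'YourDB_log2',
                  FILENAME = N'E:\SQLLogs\YourDB_log2.ldf',
                  SIZE = 8GB, FILEGROWTH = 1GB);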

  • tt-615680 (1/3/2014)


    Dear All,

    There have been issues in the past where the transaction log file grew so large that it filled its drive. I would like to know the answers to the following, please:

    1. To resolve the space issue, is the correct approach to first take a backup of the transaction log and then shrink the transaction log file?

    Yes, it would be, but do not shrink the log to a size smaller than it needs to be to support normal operations. Note that if there is an open transaction, the backup will not free any space. Find out what caused the growth in the first place; running DBCC OPENTRAN against the database before you back up the log may help here.
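    A rough sketch of that sequence (YourDB, the logical log file name, and the backup path are placeholders to adapt):

    -- Check for an open transaction that would keep the log pinned
    DBCC OPENTRAN ('YourDB');

    -- Back up the log so the inactive portion can be truncated and reused
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log.trn';

    -- Shrink the physical file, but no smaller than normal operations need
    -- (target size is in MB; 10240 MB = 10GB is only an example)
    USE YourDB;
    DBCC SHRINKFILE (N'YourDB_log', 10240);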

    2. What would be the recommended autogrowth size, for example, if I have a DB which is 1060 GB?

    For a database that size, definitely set the growth in MB, not as a percentage. Are you expecting continued growth? Try to anticipate it and grow the file manually to accommodate future growth. Allow the service account SQL Server runs under the 'Perform volume maintenance tasks' right via secpol.msc (this enables instant file initialization, which speeds up data file growth; log files are always zero-initialized). Go for, say, 10GB as a guess.
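    For example, a fixed 10GB increment could be set along these lines (YourDB and YourDB_log are placeholder names):

    -- Replace percentage-based growth with a fixed 10GB increment on the log
    ALTER DATABASE YourDB
    MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 10GB);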

    3. At the moment, the transaction log backup is taken every hour, but I'm not sure if it should be taken more frequently?

    If you are suffering log bloat, then very possibly. Try to find the cause of the bloat, though. With a DB that size, it could be the reindexing job; it's a frequent culprit.

    4. How often should the update stats job run, please?


    Er... it depends on how much the tables are updated as a percentage of their total row count. If you see queries losing performance quickly and they refer to particular tables, updating those tables' stats frequently often helps. Tables with heavy singleton inserts on an ever-increasing key often need stats updates daily or more often, because the important, frequently accessed data is at the end of the table, but auto-update is never triggered to include the new values.

    Updating at least weekly would be the rule of thumb (remember that ALTER INDEX REBUILD updates stats, but ALTER INDEX REORGANIZE does not).
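    As an illustration only, a weekly stats job could be as simple as this (dbo.YourHotTable is a hypothetical table name):

    -- Update statistics database-wide where rows have been modified
    EXEC sp_updatestats;

    -- Or target a heavily updated table with a full scan
    UPDATE STATISTICS dbo.YourHotTable WITH FULLSCAN;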

    You will need to get to understand the activity on your database. Two scripts to help:

    -- When were Statistics last updated on all indexes?
    -- Helps discover possible problems with out-of-date statistics
    -- Also gives you an idea which indexes are most active
    SELECT o.name, i.name AS [Index Name],
           STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date],
           s.auto_created, s.no_recompute, s.user_created, st.row_count
    FROM sys.objects AS o WITH (NOLOCK)
    INNER JOIN sys.indexes AS i WITH (NOLOCK)
            ON o.[object_id] = i.[object_id]
    INNER JOIN sys.stats AS s WITH (NOLOCK)
            ON i.[object_id] = s.[object_id]
           AND i.index_id = s.stats_id
    INNER JOIN sys.dm_db_partition_stats AS st WITH (NOLOCK)
            ON o.[object_id] = st.[object_id]
           AND i.[index_id] = st.[index_id]
    WHERE o.[type] = 'U'
      AND STATS_DATE(i.[object_id], i.index_id) IS NOT NULL
    ORDER BY STATS_DATE(i.[object_id], i.index_id) ASC;

    -- Shows stats not updated since a configurable cut-off, along with how
    -- many modifications have hit the index since the last update date
    SELECT OBJECT_NAME(id) AS 'table', indid, name,
           STATS_DATE(id, indid) AS 'date',
           rowmodctr, rows AS 'rows in table'
    FROM sys.sysindexes
    WHERE STATS_DATE(id, indid) <= DATEADD(DAY, -8, GETDATE())
      AND rowmodctr > 0
      AND name NOT LIKE '_WA%'
      AND id IN (SELECT object_id FROM sys.tables)
    ORDER BY STATS_DATE(id, indid);


  • Oh, and on the growth increment for the log: first get the log set to the size it needs to be to accommodate all activity, so that hopefully it never actually needs to grow. Then, depending on that size, pick whatever seems sensible: a minimum of 64MB, then multiples of that up to 1GB, and above that multiples of 1GB up to a maximum of 8GB.

    What's your current log growth factor? If it is the default of 10%, you likely have way too many VLFs (virtual log files). Check this by running DBCC LOGINFO('dbname'); the number of rows returned is the number of VLFs. If this is more than a few hundred, you have too many, and it's worth trying to reduce the number. Do this by shrinking the log to the minimum you can, then growing it back to the size you want in one chunk, or in sensible chunks depending on the target size. Grow in chunks of no more than 8GB.
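    A sketch of that shrink-and-regrow sequence, assuming a database YourDB with log file YourDB_log and a 16GB target (names and sizes are placeholders):

    -- One row per VLF; more than a few hundred rows means too many
    DBCC LOGINFO ('YourDB');

    -- Shrink the log to the minimum you can (a log backup first may be needed)
    USE YourDB;
    DBCC SHRINKFILE (N'YourDB_log', 1);

    -- Regrow to the target in chunks of no more than 8GB
    ALTER DATABASE YourDB MODIFY FILE (NAME = N'YourDB_log', SIZE = 8GB);
    ALTER DATABASE YourDB MODIFY FILE (NAME = N'YourDB_log', SIZE = 16GB);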


  • If you feel like your log files are getting too large, you probably should take the backups more frequently. But that will mean you have to manage more backup files and the space they take up.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (1/3/2014)


    If you feel like your log files are getting too large, you probably should take the backups more frequently. But, that will mean you have to manage more backup files and the space that they take up.

    More files, but the same total space, surely?


  • Yeah, same space. At least it should be.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • If it's not, start worrying.

