Transaction log usage

  • Hi there, I am trying to determine the cause of high transaction log usage within a SQL 2008 R2 database. Does anyone know a good way of determining how much transaction log space is used by the execution of a stored procedure, or by a statement within a stored procedure? Is this possible via Profiler or via a DMV?

    SQL Server 2008R2 Ent
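
    For reference, one way to measure the log generated by a single call is to wrap it in an explicit transaction and read sys.dm_tran_database_transactions before committing. This is a minimal sketch; dbo.MyProc is a placeholder, and it assumes the procedure does not manage its own transactions.

        -- Measure log records and bytes generated by one procedure call.
        -- dbo.MyProc is a placeholder; try this on a test system first.
        BEGIN TRANSACTION;

        EXEC dbo.MyProc;

        SELECT dt.database_transaction_log_record_count AS log_records,
               dt.database_transaction_log_bytes_used   AS log_bytes_used
        FROM   sys.dm_tran_database_transactions AS dt
        JOIN   sys.dm_tran_session_transactions  AS st
               ON dt.transaction_id = st.transaction_id
        WHERE  st.session_id = @@SPID
          AND  dt.database_id = DB_ID();

        COMMIT TRANSACTION;   -- or ROLLBACK if this is purely a test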

  • Do you mean large in terms of the activity? Or is the log just large all the time? Do you have log backups running?

    The log records whatever activity occurs, so what ends up in it depends on the workload. There is a fair amount you can get back out of it with fn_dblog(). This post outlines how to pull information from the log: http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(1930)-TRUNCATE-TABLE-is-non-logged.aspx

    or this: http://www.sqlskills.com/blogs/paul/post/Search-Engine-QA-6-Using-fn_dblog-to-tell-if-a-transaction-is-contained-in-a-backup.aspx
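
    A minimal sketch of the kind of query you can run against fn_dblog(): the first totals log bytes per transaction in the current database (under simple recovery only records that have not yet been truncated are visible), and the second maps transaction IDs back to the name on the LOP_BEGIN_XACT record. fn_dblog() is undocumented, so treat this as a troubleshooting aid rather than something to automate.

        -- Total log volume per transaction currently visible in the log.
        SELECT [Transaction ID],
               COUNT(*)                 AS log_records,
               SUM([Log Record Length]) AS log_bytes
        FROM   fn_dblog(NULL, NULL)
        GROUP BY [Transaction ID]
        ORDER BY log_bytes DESC;

        -- Map the biggest transaction IDs back to a name and start time.
        SELECT [Transaction ID], [Transaction Name], [Begin Time]
        FROM   fn_dblog(NULL, NULL)
        WHERE  Operation = 'LOP_BEGIN_XACT';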

  • Hi Steve, thanks for the reply. The issue is that I have a database which is in simple recovery. The transaction log file is on a fast disk, separate from the data files. The log often grows to 8GB, which I feel is too large for the amount of DML activity in the database. I can see that log_reuse_wait_desc is usually NOTHING or occasionally ACTIVE_TRANSACTION. I am trying to determine which stored procs are causing the large growth, as they may be doing unnecessary updates.

    I did see that sys.dm_exec_procedure_stats has a total_logical_writes column, so that may point me in the right direction. I will also look at the fn_dblog() function.
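
    Something along these lines, as a rough sketch: the first query confirms the recovery model and log reuse wait, and the second ranks cached procedures by total_logical_writes. Keep in mind that logical writes count modified buffer pages, not log bytes, so this is a proxy for write activity rather than a direct log measurement.

        -- Recovery model and current log reuse wait for this database.
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM   sys.databases
        WHERE  database_id = DB_ID();

        -- Cached procedures in this database, ranked by logical writes.
        SELECT TOP (20)
               OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
               ps.execution_count,
               ps.total_logical_writes,
               ps.total_logical_writes / ps.execution_count AS avg_logical_writes,
               ps.last_execution_time
        FROM   sys.dm_exec_procedure_stats AS ps
        WHERE  ps.database_id = DB_ID()
        ORDER BY ps.total_logical_writes DESC;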

  • Also keep this in mind: if you run a large transaction, maybe a large update, you need log space for the update PLUS log space for the rollback. You don't want to be allocating space during the rollback, so it's allocated first.

    If you need 8GB, I would leave it there. The log is a fairly bulletproof mechanism. While you might be able to tune your code, I'm not sure I'd be overly concerned about an 8GB log.
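
    As a quick sanity check on the log itself (a simple sketch, nothing exotic): DBCC SQLPERF(LOGSPACE) shows size and percentage used for every log on the instance, and sys.database_files shows the physical size and growth settings for the current database.

        -- Log size and space used for every database on the instance.
        DBCC SQLPERF (LOGSPACE);

        -- Physical size (MB) and growth settings for this database's log file(s).
        SELECT name,
               size * 8 / 1024 AS size_mb,
               growth,
               is_percent_growth,
               max_size
        FROM   sys.database_files
        WHERE  type_desc = 'LOG';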

  • EDIT: Erasing what I just said because it's not related to the question. DOH.

    If your growth occurs at a certain point in the day, can't you use Profiler to identify the offending procs and then use the execution plan to get an idea of the number of rows you're dealing with?
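
    If Profiler isn't convenient, the default trace (assuming it is still enabled) already captures log auto-growth events, so a rough sketch like this shows when the growth happened and how big each grow was, which helps narrow down which jobs or procs to trace:

        -- Read log auto-growth events from the default trace (if enabled).
        DECLARE @path nvarchar(260);

        SELECT @path = path
        FROM   sys.traces
        WHERE  is_default = 1;

        SELECT t.StartTime,
               t.DatabaseName,
               t.FileName,
               t.IntegerData * 8 / 1024 AS growth_mb,   -- IntegerData = 8 KB pages grown
               t.Duration / 1000        AS duration_ms  -- Duration is in microseconds
        FROM   sys.fn_trace_gettable(@path, DEFAULT) AS t
        JOIN   sys.trace_events AS te
               ON t.EventClass = te.trace_event_id
        WHERE  te.name = N'Log File Auto Grow'
        ORDER BY t.StartTime DESC;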

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Steve Jones - SSC Editor (3/16/2011)


    Also keep this in mind: if you run a large transaction, maybe a large update, you need log space for the update PLUS log space for the rollback. You don't want to be allocating space during the rollback, so it's allocated first.

    It's allocated first because SQL absolutely must not run out of log space during a rollback. If it did, it would have to take the DB suspect.

    I second Steve on the log size. If the log is reaching 8GB, it needs to be that size, so don't go shrinking it (it will just regrow).

    Check your maintenance tasks (index rebuilds and reorganises), any data imports, etc.
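
    To see that rollback reservation in action, here is a minimal sketch you can run from a second session while a large update is open: the reserved column is the space set aside so the rollback can never run out of log.

        -- Log bytes written vs. reserved for rollback, per open transaction.
        SELECT st.session_id,
               DB_NAME(dt.database_id)                    AS database_name,
               dt.database_transaction_log_bytes_used     AS log_bytes_used,
               dt.database_transaction_log_bytes_reserved AS log_bytes_reserved
        FROM   sys.dm_tran_database_transactions AS dt
        JOIN   sys.dm_tran_session_transactions  AS st
               ON dt.transaction_id = st.transaction_id
        ORDER BY dt.database_transaction_log_bytes_reserved DESC;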

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
