Really large LDF file

  • When I ran

    SELECT
        db.[name] AS [Database Name],
        db.recovery_model_desc AS [Recovery Model],
        db.log_reuse_wait_desc AS [Log Reuse Wait Description],
        ls.cntr_value AS [Log Size (KB)],
        lu.cntr_value AS [Log Used (KB)],
        CAST(CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT)
             AS DECIMAL(18,2)) * 100 AS [Log Used %],
        db.[compatibility_level] AS [DB Compatibility Level],
        db.page_verify_option_desc AS [Page Verify Option]
    FROM master.sys.databases AS db
    INNER JOIN master.sys.dm_os_performance_counters AS lu
        ON db.name = lu.instance_name
    INNER JOIN master.sys.dm_os_performance_counters AS ls
        ON db.name = ls.instance_name
    WHERE db.state = 0
        AND lu.counter_name LIKE 'Log File(s) Used Size (KB)%'
        AND ls.counter_name LIKE 'Log File(s) Size (KB)%'
    ORDER BY ls.cntr_value DESC;

    the results were:

    Recovery Model: SIMPLE
    Log Reuse Wait Description: NOTHING
    Log Size (KB): 50195000
    Log Used (KB): 47202
    Log Used %: 0
    DB Compatibility Level: 100
    Page Verify Option: TORN_PAGE_DETECTION
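One note on that output: [Log Used %] reports 0 even though the log holds ~47MB, because the ratio is rounded to two decimal places before being multiplied by 100 (47202 / 50195000 ≈ 0.0009, which rounds to 0.00). A possible tweak is to multiply first, then round:

```sql
-- Multiply before rounding so small percentages (here about 0.09%) are not lost
CAST(CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) * 100
     AS DECIMAL(18,2)) AS [Log Used %]
```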

  • mbender (11/8/2011)


    When I ran [the query above], the results were:

    Recovery Model: SIMPLE
    Log Reuse Wait Description: NOTHING
    Log Size (KB): 50195000
    Log Used (KB): 47202
    Log Used %: 0
    DB Compatibility Level: 100
    Page Verify Option: TORN_PAGE_DETECTION

    Thanks for sharing. Shrinking the LDF file should not be a problem.
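For reference, a minimal sketch of the shrink itself. The database and logical file names here (MyDB, MyDB_log) are placeholders; check sys.database_files for the real logical name first:

```sql
USE MyDB;
GO
-- Find the logical name and current size of the log file
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Shrink the log file to a 1 GB target (adjust to your expected working size)
DBCC SHRINKFILE (MyDB_log, 1024);
```

With the recovery model at SIMPLE and log_reuse_wait_desc at NOTHING, as shown above, the shrink should reclaim the space immediately.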

  • To reinforce NJ-DBA...

    There is some reason the file grew to 50GB. Usually we say, "leave the file as big as it grows; if it got that big, it will probably get that big again." If we keep growing and shrinking, all we are doing is creating fragmentation on the disk.

    This is definitely something to think about. It's better to have both your data and log files sized to allow for substantial growth over time rather than having the server constantly grow the file. Not only does autogrowth take a fair amount of server resources, but if you're shrinking the files here and there, you'll see performance degradation over time as the disk(s) become fragmented at the OS level.

    ______________________________________________________________________________Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • mbender (11/8/2011)


    Page Verify Option:TORN_PAGE_DETECTION

    Though you do want to fix that at some point.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
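Switching page verification to CHECKSUM (the recommended setting on SQL Server 2005 and later) is a metadata-only change; existing pages only pick up checksums as they are next written. A sketch, assuming the database is named MyDB:

```sql
-- Enable checksum page verification; takes effect for pages as they are rewritten
ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;
```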
  • MyDoggieJessie (11/8/2011)


    To reinforce NJ-DBA...

    There is some reason the file grew to 50GB. Usually we say, "leave the file as big as it grows; if it got that big, it will probably get that big again." If we keep growing and shrinking, all we are doing is creating fragmentation on the disk.

    This is definitely something to think about. It's better to have both your data and log files sized to allow for substantial growth over time rather than having the server constantly grow the file. Not only does autogrowth take a fair amount of server resources, but if you're shrinking the files here and there, you'll see performance degradation over time as the disk(s) become fragmented at the OS level.

    The OP already identified, several posts back, that the cause of the growth was a runaway query, which has been repaired.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Understood Jeff, but if there will be queries like that running again (safe to say it will happen again), then wouldn't it be considered good practice to have the log file sized appropriately?

    ______________________________________________________________________________Never argue with an idiot; they'll drag you down to their level and beat you with experience
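If the log does need to stay large, pre-sizing it in one deliberate operation (rather than relying on repeated autogrowth) also keeps the virtual-log-file count manageable. A sketch, again assuming placeholder names MyDB and MyDB_log:

```sql
-- Pre-size the log and set a fixed-size growth increment
-- (percentage growth on a large log produces very large grow events)
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 10GB, FILEGROWTH = 1GB);
```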

  • MyDoggieJessie (11/8/2011)


    Understood Jeff, but if there will be queries like that running again (safe to say it will happen again), then wouldn't it be considered good practice to have the log file sized appropriately?

    That's their call to make. If they're certain it's fixed, then act as if it is.

    If it happens again then maybe a larger size is required... on top of fixing the deployment and QA processes.

  • I did the shrink and everything looks good. It was a runaway query, and I'm hoping I learned my lesson. I appreciate all your help. Thank you.

  • mbender (11/9/2011)


    I did the shrink and everything looks good. It was a runaway query, and I'm hoping I learned my lesson. I appreciate all your help. Thank you.

    Yeah, that one is called "don't dev in PROD." 😀

  • MyDoggieJessie (11/8/2011)


    Understood Jeff, but if there will be queries like that running again (safe to say it will happen again), then wouldn't it be considered good practice to have the log file sized appropriately?

    Sorry... didn't mean to sound short on that one. As you said, if they have queries that regularly use that much LDF, then I agree. The key (for me, anyway) was that the OP identified it as a "runaway" query and implied that it was a one-off that shouldn't occur again.

    Of course, if you or I had a proc that regularly consumed 50GB of LDF, we'd be looking very closely at what might be wrong with that bad boy unless it was actually loading 50GB or so of data.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
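To keep an eye on log consumption going forward, a quick built-in check covers every database at once:

```sql
-- Reports log size (MB) and log space used (%) for every database on the instance
DBCC SQLPERF (LOGSPACE);
```

Running this periodically makes it easy to spot a log that is filling faster than expected, before it autogrows to 50GB again.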

