Log File Issues

  • Hi

    I know all DB environments are different to some extent, but can anyone tell me whether it is normal in a production environment for the log files to grow larger than the DB files?

    To give you a bit of background, the application hooking into these DBs is primarily read intensive, not write intensive. The databases are set to the FULL recovery model, with log shipping set up to run every half hour.

    An example of one DB: the .mdf file is 11.3 GB with only 16% free (looking at utilization in the shrink file option), and the log file is 15 GB with 99% free.

    We keep shrinking the files down as and when needed, but we have over 1,000 DBs to manage. Previously we had enough disk space for this not to be an issue, but now it's getting to be a problem.

  • Yes, it can happen. As your DB is set to the FULL recovery model, everything is logged in the LDF file.
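
    If you want to confirm which recovery model each database is actually using, a quick check (assuming SQL Server 2005 or later, where sys.databases exposes this column) is:

        -- Lists every database and its recovery model
        SELECT name, recovery_model_desc
        FROM sys.databases
        ORDER BY name;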

  • Yeah, but with logs being shipped every half hour, shouldn't that clear out the log file? Surely it shouldn't need to grow that big.

    I know it gets to silly levels when we're not running the job that creates the transaction log backups, but when that happens the log file shows that it's actually using all the space allocated to it, and I have to truncate the logs to fix it.

    Our log files have silly amounts of space allocated, but most of the time they are only using 1% of it.

  • Observe the log file utilization for a few days, then shrink the log file once to that level, leaving some free space in it. It seems there are processes that insert bulk amounts of data, or some huge insert/update/delete queries that run. You can work out when such processes run from the size of each log backup.
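
    For example, you can sample log utilization with the standard DBCC command and record the results over a few days:

        -- Reports log file size and percent of the log in use for every database
        DBCC SQLPERF(LOGSPACE);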



    Pradeep Singh

  • Reindex jobs can produce a lot of transaction log activity, especially if a database is dominated by one large table. Check log usage during your reindex window.

    ---------------------------------------------------------------------

  • Also, you should not be shrinking the log regularly. If the logs need that much space, then they need it. Log shipping results in regular log backups, which allow space in the log files to be reused, but it's possible that you need more space to handle things like reindexes.

    That does sound odd, however, for primarily read-intensive databases. Either you have log shipping issues (make sure the jobs are running), or something is doing a lot of activity that writes to the log.
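
    One quick check (assuming SQL Server 2005 or later; 'YourDatabase' is a placeholder) is to ask the engine why it cannot reuse log space:

        -- LOG_BACKUP means the log is waiting on a log backup,
        -- which would point at a problem with the log shipping backup job
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = N'YourDatabase';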

  • george sibbald (7/6/2009)


    Reindex jobs can produce a lot of transaction log activity, especially if a database is dominated by one large table. Check log usage during your reindex window.

    It does appear to be the reindex job that gobbles up all the space. During the day, even when the files have been shrunk to a reasonable size, the free space doesn't change; it's when the reindexing job kicks in at 2 in the morning that it kills it.

    Is there any way to resolve this? Or is it just a case of putting it on a bigger server? That would mean reshuffling a lot of databases to try and find somewhere it would fit. I did wonder if perhaps the indexes themselves could be the problem, but I don't really know enough about them to make a guess.

  • Another point that may or may not be useful: there are a couple of databases that have a 5 GB log file; when you use DBCC SQLPERF(LOGSPACE) it says the database is using 1% of that, but when you try to shrink it, the file size doesn't change.

    I used DBCC OPENTRAN to check that there was nothing in there causing it, but it says there are no open transactions, so why won't it let me shrink the file?

    Sorry if these are all stupid questions; I'm basically a systems admin, but our DBA left, so I'm now having to try and sort this stuff out. :unsure:

  • lindsey.keen (7/7/2009)


    george sibbald (7/6/2009)


    Reindex jobs can produce a lot of transaction log activity, especially if a database is dominated by one large table. Check log usage during your reindex window.

    It does appear to be the reindex job that gobbles up all the space. During the day, even when the files have been shrunk to a reasonable size, the free space doesn't change; it's when the reindexing job kicks in at 2 in the morning that it kills it.

    Is there any way to resolve this?

    A number of things. Check out BOL for sys.dm_db_index_physical_stats; there is a script in there that only defragments the indexes that require it, which should reduce the actual amount of work done.
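
    Something along these lines is a simplified sketch of that approach (the 10%/30% thresholds are the usual rules of thumb, not fixed values, and you would want to PRINT the commands to review them before letting it run):

        -- Build and run ALTER INDEX commands only for fragmented indexes
        -- in the current database (SQL Server 2005 or later).
        DECLARE @sql nvarchar(max);

        DECLARE frag_cur CURSOR FOR
            SELECT CASE WHEN ips.avg_fragmentation_in_percent > 30
                        THEN 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
                             + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ' REBUILD;'
                        ELSE 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
                             + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ' REORGANIZE;'
                   END
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
            JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
            JOIN sys.objects AS o ON o.object_id = ips.object_id
            JOIN sys.schemas AS s ON s.schema_id = o.schema_id
            WHERE ips.avg_fragmentation_in_percent > 10   -- skip indexes that don't need work
              AND ips.index_id > 0;                       -- ignore heaps

        OPEN frag_cur;
        FETCH NEXT FROM frag_cur INTO @sql;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC (@sql);   -- or PRINT @sql to review first
            FETCH NEXT FROM frag_cur INTO @sql;
        END
        CLOSE frag_cur;
        DEALLOCATE frag_cur;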

    Also, you can try all or some of these:

    break up the reindexing across different days

    put a transaction log backup between reindexes at various points (see the sketch after this list)

    set up a SQL Agent performance alert to back the log up if it hits a certain percentage full

    set the recovery model to BULK_LOGGED for the duration of the reindex and reset it to FULL afterwards
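
    For the mid-reindex log backup, one option that keeps the log shipping chain intact is to start the existing log shipping backup job from the maintenance script rather than taking an ad hoc backup (an ad hoc log backup becomes part of the restore chain and would also have to be applied to the secondary). The job name here is a placeholder for whatever your backup job is called:

        -- Kick off the existing log shipping backup job between reindexing steps
        EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_YourDatabase';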

    ---------------------------------------------------------------------

  • lindsey.keen (7/7/2009)


    Another point that may or may not be useful: there are a couple of databases that have a 5 GB log file; when you use DBCC SQLPERF(LOGSPACE) it says the database is using 1% of that, but when you try to shrink it, the file size doesn't change.

    I used DBCC OPENTRAN to check that there was nothing in there causing it, but it says there are no open transactions, so why won't it let me shrink the file?

    Sorry if these are all stupid questions; I'm basically a systems admin, but our DBA left, so I'm now having to try and sort this stuff out. :unsure:

    Firstly, try not to shrink it if you can avoid doing so.

    Are you using shrink database or shrink file? DBCC SHRINKDATABASE cannot shrink a file below its original create size; DBCC SHRINKFILE can. Run DBCC LOGINFO('dbname'); that will tell you where the used space is. Any row with a value of 2 in the status column is a virtual log file that is still active, and you will not be able to shrink past that point. Back up the log and rerun the shrink; otherwise you have to wait for the active virtual log to cycle round to the start of the file.
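
    The sequence looks something like this ('YourDatabase', the logical log file name, and the backup path are all placeholders; on a log shipped database, bear in mind this extra log backup becomes part of the restore chain):

        USE YourDatabase;
        DBCC LOGINFO;   -- rows with status = 2 are virtual log files still in use
        BACKUP LOG YourDatabase TO DISK = N'X:\LogBackups\YourDatabase.trn';
        DBCC SHRINKFILE (YourDatabase_log, 1024);   -- target size in MB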

    ---------------------------------------------------------------------

  • And stop truncating your logs manually.

  • Switch your recovery model to BULK_LOGGED before the reindex work and then back to FULL afterwards. Then see what the maximum log usage is over a couple of days and resize the log accordingly.

    As others have stated, it's hurting performance when you keep shrinking the logs and then letting them auto-grow.
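
    A minimal sketch of that switch ('YourDatabase' is a placeholder; note that you cannot restore to a point in time within a log backup that contains bulk-logged operations):

        -- Before the nightly reindex job: index rebuilds become minimally logged
        ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;

        -- ... reindex work runs here ...

        -- After the reindex job
        ALTER DATABASE YourDatabase SET RECOVERY FULL;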

  • Thanks for the suggestions; I'll have a look at trying some of them. We're currently reindexing when the average fragmentation is above 10%. Do you think it would cause much of a performance hit if we used a slightly higher percentage, or is 10% the accepted standard?

  • The general rule of thumb is:

    5%-30%: reorganize

    over 30%: rebuild

    EDIT - The % denotes index fragmentation.
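
    To see where your indexes sit against those thresholds (assuming SQL Server 2005 or later), something like:

        -- Fragmentation per index in the current database, worst first
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.index_id > 0   -- ignore heaps
        ORDER BY ips.avg_fragmentation_in_percent DESC;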



    Pradeep Singh

  • ps (7/7/2009)


    The general rule of thumb is:

    5%-30%: reorganize

    over 30%: rebuild

    Thanks for that, ps.

    I know manually truncating the logs is hurting performance, and shrinking isn't a good idea, but when I come in at 7am and people can't trade because I have no log space, I have to fix the problem fast. I'm only doing this temporarily whilst I figure out the best way to resolve the issue in the long term.

    Animal magic - you mentioned setting the DBs to BULK_LOGGED for the reindexing job; won't this mess up the log shipping, though? Or if I delay the log shipping job so that it restarts after the reindex job finishes, would that work? I thought you always needed to take a full backup after changing back to FULL, and our backups are taken before the reindex job runs.
