Which command to reduce the log file?

  • Hello Everybody,

    I have a question concerning the log file:
    I have to reduce the log file because it keeps growing. I'm thinking this command could be OK:
    ---------------

    Use master

    Go

    Alter database toto set single_user with rollback immediate 

    Go

    Use toto

    Go

    dbcc shrinkfile('toto_log',1024)

    Go

    Use master

    Go

    Alter database toto set multi_user with rollback immediate

    Go
    --------------

    but the software vendor replied that it's better to use:

    --------------

    --Reduce Log-------------------------------------------------------------------

    BACKUP LOG toto TO DISK='nul:'

    DBCC SHRINKFILE('toto_log', 0, TRUNCATEONLY)

    Go

    --Initial parameter for Log Size (1Gb)--------------------------

    ALTER DATABASE toto MODIFY FILE (NAME = toto_Log, SIZE = 1024MB)

    Go

    -----------------
    And I see on another website that it's better to use:
    https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql
    ---------------
    USE toto;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE toto
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 1 Gb.
    DBCC SHRINKFILE (toto_Log, 1024);
    GO
    -- Reset the database recovery model.
    ALTER DATABASE toto
    SET RECOVERY FULL;
    GO

    What do you think about this? Which is the best method?
    PS : Microsoft SQL Server 2008 R2 (SP2) - 10.50.4042.0 (X64) Standard Edition (64-bit)

    Thank you very much
    Anthony

  • Anthony

    Don't do either.  Not yet, at least.  If you're worried about the growth of your log file, make sure you are making frequent log backups (unless you're in Simple recovery mode).  Then, if the log file is still growing, find out why, and fix it.  If you shrink the log now but don't do anything about the reasons for the growth, it'll grow back, wasting resources and causing physical fragmentation on your disk.
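    To illustrate (a sketch only; the database name toto is from the thread, but the backup path and file name are assumptions to adjust for your environment), a frequent log backup to a real file looks like this:

```sql
-- Back up the transaction log to a real file (never to NUL:),
-- so the log space can be reused and the backup chain stays intact.
-- In practice, use a unique file name per backup (e.g. with a timestamp).
BACKUP LOG toto
TO DISK = N'D:\Backups\toto_log_20170522.trn'
WITH CHECKSUM;
GO
```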

    John

  • The log is growing because it needs to; if you reduce it, it will only grow again. The only time I would shrink the log file is if there has been a one-off large update/insert/delete; otherwise, leave the log alone and add more capacity to the disk the log is on.

    Thanks

  • Just to pile on, in general, you should figure out why the log is growing before doing anything else. Also, the mere fact that the log is growing is not necessarily a problem.

    Before I forget, I'd highly recommend giving the book at https://www.simple-talk.com/books/sql-books/sql-server-transaction-log-management-by-tony-davis-and-gail-shaw/ a read. That will give you a solid basis for understanding these sorts of situations.

    Back to the actual issue.

    If the growth is just from normal activity, then you may just need to size your files and disks appropriately for that activity. There's no benefit to generating all the IO to shrink the log file just to have it grow out again (more IO, transaction delays) for that activity again.

    If the database is using the full recovery model, depending on the activity pattern you might be able to mitigate the growth with more frequent log backups (if growth is primarily from large, single transactions, then this won't help so much).

    If the activity is unusual, then you'll want to address that.

    As to those specific methods of shrinking the log file, they each have some pretty undesirable side effects.

    Switching the DB to single user might help by stopping current activity on the log, but you're basically killing all existing connections (how serious that is depends on business needs around that instance).

    It would be better to size the log appropriately and/or change log backup schedule appropriately (if normal activity is the cause), or directly address the unusual behavior causing growth (if it's an atypical workload).

    The other two will affect your ability to do point-in-time restores for databases using the full recovery model.

    Option 2 (backup to NUL) is the more insidious one, because it'll then let you continue to take log backups that you won't be able to use (because some of the log was thrown away to NUL).

    Option 3 will also throw away a portion of your log, but it will at least not let you take a log backup until you've taken a new full/diff, which you could then use in subsequent restores.

    In short, you need to figure out the following:

    1) Is the growth actually a problem? It might not be. If it is, you need to figure out what's causing it.

    2) If the growth is a problem and is caused by your normal workload, then you might just need to size the log file and drive appropriately, or you might be able to mitigate the growth with more frequent log backups, depending on exactly what is driving the growth.

    3) If the growth is a problem and is caused by an atypical workload, identify the problematic activity/queries and address those directly. This could be a long-running transaction, issues with mirroring/replication/availability group partners, backup failures, etc. Finally, once that issue is addressed, if the log file is much larger than it needs to be and that is causing drive space issues, you can shrink the log to an appropriate size. Once the offending behavior is addressed, you should be able to do this without breaking the backup chain or setting the DB to single-user.
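    As a starting point for step 1 (a sketch, with the database name from the thread assumed), you can ask SQL Server what is preventing log reuse:

```sql
-- log_reuse_wait_desc names the current blocker, e.g. LOG_BACKUP,
-- ACTIVE_TRANSACTION, REPLICATION, DATABASE_MIRRORING, or NOTHING.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'toto';

-- If it reports ACTIVE_TRANSACTION, this shows the oldest open transaction:
DBCC OPENTRAN ('toto');
```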

    Cheers!

  • Jacob Wilkins - Monday, May 22, 2017 8:16 AM


    Thank you very much Jacob for your reply.
    If I understand correctly:
    Option 1: point-in-time recovery stays possible (transactions are kept), but users can't access the database while I do the operation (single user).
    Option 2: affects point-in-time recovery, because the log backups can't be used if needed (log file = 0 KB), but users keep access to the database.
    Option 3: point-in-time recovery is not possible unless I take a full backup after the operation, but users keep access to the database.

    I should point out that transactions use only 0.5% of the log file. That's why I want to reduce it.
    The database is in full recovery and log backups are taken every hour.

    So option 1 is good if I can stop production. What is the best option if I can't stop production?

    Thank you very much
    Anthony

  • Didn't we suggest on another forum that you increase the frequency of the log backups? Did you do that?

    ------------------------------------------------------------
    Standing in the gap between Consultant and Contractor
    Kevin3NF
    DallasDBAs.com/Blog
    Why is my SQL Log File HUGE?!?!
    The future of the DBA role...
    SQL Security Model in Plain English

  • Kevin3NF - Monday, May 22, 2017 12:51 PM

    Didn't we suggest on another forum that you increase the frequency of the log backups? Did you do that?

    I have discussed this with the project manager, who told me that it's not necessary to increase the frequency of the log backups (they are already taken every hour).
    Anthony

  • contact 14920 - Monday, May 22, 2017 11:45 AM


    1. What's your total database size, especially the mdf?
    2. How critical is your database?

    Never stop production unless it's extremely important to do so.

    As the experts suggested, it's not good practice to shrink files, but again, you know your environment and your needs.

    If the mdf is not very large, what you can do is take multiple log backups, shrink the log file, and then take a full backup.

    Change your backup strategy to include a differential backup in between, increase the log backup frequency, and then delete the log backups taken before a differential, depending on your retention period.
    Again, if it's production, there's nothing wrong with frequent log backups to keep your log file in check; you can request more space on the backup drive. After all, it's production, right?
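    Sketched as T-SQL (the file name and path here are assumptions, not the poster's actual layout):

```sql
-- A differential backup between fulls keeps restore chains short
-- while more frequent log backups keep the log file in check.
BACKUP DATABASE toto
TO DISK = N'D:\Backups\toto_diff.bak'
WITH DIFFERENTIAL, CHECKSUM;
GO
```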

  • I still don't understand why you want to shrink the log file. From what I've read, the transaction uses 0.5% of the log, so where is the problem? Is this a one-off transaction, or does it run repeatedly? If it's a one-off, you are not going to free up much space, and if it runs repeatedly, the log will grow again, so just leave the log file alone.

    Apologies if I have misunderstood.

    Thanks

  • contact 14920 - Monday, May 22, 2017 11:27 PM

    Kevin3NF - Monday, May 22, 2017 12:51 PM

    Didn't we suggest on another forum that you increase the frequency of the log backups? Did you do that?

    I have discussed this with the project manager, who told me that it's not necessary to increase the frequency of the log backups (they are already taken every hour).
    Anthony

    Right.  You need to use your soft skills now and stand up to this person.  You're the DBA.  He has you on his project for your technical skills.  You get to make the final decision on issues like this; he doesn't.

    John

  • VastSQL - Tuesday, May 23, 2017 12:52 AM


    Hi,

    It's a small database:
    one *.mdf file: 8 GB
    one *.ldf file: 15 GB
    Critical: yes, because it's a health database, so stopping production is not recommended; you're right.
    So if I want to shrink the file, the best method is method 3?
    I understand that it's not good practice to shrink the log file, but 15 GB for a file that has only 0.5% used by transactions...
    Thank you
    Anthony

  • What do you get if you run this?

    SELECT MAX(backup_size)
    FROM msdb..backupset
    WHERE database_name = 'toto'
    AND backup_finish_date > CURRENT_TIMESTAMP - 14
    AND type = 'L'

    John

  • contact 14920 - Tuesday, May 23, 2017 1:49 AM


    If the log file is growing, then it needs that space. You can either take more frequent log backups or accept that the log needs to be larger; size it for the transactions and leave it alone.

    If the file is not growing, then it may be acceptable to shrink it. You'll first need to identify how large it needs to be (run DBCC SQLPERF(LOGSPACE) just before each log backup to get the maximum percentage of the file that the log actually uses).
    If you do decide to shrink it (AFTER such research, not as a knee-jerk reaction), you don't need simple recovery or single user mode. It's an online operation; just do it at a quiet time.
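    A minimal sketch of that sequence (the 4096 MB target is a placeholder, not a recommendation; size it from your own DBCC SQLPERF numbers):

```sql
-- 1) Research phase: run just before each log backup for a while,
--    and record "Log Space Used (%)" to find the real peak usage.
DBCC SQLPERF (LOGSPACE);

-- 2) Only after that research: shrink online, at a quiet time,
--    to a target based on the observed peak. No single-user mode,
--    no recovery model change needed.
USE toto;
GO
DBCC SHRINKFILE (toto_log, 4096);  -- target size in MB
GO
```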

    Did you get the book mentioned earlier in this thread?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • John Mitchell-245523 - Tuesday, May 23, 2017 1:59 AM

    What do you get if you run this?

    SELECT MAX(backup_size)
    FROM msdb..backupset
    WHERE database_name = 'toto'
    AND backup_finish_date > CURRENT_TIMESTAMP - 14
    AND type = 'L'

    John

    Result : 7345895424
