October 2, 2015 at 11:12 am
I've been looking for techniques to shrink a log file (a database was set to FULL for quite a while without log backups), and I want to get the log file back to a manageable size. It's currently 5 GB for a 200 MB database, and SSMS shows the log file as 99% unused.
I've seen a couple of examples where people suggest switching to the SIMPLE recovery model, shrinking the log, then switching back to FULL, taking a full backup, and making sure log backups run from then on. I tried shrinking the log on a test database set to FULL and it worked, so there's no technical barrier to doing it without the switch.
So why are some people suggesting the temporary switch to SIMPLE?
Thanks,
--=Chuck
October 2, 2015 at 11:50 am
Going to SIMPLE recovery model first can sometimes make it easier to shrink the transaction log as small as possible. Going to SIMPLE is never my first choice... I try log backups and shrinking first.
Have you implemented log backups? What are your recovery point and recovery time objectives for this database? It may be that a daily full backup without log backups is adequate, in which case it would be just fine to go to the SIMPLE recovery model.
As you alluded to, going to SIMPLE breaks the log backup chain. If you have started log backups, it is imperative that you take a full backup immediately after switching back to the full recovery model... and if you're not in the simple recovery model, it is imperative that you take regular transaction log backups. Not only for point-in-time recovery, but also to keep the transaction log from growing out of control like it did.
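If it helps, the sequence I'm describing looks roughly like this. This is just a sketch; MyDB, the logical file name MyDB_log, and the backup paths are placeholders you'd replace with your own:

```sql
-- Back up the log first so the active portion can move and space can be released
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';

-- Then shrink the log file (target size in MB)
USE MyDB;
DBCC SHRINKFILE (MyDB_log, 1024);

-- If you did drop to SIMPLE at some point, restart the backup chain immediately:
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak';
```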
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
October 2, 2015 at 11:56 am
Because the log is full, it won't shrink. If you don't need that log data, you can set recovery to SIMPLE so that a checkpoint truncates the log, which then allows it to be shrunk. Or you back up the log and then shrink. Of course, the log backup will be around 5GB (unless you have backup compression on), so you need the space to do the backup.
Shrink the log to the minimum so you minimise VLFs, and then immediately grow it to the size you think it needs to be.
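Something along these lines (MyDB and the logical log file name MyDB_log are placeholders, and the 4096MB target is just an example):

```sql
USE MyDB;
-- Shrink the log to the minimum to clear out the excess VLFs...
DBCC SHRINKFILE (MyDB_log, 1);

-- ...then immediately grow it back in one operation to the expected size
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 4096MB);
```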
---------------------------------------------------------------------
October 2, 2015 at 11:58 am
This particular database can endure no more than an hour's worth of data loss. The prior employee had hourly differentials running alongside a nightly full backup, with the database in the FULL recovery model but no log backups. I was going to shrink the log file, start hourly log backups, and then stop the hourly differentials.
--=Chuck
October 2, 2015 at 12:06 pm
chuck.forbes (10/2/2015)
This particular database can endure no more than an hour's worth of data loss. So the prior employee has hourly differentials running with a nightly backup, with the database running in FULL recovery mode, with no log backups. I was going to shrink the logfile, start hourly log backups, and then stop the hourly differentials.--=Chuck
Then in that case I would just start taking log backups; it may take more than one to move the active part of the log away from the end of the file. An oversized log in itself is not a problem unless you have a space shortage, but too many VLFs is, so base your shrink decision on that.
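In practice that loop looks something like this sketch (again, MyDB, MyDB_log, and the backup path are placeholders):

```sql
-- Take a log backup; the active VLF may still sit at the end of the file
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log_1.trn';

-- Check whether anything is still preventing log truncation
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';

-- Try the shrink; if it doesn't release space, back up the log again and retry
USE MyDB;
DBCC SHRINKFILE (MyDB_log, 1024);
```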
---------------------------------------------------------------------
October 2, 2015 at 12:07 pm
George,
You're right, the file isn't 99% unused, it's 99% used. I interpreted that backwards. I can see the need for the checkpoint now.
--=Chuck
October 2, 2015 at 5:20 pm
chuck.forbes (10/2/2015)
You're right, the file isn't 99% unused, it's 99% used. I interpreted that backwards. I can see the need for the checkpoint now.
Checkpoint won't do anything, you need to take a log backup before you can shrink the file.
Don't shrink it to zero either.
Oh, and
So why are some people suggesting the temporary switch to SIMPLE?
Usually because the people suggesting it don't really know what they're talking about.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 2, 2015 at 5:58 pm
GilaMonster (10/2/2015)
chuck.forbes (10/2/2015)
So why are some people suggesting the temporary switch to SIMPLE?
Usually because the people suggesting it don't really know what they're talking about.
Hahaha - so true!
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
October 5, 2015 at 9:45 am
Thanks for all of the assistance. I performed the maintenance over the weekend (switching some databases to SIMPLE and back to FULL where it made sense) and everything went great. I also appreciate the suggestion to shrink the transaction log to the smallest size possible and then immediately grow it to the expected size, to reduce the chance of fragmentation. I would not have thought of that, nor had I come across it in my research.
Just an FYI, having come from Oracle into SQL Server, I am really enjoying the tool and toolset.
--=Chuck
October 5, 2015 at 10:12 am
Just to clarify that a bit (I should have mentioned it earlier): if the required size is very large, go down to the minimum and then increase in 8000MB (not 8GB) chunks so the VLFs are not too large.
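For example, growing a placeholder log file MyDB_log to a 24000MB target would be done in steps rather than one jump:

```sql
-- Grow in 8000MB increments so each growth creates reasonably sized VLFs
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 24000MB);
```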
http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
---------------------------------------------------------------------
October 5, 2015 at 10:13 am
GilaMonster (10/2/2015)
chuck.forbes (10/2/2015)
You're right, the file isn't 99% unused, it's 99% used. I interpreted that backwards. I can see the need for the checkpoint now.
Checkpoint won't do anything, you need to take a log backup before you can shrink the file.
Don't shrink it to zero either.
Oh, and
So why are some people suggesting the temporary switch to SIMPLE?
Usually because the people suggesting it don't really know what they're talking about.
Just curious as to why you wouldn't shrink it to zero and regrow it properly.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 5, 2015 at 10:17 am
Jeff Moden (10/5/2015)
GilaMonster (10/2/2015)
chuck.forbes (10/2/2015)
You're right, the file isn't 99% unused, it's 99% used. I interpreted that backwards. I can see the need for the checkpoint now.
Checkpoint won't do anything, you need to take a log backup before you can shrink the file.
Don't shrink it to zero either.
Oh, and
So why are some people suggesting the temporary switch to SIMPLE?
Usually because the people suggesting it don't really know what they're talking about.
Just curious as to why you wouldn't shrink it to zero and regrow it properly.
see previous posts?
---------------------------------------------------------------------
October 5, 2015 at 3:18 pm
Jeff Moden (10/5/2015)
Just curious as to why you wouldn't shrink it to zero and regrow it properly.
Huh? I said 'don't shrink it to zero', as in don't DBCC SHRINKFILE (2,0), which is what many bad shrink scripts do (simple recovery, shrink to zero, full recovery, and then let the log grow slowly back to the size it needs to be)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability