June 24, 2012 at 4:13 am
Hi all,
During defragmentation of a DB's indexes last night, its log blew up from 2GB to 80GB in size, and although we have sufficient space, the DB is the primary for log shipping, and this problem has caused the secondary to drift out of sync by several hours (shipping should occur every 15 minutes).
All the while, the log still hasn't reduced in size.
Is there any way I can force the log to commit any uncommitted transactions as necessary and then flush out? I have done a full backup of the data file, to no effect. Would performing a tran log dump break the log shipping chain?
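For reference, the full backup was just a plain one, something like this (the database name and path are examples, not our real ones):

BACKUP DATABASE ArchiveDB
    TO DISK = N'G:\Backups\ArchiveDB_Full.bak'
    WITH INIT, STATS = 10;  -- overwrite the previous backup in the file and report progress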
Thanks,
Jake
Edit - the DB recovery model is 'Bulk-logged'.
June 24, 2012 at 5:38 am
The log will never reduce in size by itself.
Please read through this - Managing Transaction Logs
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 24, 2012 at 6:25 am
Morning Gail,
Great article, and I hit the log with DBCC SHRINKFILE, which did the trick (I had to start with a large target size and reduce it down).
Is there a script with a loop I can use after each defrag? I have another 3-4 DBs' worth of defragging to do. Actually, could I shrink the file DURING the defrag, as our LAN doesn't log ship such huge files well, as I found out last night?!
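For reference, the stepped shrink I ran was roughly this (the logical log file name and sizes are examples, not our exact values):

USE ArchiveDB;
-- Targets are in MB; starting with a large target and stepping down
-- worked where going straight to the final size did not
DBCC SHRINKFILE (ArchiveDB_log, 40000);
DBCC SHRINKFILE (ArchiveDB_log, 10000);
DBCC SHRINKFILE (ArchiveDB_log, 2000);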
Cheers
June 24, 2012 at 7:06 am
Don't shrink the log regularly...
In bulk-logged recovery you're better off rebuilding the indexes than reorging, as rebuild is a minimally logged operation. Otherwise increase the frequency of your log backups. The 'getting out of sync' was more likely caused by the large amount of logged operations (the rebuild) than the actual size of the log file.
In general you should leave the log at the size it needs to be for regular operations (and that includes rebuild/reorg). Check Kimberly Tripp's article on transaction log throughput to ensure it's not fragmented.
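As a rough sketch of both options (the object, database, and path names below are placeholders):

-- Option 1: rebuild rather than reorganise - minimally logged under bulk-logged recovery
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;

-- Option 2: keep the reorg, but take log backups more frequently while it runs
-- (with log shipping, do this by increasing the frequency of the log shipping
-- backup job itself - an ad hoc log backup like this would break the chain)
BACKUP LOG YourDatabase TO DISK = N'H:\LogBackups\YourDatabase_log.trn';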
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 24, 2012 at 8:08 am
GilaMonster (6/24/2012)
Don't shrink the log regularly... In bulk-logged recovery you're better off rebuilding the indexes than reorging, as rebuild is a minimally logged operation. Otherwise increase the frequency of your log backups. The 'getting out of sync' was more likely caused by the large amount of logged operations (the rebuild) than the actual size of the log file.
In general you should leave the log at the size it needs to be for regular operations (and that includes rebuild/reorg). Check Kimberly Tripp's article on transaction log throughput to ensure it's not fragmented.
I'll take a look.
Log file shrinking won't be a regular occurrence, as we'll only need to do it after defragmentation (and again, that will only be done when SHOWCONTIG shows heavy fragmentation, possibly 4x annually at most). My DBs are actually archives, and thus neither particularly heavily nor frequently transacted. However, I do want to head off future performance hassles by doing maintenance when things are quiet.
But again, what I would like is a script I can run next weekend, integrated into the defrag script, so that once the defrag is complete, a DBCC SHRINKFILE statement is automatically begun if the log has grown, looping and cycling down the target size automatically, with the minimum possible intervention by myself on a Saturday night.
🙂
June 24, 2012 at 8:38 am
Jake Shelton (6/24/2012)
But again, what I would like is a script I can run next weekend, integrated into the defrag script, so that once the defrag is complete, a DBCC SHRINKFILE statement is automatically begun if the log has grown, looping and cycling down the target size automatically, with the minimum possible intervention by myself on a Saturday night.
Do not regularly shrink your log (and what you're describing is regular).
In bulk-logged recovery you can either rebuild instead of reorg, as rebuild is a minimally logged operation, or you can increase the frequency of log backups during the reorg.
The size of the log file is not going to affect log shipping; only the size of the log backups does that (and the fragmentation of your log file - see Kimberly Tripp's blog post on transaction log throughput).
Also consider not rebuilding or reorganising everything, just the tables with fragmentation above the recommended level.
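Something like this to find what actually needs attention (the 30% threshold and the 1000-page floor are common rules of thumb, not hard limits):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000;  -- small indexes gain little from a rebuild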
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 24, 2012 at 9:26 am
GilaMonster (6/24/2012)
Jake Shelton (6/24/2012)
But again, what I would like is a script I can run next weekend, integrated into the defrag script, so that once the defrag is complete, a DBCC SHRINKFILE statement is automatically begun if the log has grown, looping and cycling down the target size automatically, with the minimum possible intervention by myself on a Saturday night.
Do not regularly shrink your log (and what you're describing is regular).
In bulk-logged recovery you can either rebuild instead of reorg, as rebuild is a minimally logged operation, or you can increase the frequency of log backups during the reorg.
The size of the log file is not going to affect log shipping; only the size of the log backups does that (and the fragmentation of your log file - see Kimberly Tripp's blog post on transaction log throughput).
Also consider not rebuilding or reorganising everything, just the tables with fragmentation above the recommended level.
4x a year may be (too) frequent, which is what I suspect you mean, but it's not "regular" unless the intervals remain constant after several repetitions. However, that's as far as I'll go with semantics on such a nice afternoon 🙂
I will look into rebuilds and reorgs, but I fear the former is an offline-only operation, which I would be hard-pressed to get authorisation for. I used the 'E' example in BOL's DBCC SHOWCONTIG article to hit everything that was more than 30% fragmented. That's what I want to do on the other 3-4 DBs, but that'll wait until next weekend.
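Though I've since read that on Enterprise edition a rebuild can be run online, along these lines (the table name is a placeholder, and I'd still need to confirm our edition and that the tables qualify):

ALTER INDEX ALL ON dbo.SomeArchiveTable
    REBUILD WITH (ONLINE = ON);  -- keeps the table available during the rebuild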
Appreciate you having helped today, Gail. 🙂
June 25, 2012 at 4:42 am
I'm pretty sure that doing a shrink immediately after an index defrag is utterly pointless, because it will fragment the indexes again; check the index fragmentation after you've done the shrink!
June 25, 2012 at 4:59 am
paul.knibbs (6/25/2012)
I'm pretty sure that doing a shrink immediately after an index defrag is utterly pointless, because it will fragment the indexes again; check the index fragmentation after you've done the shrink!
Only if shrinking the data file. Shrinking the log file, as the OP was planning, doesn't fragment indexes; it can just lead to log fragmentation.
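A quick way to check for that is to look at the number of virtual log files in the log, e.g.:

DBCC LOGINFO;  -- one row per VLF; a count in the thousands suggests a badly fragmented log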
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability