February 29, 2012 at 10:00 am
Kwisatz78 (2/29/2012)
How is that? Have I just explicitly stated what it was implicitly doing? None of them are nested.
I must be missing something then. I was pretty sure I saw an extra BEGIN TRANSACTION/COMMIT pair around all the code you rewrote. Sorry.
February 29, 2012 at 10:02 am
Ahh no worries, at least it's not me going mad, it's been a long day 😛
February 29, 2012 at 10:06 am
Kwisatz78 (2/29/2012)
Ahh no worries, at least it's not me going mad, it's been a long day 😛
Please read the article I suggested. It may be for deletes but the concept can be used for updates as well.
February 29, 2012 at 10:08 am
Will do
February 29, 2012 at 3:40 pm
Even in the simple recovery model, you cannot guarantee that the log will be reused just by keeping transactions small, as it depends on other factors as well. If you have other long-running transactions running simultaneously, these may prevent VLFs from being reused. A VLF is marked as available for reuse when a checkpoint occurs and no open transaction has data stored in that particular VLF.
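Since log reuse depends on those other factors, it can help to check what is actually holding the log up. One quick way is to query `sys.databases` (the database name below is a placeholder, not from the original posts):

```sql
-- log_reuse_wait_desc shows why the log cannot currently be truncated,
-- e.g. ACTIVE_TRANSACTION, LOG_BACKUP, or NOTHING when it can be reused.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'YourDatabase';  -- placeholder: substitute your database name
```

If this reports ACTIVE_TRANSACTION, an open transaction is pinning VLFs, exactly as described above.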
March 1, 2012 at 1:28 am
Matt Miller (#4) (2/29/2012)
Correct, you do log everything, but the truncate happens immediately after commit in simple mode, so small batches roughly mean no log growth in that scenario.
Actually, the truncate happens at the next checkpoint. But I take your point - the log is generally truncated much more often in simple mode than in full.
So the tradeoff ultimately comes down to:
- leave the DB in the full recovery model (recommended), which means your log file might grow by quite a bit (even with multiple batches), or
- keep your log files small by switching recovery models and committing in small batches, which then breaks your log backup chain.
To me #1 is manageable (you could run lots of log backups during the updates to keep log growth under control), and #2 is an unacceptable risk, but log space does seem to come up over and over again as a big concern.
I wouldn't disagree with any of that (except maybe what constitutes an unacceptable risk is a business decision). My point was only that the original poster stated everything was being done in one transaction, and under those circumstances it doesn't matter what recovery model you use.
John
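For what it's worth, the "run lots of log backups during the updates" part of option #1 is just ordinary log backups issued between batches. A minimal sketch, assuming a hypothetical database name and backup path (both placeholders):

```sql
-- Run between batches to keep log growth under control in full recovery.
-- Each log backup frees the inactive portion of the log for reuse
-- without breaking the log backup chain.
BACKUP LOG YourDatabase
TO DISK = N'X:\Backups\YourDatabase_log.trn';
```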
March 1, 2012 at 1:30 am
Kwisatz78 (2/29/2012)
Here is what I have altered it to:
But your original post was about splitting the update into batches. Have you tried doing that?
John
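The batching referred to here is typically done with a `TOP (n)` loop, where each iteration commits on its own. A rough sketch of the pattern (the table, column, and batch size are made-up placeholders, not the original poster's code):

```sql
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- Each UPDATE runs as its own autocommit transaction, so only one
    -- batch's worth of changes is active in the log at any time.
    UPDATE TOP (10000) dbo.MyTable
    SET    StatusCol = 'Processed'
    WHERE  StatusCol = 'Pending';  -- predicate must exclude already-updated rows

    SET @rows = @@ROWCOUNT;        -- loop ends when no rows remain to update
END;
```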
March 1, 2012 at 2:33 am
No, I just broke the large transaction down into smaller ones, ran them individually, and kept an eye on log space and tempdb space; it was all fine in the end.
Thanks for all the input.