August 28, 2013 at 1:53 am
Never really got to grips with transaction log size in Simple Recovery mode. I originally had a MERGE statement within a stored procedure. When this ran it used a table of approximately 100 million records as the source, and the target was a table of the same size. When running this we found the transaction log file grew until we ran out of space.
The first solution
We split the stored procedure into two MERGE statements; the MERGEs do inserts and updates only. I split it into two in the hope that once the first statement completed, the second would reuse the space in the transaction log. So log size would increase while executing the first statement; when statement one completes and statement two starts, the transaction log would reuse the space it had assigned to statement one.
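Roughly the shape of the split, with table and column names made up purely for illustration (the real procedure differs):

-- First MERGE: inserts only
MERGE dbo.TargetTable AS tgt
USING dbo.SourceTable AS src
    ON tgt.BusinessKey = src.BusinessKey
WHEN NOT MATCHED BY TARGET THEN
    INSERT (BusinessKey, SomeValue)
    VALUES (src.BusinessKey, src.SomeValue);

-- Second MERGE: updates only
MERGE dbo.TargetTable AS tgt
USING dbo.SourceTable AS src
    ON tgt.BusinessKey = src.BusinessKey
WHEN MATCHED AND tgt.SomeValue <> src.SomeValue THEN
    UPDATE SET tgt.SomeValue = src.SomeValue;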
This did not appear to work; the log kept growing. I then looked at the settings for the log. It has Autogrowth enabled, with growth restricted to the default setting, which is incredibly large. So what went wrong?
Did the transaction log not reuse the available space that was ready for reuse because it was allowed to keep growing to a very very large size? Would it help if I restricted it to 40GB forcing it to reuse the space?
Did I need to explicitly BEGIN and COMMIT transactions for each statement?
Your help is gratefully appreciated. This is not the first time this has happened to me, and I would really like to be able to keep transaction log files to a reasonable size when in Simple Recovery mode.
Our next attempt will be to use transactions around the MERGE statements and a smaller log max size.
Thanks
E
August 28, 2013 at 2:13 am
Take a read through this: http://www.sqlservercentral.com/articles/Administration/64582/
Did the transaction log not reuse the available space that was ready for reuse because it was allowed to keep growing to a very very large size? Would it help if I restricted it to 40GB forcing it to reuse the space?
No. No.
If you restricted it to 40GB then in the situation you describe the second merge would have failed with "The transaction log is full..."
Did I need to explicitly BEGIN and COMMIT transactions for each statement?
No.
Our next attempt will be to use transactions around the MERGE statements and a smaller log max size.
Don't do that. There's very seldom a good reason to have a non-default max size for the log.
In simple recovery, the log is made reusable when a checkpoint runs. Not when a transaction commits. Checkpoints run automatically on a regular basis, or you can run one manually.
The second merge couldn't reuse the log space that the first one used because a checkpoint hadn't run in the meantime.
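If you do want the second statement to reuse the first one's space, a manual checkpoint between them is enough. As a rough sketch, with the two merges stubbed out as hypothetical procedures:

EXEC dbo.Merge_Inserts;   -- hypothetical proc containing the first MERGE
CHECKPOINT;               -- in simple recovery, this makes the log space used above reusable
EXEC dbo.Merge_Updates;   -- hypothetical proc containing the second MERGE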
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 28, 2013 at 2:41 am
Thanks.
So that's all my plans out the window; time to look at checkpoints. Would it be sensible to force checkpoints if they are not happening quickly enough?
I had the same issue a few years back and just started to shrink the log file manually, but that was a different warehouse with different challenges.
Also, under the simple recovery model, an automatic checkpoint is queued if the log becomes 70 percent full.
Under the simple recovery model, unless some factor is delaying log truncation, an automatic checkpoint truncates the unused section of the transaction log. In contrast, under the full and bulk-logged recovery models, once a log backup chain has been established, automatic checkpoints do not cause log truncation.
Having looked at this it seems to depend on the recovery interval setting on the server, which in our case is set to 0.
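In the meantime, the standard ways to watch the log while the proc runs are DBCC SQLPERF(LOGSPACE) and the log_reuse_wait_desc column in sys.databases, for example:

-- How full is each database's log right now?
DBCC SQLPERF(LOGSPACE);

-- Is anything preventing log reuse?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyWarehouse';   -- database name is just an example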
Thanks, will keep looking at this.
E
August 28, 2013 at 3:01 am
Ells (8/28/2013)
Thanks. So that's all my plans out the window; time to look at checkpoints. Would it be sensible to force checkpoints if they are not happening quickly enough?
Usually no. If you're doing batch deletes (deleting millions of rows in chunks), you would often run a checkpoint between each delete.
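Something along these lines, with the table, filter and batch size purely as examples:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.StagingTable   -- example table and batch size
    WHERE LoadDate < '2013-01-01';             -- example filter
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;   -- in simple recovery, lets the log space from this batch be reused
END;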
I had the same issue a few years back and just started to shrink the log file manually, but that was a different warehouse with different challenges.
Don't shrink the log. If it is getting to a specific size, it needs to be that size for normal activity. Shrinking's a waste of time; it'll just regrow to that size.
Having looked at this it seems to depend on the recovery interval setting on the server, which in our case is set to 0.
0 is default, don't change it without a good reason and a good understanding of the effects.
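If you want to check (not change) the current value, sys.configurations shows it:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'recovery interval (min)';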
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability