March 28, 2014 at 7:23 am
I am having an issue with this log file filling up every other day at the same time. I tried adding some extra space, but it fills up again. I am no expert, but this is what I have gathered.
From the start of the people load on 6/26 at just before 10 PM, until the drive filling up at 10:35:33 PM, the following took place:
1. 43114 rows (of about 22 not-particularly-large fields each) were submitted to COL:LOAD:People, in 100-row batches (this took about 3 minutes)
2. 6229 rows in COL:LOAD:People were modified by an escalation
3. 3189 of the 6229 modified COL:LOAD:People records went on, through workflow, to create or modify a COL:CORE:People record.
Now, what confuses me is that after #1 is completed, there should be a fresh start with the transaction log, no? And if #1 completed without blowing out the log, items #2 and #3 put together shouldn't put even a fraction of that load on the transaction log.
"He who learns for the sake of haughtiness, dies ignorant. He who learns only to talk, rather than to act, dies a hypocrite. He who learns for the mere sake of debating, dies irreligious. He who learns only to accumulate wealth, dies an atheist. And he who learns for the sake of action, dies a mystic."
March 28, 2014 at 7:44 am
What is the recovery model?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 28, 2014 at 7:44 am
Just so I understand your question: your log file is filling up, meaning the % free space is decreasing, at around the same time every other day? And you're asking whether the t-log should truncate after these batches?
Is your DB in full recovery mode? If you are, are you running T-Log backups? If so, how often?
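A quick way to check (just run this against the instance; no database name needed):

```sql
-- Lists the recovery model for every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases;
```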
March 28, 2014 at 8:01 am
Sure sounds like you're either not backing up transaction logs or not backing them up often enough. Read more about it here.
It's also possible that you have a broken replication set up or a broken mirroring set up. Both of these can cause the logs to grow despite log backups.
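You can see which of these it is without guessing. The `log_reuse_wait_desc` column tells you what is currently holding the log hostage:

```sql
-- Shows what is preventing log truncation for each database.
-- Typical values: LOG_BACKUP (take a log backup), REPLICATION,
-- DATABASE_MIRRORING, ACTIVE_TRANSACTION, or NOTHING.
SELECT name, log_reuse_wait_desc
FROM sys.databases;
```

If that says REPLICATION or DATABASE_MIRRORING, that's where to dig; LOG_BACKUP means the backups aren't keeping up.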
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 28, 2014 at 8:02 am
GilaMonster (3/28/2014)
What is the recovery model?
It was set to Full until this morning, but one of our DBAs switched the DB to the Simple recovery model.
March 28, 2014 at 8:06 am
Grant Fritchey (3/28/2014)
Sure sounds like you're either not backing up transaction logs or not backing them up often enough. Read more about it here. It's also possible that you have a broken replication set up or a broken mirroring set up. Both of these can cause the logs to grow despite log backups.
I set up a maintenance plan to take a T-Log backup every hour and it's been working just fine.
March 28, 2014 at 8:10 am
If it's simple recovery, the space used should be reused after a transaction has fully committed. If it's in full recovery, you need to run t-log backups for the log to truncate. Now that it's in simple recovery, you should be fine unless the log file is tiny (say, 1 MB).
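If you want to watch whether the space really is being reused during the load, this shows the log size and percentage used for every database:

```sql
-- Reports log file size (MB) and Log Space Used (%) per database;
-- run it before, during, and after the batch load to see reuse
DBCC SQLPERF(LOGSPACE);
```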
March 28, 2014 at 8:17 am
JoshDBGuy (3/28/2014)
If it's simple recovery, after a transaction has been fully committed the space used should be reused. If it's in full recovery, you need to run t-log backups for the log to truncate. Now that it's in simple recovery, you should be fine unless the log file is 1 MB.
I am running a t-log backup every hour, but this job fills up the log file in half an hour. I don't know whether taking a t-log backup every 15 minutes would prevent it.
We did put the DB into the Simple recovery model, but now we are risking data loss because we won't be able to perform point-in-time recovery.
March 28, 2014 at 8:18 am
New Born DBA (3/28/2014)
JoshDBGuy (3/28/2014)
If it's simple recovery, after a transaction has been fully committed the space used should be reused. If it's in full recovery, you need to run t-log backups for the log to truncate. Now that it's in simple recovery, you should be fine unless the log file is 1 MB.
I am running a t-log backup every hour, but this job fills up the log file in half an hour. I don't know whether taking a t-log backup every 15 minutes would prevent it.
We did put the DB into the Simple recovery model, but now we are risking data loss because we won't be able to perform point-in-time recovery.
In environments with a lot of transactions I prefer to run my T-Log backup every 10 or 15 minutes. If you did run the log backup every 15 minutes and it takes about 30 minutes for the batches to complete, you should be fine as long as you tweak your log file size and autogrowth settings accordingly. What's the initial size of your log file?
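As a sketch (database name and backup path are placeholders, not yours), the scheduled job would just run something like this every 15 minutes:

```sql
-- Hypothetical names/paths; in practice the maintenance plan or job
-- generates a unique file name per run so backups aren't overwritten
BACKUP LOG [YourDB]
TO DISK = N'D:\Backups\YourDB_log.trn'
WITH NAME = N'YourDB t-log backup';
```

Each successful log backup marks the inactive portion of the log as reusable, which is what keeps it from growing between batches.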
March 28, 2014 at 8:22 am
JoshDBGuy (3/28/2014)
New Born DBA (3/28/2014)
JoshDBGuy (3/28/2014)
If it's simple recovery, after a transaction has been fully committed the space used should be reused. If it's in full recovery, you need to run t-log backups for the log to truncate. Now that it's in simple recovery, you should be fine unless the log file is 1 MB.
I am running a t-log backup every hour, but this job fills up the log file in half an hour. I don't know whether taking a t-log backup every 15 minutes would prevent it.
We did put the DB into the Simple recovery model, but now we are risking data loss because we won't be able to perform point-in-time recovery.
In environments with a lot of transactions I prefer to run my T-Log backup every 10 or 15 minutes. If you did run the log backup every 15 minutes and it takes about 30 minutes for the batches to complete, you should be fine as long as you tweak your log file size and autogrowth settings accordingly. What's the initial size of your log file?
Unless it's just a huge transaction. In which case, you either have to find a way to reduce the size of the transaction, or you have to have enough space in the log to support it.
March 28, 2014 at 8:22 am
One thing I've found that will prevent the log from truncating is a hung process. Run DBCC OPENTRAN to see if there are any active transactions, because if one is hung it can prevent all the log records before that point from truncating.
Also run DBCC LOGINFO to show the status of your virtual log files (VLFs). Truncation only clears VLFs with a status of 0 (inactive); any with a status of 2 are still active and can't be overwritten for some reason. If you changed your database to the Simple recovery model, the status should drop to 0 once everything in the VLF has been committed, but if you have a hung transaction, or log records waiting on replication or on being copied to a mirrored DB, the status won't be 0 and that part of the log can't be truncated.
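For example, run these in the context of the problem database:

```sql
-- Reports the oldest active transaction in the current database, if any;
-- no output means nothing is holding the log open
DBCC OPENTRAN;

-- Returns one row per virtual log file; Status = 0 means reusable,
-- Status = 2 means the VLF is still active and can't be overwritten yet
DBCC LOGINFO;
```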
Hope this helps.
Sam
March 28, 2014 at 8:24 am
See the attachment.
March 28, 2014 at 8:34 am
New Born DBA (3/28/2014)
See the attachment.
Couple of issues.
Your database settings aren't optimal; I would set the growth to something closer to 1000 MB. A 1 MB growth increment will cause frequent autogrowth events, which can hurt performance. If your database is growing by quite a bit, you may want to expand the initial size of the DB as well.
Your log settings look a bit off too. I would set the initial size to 1000 MB and the growth increment to 1000 MB as well, unless you have severe space constraints.
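Something along these lines would do it (database and logical file names here are placeholders; check yours with `sys.database_files`):

```sql
-- Set a sensible initial size and a fixed-MB growth increment
-- instead of the 1 MB default; names are hypothetical
ALTER DATABASE [YourDB]
MODIFY FILE (NAME = N'YourDB_log', SIZE = 1000MB, FILEGROWTH = 1000MB);
```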
March 28, 2014 at 8:35 am
Grant Fritchey (3/28/2014)
Unless it's just a huge transaction. In which case, you either have to find a way to reduce the size of the transaction, or you have to have enough space in the log to support it.
Exactly. This is the catch.
If you have a huge transaction, break it into smaller chunks. Otherwise your log will fill up.
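A rough sketch of the chunking pattern (table name and filter are made up for illustration): each batch commits on its own, so the log only ever has to hold one batch's worth of records instead of the whole operation.

```sql
-- Delete in 10,000-row batches so each iteration is its own small
-- transaction; in full recovery, log backups between batches can
-- then reclaim the space
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.StagingTable
    WHERE LoadDate < '2014-01-01';
    SET @rows = @@ROWCOUNT;
END
```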
--
SQLBuddy
March 28, 2014 at 8:36 am
Grant Fritchey (3/28/2014)
JoshDBGuy (3/28/2014)
New Born DBA (3/28/2014)
JoshDBGuy (3/28/2014)
If it's simple recovery, after a transaction has been fully committed the space used should be reused. If it's in full recovery, you need to run t-log backups for the log to truncate. Now that it's in simple recovery, you should be fine unless the log file is 1 MB.
I am running a t-log backup every hour, but this job fills up the log file in half an hour. I don't know whether taking a t-log backup every 15 minutes would prevent it.
We did put the DB into the Simple recovery model, but now we are risking data loss because we won't be able to perform point-in-time recovery.
In environments with a lot of transactions I prefer to run my T-Log backup every 10 or 15 minutes. If you did run the log backup every 15 minutes and it takes about 30 minutes for the batches to complete, you should be fine as long as you tweak your log file size and autogrowth settings accordingly. What's the initial size of your log file?
Unless it's just a huge transaction. In which case, you either have to find a way to reduce the size of the transaction, or you have to have enough space in the log to support it.
True.