July 21, 2012 at 1:34 pm
Thanks Gail. Through the DMVs, I think I found the offending query. It's deleting 4 million rows from a table. But why would it fill up the log while the db is in the simple recovery model? Even when I do a select, it fills up the log. Any ideas/suggestions?
July 21, 2012 at 2:27 pm
A select won't fill the log. Selects aren't logged; data modifications are.
As for the delete, the entirety of that delete transaction has to be logged. Calculate how much space those 4 million rows take; that's the minimum the delete will need in the log (double it for safety, since the log also reserves space for rollback).
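One common way around that space requirement (not mentioned in the thread, and the table and column names below are hypothetical) is to delete in batches, so the log only ever needs to hold one batch's worth of records; in the simple recovery model, automatic checkpoints can mark the log reusable between batches. A sketch:

```sql
-- Hypothetical sketch: delete millions of rows in small batches instead
-- of one huge transaction. Each DELETE is its own transaction, so in
-- simple recovery the checkpoint process can truncate the log between
-- batches rather than logging all 4 million rows at once.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.BigTable                      -- hypothetical table
    WHERE ArchiveDate < '2012-01-01';      -- hypothetical predicate

    SET @rows = @@ROWCOUNT;                -- 0 when nothing left to delete
END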
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 22, 2012 at 12:07 am
It looks like you have already tried most of the usual steps by now.
You mentioned that the log file growth is unrestricted and that you have ample space on the drive, but from the perspective of resolving the problem immediately, could you consider adding another log file on the same or another drive if the issue has reached show-stopper proportions?
I mean, it may not hurt to throw in the kitchen sink in this case. 🙂
July 22, 2012 at 2:18 am
Actually, I removed all the records and that helped. My VLF count is high and that's hurting performance. I did a shrink file as well. What should I do to improve performance? What would the ideal settings (size, maxsize, growth) be for a 20 GB db?
July 22, 2012 at 2:49 am
OK, I think there is no rule of thumb here; it depends more on how the application works. In some cases you may also have to execute ad-hoc DML statements to purge old data, so that has to be taken into consideration as well.
The ideal approach would be to pre-size your log file (I would say no more than 50% of your data file size),
and then set up alerts to notify you when log file usage crosses a certain threshold, say 70%.
Also, work closely with the application team and ask them to keep transactions short and commit more frequently, to prevent runaway log growth.
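As a sketch of that monitoring idea, current log size and usage can be checked like this (the 70% figure is just the example threshold from above):

```sql
-- Report current log size and percent of log space used for every database.
DBCC SQLPERF (LOGSPACE);

-- On SQL Server 2012 and later, the same information is exposed per
-- database through a DMV, which is easier to filter against a threshold:
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent           AS used_pct
FROM sys.dm_db_log_space_usage
WHERE used_log_space_in_percent > 70;      -- example alert threshold
```

An actual alert would typically wrap a query like the second one in a SQL Agent job or a performance-condition alert.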
July 22, 2012 at 7:52 am
Should we be concerned about the number of VLFs? There are about 735 in there. Would adding another log file help?
July 22, 2012 at 8:01 am
SQL_Surfer (7/22/2012)
Should we be concerned about number of VLFS? There are about 735 in there? Would adding another Log file help?
No and no.
Worry when you have thousands or tens of thousands of VLFs
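For reference, the VLF count for the current database can be checked with DBCC LOGINFO (undocumented but widely used; each row it returns describes one virtual log file):

```sql
-- Each row returned by DBCC LOGINFO is one virtual log file (VLF),
-- so the number of rows returned is the VLF count for the current database.
USE MyDatabase;   -- hypothetical database name
DBCC LOGINFO;
```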
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 22, 2012 at 8:05 am
Gail, thank you so much for your help on this thread. You rock! You are a life saver.
July 22, 2012 at 8:19 am
One thing I didn't understand, though: if transactions get logged in the simple recovery model as well, then what is the difference between the simple and full recovery models?
July 22, 2012 at 8:54 am
The logging mechanism is the same in both the full and simple recovery models; it is a bit different in the bulk-logged recovery model for some operations, like index creation/rebuilds.
The only difference with the simple recovery model is that a commit operation flushes/truncates the committed transactions from the log file, which does not happen in the full recovery model.
July 22, 2012 at 9:07 am
SQL_Surfer (7/22/2012)
One thing I didn't understand, though: if transactions get logged in the simple recovery model as well, then what is the difference between the simple and full recovery models?
All operations are logged in all recovery models. The only differences between the recovery models are how much gets logged for certain operations (like bulk inserts or index rebuilds) and when the log is marked as reusable.
Please read through this - Managing Transaction Logs - and this: http://www.sqlservercentral.com/articles/Transaction+Log/72488/
It's covered in a fair bit of detail in those.
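A quick way to confirm which recovery model a database is using, and to change it (the database name below is hypothetical):

```sql
-- Check the recovery model of every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a (hypothetical) database to the simple recovery model.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
```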
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 22, 2012 at 9:09 am
Vikrant S Patil (7/22/2012)
The logging mechanism is the same in both the full and simple recovery models; it is a bit different in the bulk-logged recovery model for some operations, like index creation/rebuilds.
Simple is the same as bulk-logged regarding how much gets logged. Bulk-logged is the same as full regarding when the log gets marked reusable.
The only difference with the simple recovery model is that a commit operation flushes/truncates the committed transactions from the log file, which does not happen in the full recovery model.
No it does not. Commit does not truncate the log in any recovery model. What truncates (marks as reusable) the log is a checkpoint in simple recovery and a log backup in full or bulk-logged.
Have a read through the articles I linked above.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 22, 2012 at 9:43 am
Agreed, and that is why we had the truncate log on checkpoint option in SQL 2000.
In my experience, though, there is usually no need to perform a checkpoint explicitly to clear the log after data is committed in the simple recovery model.
A commit operation usually truncates the log almost immediately (I think), because on most SQL Server instances (if not all) the checkpoint process runs far too often for you to notice a delay between a commit and a checkpoint. But I guess we may see a delay in the case of an unusually large transaction. Thanks for the explanation; I stand corrected.
I have to admit I was unaware that simple is the same as bulk-logged; I assumed it was the same as full, where all transactions are fully logged. Thank you for clearing up that misconception as well.
July 22, 2012 at 10:07 am
Vikrant S Patil (7/22/2012)
Agreed, and that is why we had the truncate log on checkpoint option in SQL 2000.
That option is a leftover from SQL 7 or earlier; it's not needed and shouldn't be used. Its replacement is the simple recovery model.
In my experience, though, there is usually no need to perform a checkpoint explicitly to clear the log after data is committed in the simple recovery model.
No, you don't. The automatic checkpoints run regularly and they truncate the log just as well as a manual checkpoint does. There should be no need for manual checkpoints in 99% of scenarios.
A commit operation usually truncates the log almost immediately (I think)
A commit does not truncate the log. Ever.
Checkpoint is what truncates the log when it runs (in simple recovery model). Nothing to do with the size of the transaction, just when checkpoint runs.
The only effect of an active transaction is that log records that are part of an open transaction cannot be truncated, nor can any later log records.
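When the log is not being reused as expected, sys.databases reports what is holding it up, and a manual CHECKPOINT can be issued to observe the truncation behaviour described above (in simple recovery only). A sketch:

```sql
-- What is currently preventing log truncation for each database?
-- Typical values: NOTHING, CHECKPOINT, LOG_BACKUP, ACTIVE_TRANSACTION.
SELECT name, log_reuse_wait_desc
FROM sys.databases;

-- In simple recovery, a checkpoint (manual here, normally automatic) is
-- what marks the committed portion of the log as reusable.
CHECKPOINT;
```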
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 22, 2012 at 7:40 pm
I've read this thread twice now and I'm not seeing it (might be due to a lack of caffeine). What was the final solution to this problem?
--Jeff Moden
Change is inevitable... Change for the better is not.