May 8, 2012 at 6:57 am
Please note that Eager Writes apply only to minimally logged operations in bulk-logged or simple recovery, and they write only the minimally logged pages to the data file.
Understood. Mulling it over, this seems to make sense to me as a quick and dirty mental model for bulk logged operations:
We always need "before" and "after" images in the user database to support recovery. For minimally logged operations in bulk-logged or simple recovery, the "after" image exists only in the data file, hence the absolute need for the Eager Write. The "before" image is assumed to consist of empty pages (allocation updates aside), hence the restriction on what can and cannot be minimally logged. So roll forward is based on the knowledge that the data is in the data file, and rollback is only possible because we know we are rolling back to empty pages.
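To make that model concrete, here is a minimal sketch of an operation that qualifies for minimal logging (the database and table names are hypothetical):

-- Assumes a hypothetical database SalesDB that we can switch to bulk-logged.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- SELECT ... INTO creates a brand-new table, so the "before" image is
-- empty pages: only the allocations are logged, and the eager-write
-- process writes the new pages directly to the data file before commit.
SELECT *
INTO dbo.SalesStaging
FROM dbo.Sales;

ALTER DATABASE SalesDB SET RECOVERY FULL;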
May 8, 2012 at 1:38 pm
Great article!
May 8, 2012 at 8:00 pm
Thanks Gail, great article and explanations. Clear and concise. Appreciate you clearing up that question for me.
May 13, 2012 at 5:30 am
Hi Gail,
Thanks for the great article.
One question:
One side effect of this requirement that both log records and modified data pages are written to disk before the transaction commits is that it may actually result in the minimally logged operation being slower than a regular transaction if the data file is not able to handle the large volume of writes.
I fail to fully understand why minimally logged operations can be slower. With the eager write process, should it not ease the burden on the transaction log by writing the dirty minimally logged records only to the data file? By splitting the writes, as opposed to the fully logged model, should it not be faster, especially if the log and data are on separate drives? Is flushing to the data files slower than writing to the log if both are sequential? I can understand it if the writes are random, but are minimally logged operations not all sequential in nature when writing to the data files?
Thanks for this.
May 13, 2012 at 8:27 am
Hi, very nice article.
Since I am a beginner, I have a question:
When we switch the database recovery model from full recovery to bulk-logged or vice versa, do any changes happen to the existing log file of the database too (so as to align with the database recovery model)?
May 14, 2012 at 4:58 am
Great reference article.
M&M
May 14, 2012 at 11:00 am
stevro (5/13/2012)
I fail to fully understand why minimally logged operations can be slower. With the eager write process, should it not ease the burden on the transaction log by writing the dirty minimally logged records only to the data file? By splitting the writes, as opposed to the fully logged model, should it not be faster, especially if the log and data are on separate drives? Is flushing to the data files slower than writing to the log if both are sequential? I can understand it if the writes are random, but are minimally logged operations not all sequential in nature when writing to the data files?
With a fully logged operation, only the log records have to be written to disk before the transaction is complete; the data pages can follow later.
If you've got a database where the data file is optimised for reading (say RAID 5 and read caches) and a transaction log optimised for writing (say RAID 10 and write caches), then it is possible that writing X GB of data to the data file will be slower than writing X GB of data to the log file would have been, and the transaction commit will have to wait for the write of the data to the data file to finish, where normally it only has to wait for the write to the log.
I didn't say it will be slower, but it can be; maybe not usually, but it is a possibility.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 14, 2012 at 12:30 pm
deepak_19888 (5/13/2012)
When we switch the database recovery model from full recovery to bulk-logged or vice versa, do any changes happen to the existing log file of the database too (so as to align with the database recovery model)?
The change is logged, just like any other change to the database, but there's nothing that you would need to do other than setting the recovery model option.
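In practice the switch is a single command; a quick sketch (the database name is hypothetical):

-- The recovery model change is itself logged like any other change;
-- nothing else needs to be done to the log file.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- Verify the current recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDB';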
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 15, 2012 at 2:32 am
If you've got a database where the data file is optimised for reading (say RAID 5 and read caches) and a transaction log optimised for writing (say RAID 10 and write caches), then it is possible that writing X GB of data to the data file will be slower than writing X GB of data to the log file would have been
Thanks Gail. This makes sense.
May 24, 2012 at 12:05 pm
Truly a briefcase item for me.
A complex topic explained in such a simple way (at least for me).
Thanks Gail
Should we live our life in bulk-logged or full recovery mode? :hehe:
----------
Ashish
May 25, 2012 at 5:11 am
crazy4sql (5/24/2012)
Should we live our life in bulk-logged or full recovery mode? :hehe:
From the article:
Guidelines for Bulk-logged Recovery
The main guideline for bulk-logged recovery is: ‘as short a time as possible’.
Due to the implications for point-in-time restores and restoring to point of failure, if a database is switched to bulk-logged recovery in order to minimally log some operations, it should stay in bulk-logged recovery for the shortest time possible.
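A sketch of that pattern (backup paths and database name hypothetical): bracket the bulk operation with log backups so the window without point-in-time restore is as small as possible.

-- Log backup before the switch: everything up to here remains
-- point-in-time restorable.
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_before.trn';

ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- ... the bulk load / index rebuild goes here ...

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Log backup immediately after switching back: this closes the window
-- in which the log backup contains minimally logged extents.
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_after.trn';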
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 14, 2013 at 8:55 am
Thank you!
I was redoing everything you have done in this example, but when I restore the log with CONTINUE_AFTER_ERROR I get all my data back correctly. I have done this multiple times and I have never got the message 'Attempt to fetch logical page in database failed'.
I cannot understand why this is happening, because when I did the tail-log backup I had already deleted the mdf file, so there isn't anywhere SQL Server could have got the data from.
Can you please help me?
Thank you
January 14, 2013 at 8:59 am
Without all your code, I can't say. Maybe the operations weren't minimally logged (existing table, full recovery, or any of the other reasons for not minimally logging).
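One way to check is to look at the log records the operation generated; a rough sketch (fn_dblog is undocumented, and the table name here is hypothetical):

-- A minimally logged load produces mostly allocation records for the
-- target table, not one LOP_INSERT_ROWS record per row. Run this in
-- the test database straight after the load, before any log backup.
SELECT [Operation], COUNT(*) AS LogRecords
FROM fn_dblog(NULL, NULL)
WHERE [AllocUnitName] LIKE '%SalesStaging%'
GROUP BY [Operation]
ORDER BY LogRecords DESC;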
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 14, 2013 at 11:09 am
Thank you for your reply. As soon as I get to my office I will get all the code I have used and I will post it here.
GilaMonster (1/14/2013)
Without all your code, I can't say. Maybe the operations weren't minimally logged (existing table, full recovery or any of the other reasons for not minimally logging)
January 14, 2013 at 12:23 pm
Just go over it a couple of times and make sure you are getting minimal logging. I ran that particular test a number of times, as did a friend, and while we got a variety of different errors, we did not get the table back intact after any of the tests.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability