May 18, 2005 at 1:35 pm
If a database's recovery model is set to Simple, how is the transaction log size kept under control and prevented from growing and growing? I know that with the Full recovery model the transaction log file is kept in check by transaction log backups occurring regularly.
Can someone give me a heads up on this?
Juanita
May 18, 2005 at 1:52 pm
In the Simple recovery model, as the transactions are written to the database, they are deleted or "truncated" from the transaction log. This happens each time a checkpoint is issued, which is done on a top secret schedule that only Microsoft knows about... ok, not really, but let's just say it's not necessarily the moment you press Enter to submit your update/delete/insert.
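If you want to watch this happen yourself, here is a minimal sketch you could run in Query Analyzer (the database name MyDB is just a placeholder, and normally SQL Server issues checkpoints on its own):

ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO
USE MyDB
GO
CHECKPOINT              -- forces dirty pages to disk; in Simple recovery the inactive log can then be reused
GO
DBCC SQLPERF(LOGSPACE)  -- shows log size and percent used for every database
GO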
Steve
May 18, 2005 at 1:56 pm
Hi. So then the tlog should not get to be this enormous size? Under the file properties for the tlog, should the option to automatically grow the file be checked, or does it not matter? I guess that is where I'm confused too: does it matter what is set in the file properties for a tlog when using the Simple recovery model?
May 18, 2005 at 2:23 pm
You might still want to check the option for automatic growth. Say you are updating a table with 1 million rows: as Steve mentioned, the transaction does get written to the log file, but it is only truncated after a checkpoint is issued. So you still have to have enough log space for your biggest single transaction.
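As a rough sketch of how you might pre-size the log and set a sensible growth increment (the database name MyDB and logical file name MyDB_Log are placeholders; run sp_helpfile in the database to find the real logical name):

ALTER DATABASE MyDB
MODIFY FILE
(
    NAME = MyDB_Log,
    SIZE = 500MB,          -- pre-size the log to hold the largest expected transaction
    FILEGROWTH = 100MB,    -- grow in fixed increments rather than a percentage
    MAXSIZE = 2000MB       -- optional ceiling so a runaway transaction cannot fill the drive
)
GO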
May 18, 2005 at 2:34 pm
Okay, I got it. So the tlog is kept under control by checkpoints issued by SQL Server.
Thanks everyone !
Juanita
May 18, 2005 at 2:58 pm
Well, almost. In the case of a single, long logged transaction involving many rows, the log may fill up even under the Simple recovery model.
I had to help out with a case this week where someone had a database in the Simple recovery model and he had to delete a lot of rows and re-import a lot of rows. The log was set to autogrow without a limit. The job returned the error that the transaction log was full. I checked his drive and it (the drive) was full because the log had grown so much, and it happened during the Delete operation. I helped him truncate and shrink the log and recommended using the minimally logged TRUNCATE TABLE instead of a fully logged DELETE when the whole table has to go. And if he has to Delete with a WHERE clause, I recommended limiting the rowcount to a reasonable number, as has been described on this site several times.
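For what it's worth, the batching pattern looks roughly like this on SQL 2000/2005 (the table, column and values here are made up purely for illustration):

SET ROWCOUNT 10000                  -- limit each DELETE to 10,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.StagingRows
    WHERE LoadDate < '20050101'
    IF @@ROWCOUNT = 0 BREAK         -- nothing left to delete
    CHECKPOINT                      -- in Simple recovery the inactive portion of the log can now be reused
END
SET ROWCOUNT 0                      -- always reset

-- and if every row in the table has to go, TRUNCATE TABLE is minimally logged and much cheaper:
TRUNCATE TABLE dbo.StagingRows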
Yelena
Regards, Yelena Varsha
May 18, 2005 at 3:27 pm
Good point !! Thanks !!
May 18, 2005 at 3:27 pm
Juanita,
In my opinion, Microsoft messed up when they named it the "transaction log", because that isn't really what it is. It's a transaction facilitator. When you issue an update, for example, the change doesn't go straight to the database file; it goes to the transaction log, where it waits until a checkpoint is issued. The checkpoint may be issued when the transaction completes, when you issue the CHECKPOINT command in Query Analyzer, or at some arbitrary time that SQL Server decides on (what I referred to previously as a "top secret schedule"). When the checkpoint is issued, the update is written to the database file and then truncated from the transaction log. As Yelena and I have said, in the Simple recovery model (formerly known as "Truncate Log on Checkpoint"), the log is truncated whenever a checkpoint is issued. In the Full recovery model, the log is only truncated when you explicitly truncate it (BACKUP LOG ... WITH TRUNCATE_ONLY) or when a transaction log backup occurs, which truncates the inactive portion of the log as part of the backup.
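To put those two truncation paths side by side, something like this (MyDB and the backup path are just placeholders, and a full database backup must already exist before the first log backup):

-- Simple recovery: the log is truncated at checkpoints
ALTER DATABASE MyDB SET RECOVERY SIMPLE
CHECKPOINT

-- Full recovery: the log is only truncated by a log backup (or an explicit BACKUP LOG ... WITH TRUNCATE_ONLY)
ALTER DATABASE MyDB SET RECOVERY FULL
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'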
Also, as Yelena described, a checkpoint will NOT truncate the log in the middle of an open transaction. In the scenario she described, the mass delete was a single transaction, which caused the log to grow out of control. By breaking the delete into multiple, smaller transactions using ROWCOUNT, checkpoints are allowed to do their work, truncating the log as the delete progresses.
Steve