January 23, 2010 at 9:02 pm
Hi,
I have set the database to the SIMPLE recovery model and I have restricted my log file to 6MB.
My table has 1 GB of data. I purposely restricted the log file to 6MB to check the behavior of the SIMPLE recovery model.
MSDN says:
Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space
I thought that with the SIMPLE recovery model, when the log file reaches 70-80% full, the LOG file is automatically overwritten, but that did not happen and the result was a
LOG FULL error.
I ran the statement below:
DELETE FROM <TNAME> -- Contains 1 GB of data, and the before image cannot fit in the .LDF file, which is restricted to 6MB growth.
I feel that, irrespective of the recovery model, the LDF file keeps growing until the transaction completes, and only then can that space be reused by other transactions.
Am I right?
I asked this question because I thought that when the log file reached 70% full, the .LDF would be overwritten, but that never happened. The LDF just grows and grows until the transaction completes.
Also, I think I understand why SQL Server is trying to expand the log file: the user may roll back the transaction at any time, and that could be why the log file keeps increasing and doesn't get overwritten.
Correct me if I am wrong.
My question is: I thought the SIMPLE recovery model would save me when a huge transaction eats my entire drive space and eventually ends in a LOG FULL error, but it doesn't.
How do I deal with such scenarios? How can we avoid these kinds of errors in the future (i.e. running a transaction for 6-8 hours and ending up with LOG FULL errors)?
I feel one solution is keeping transactions short. Another is shrinking the log file/database. What other solution is there, so that I can estimate how much space a transaction will take and move the .LDF to a drive where that amount of disk space is available?
Can I use sp_spaceused 'tablename' to get the data size and estimate the log size? Is that a way to fix the LOG FULL error problems?
Thanks in Advance.
January 23, 2010 at 10:40 pm
mahesh.vsp (1/23/2010)
I feel that, irrespective of the recovery model, the LDF file keeps growing until the transaction completes, and only then can that space be reused by other transactions.
That's essentially correct. That's why limiting the log file to only 6MB on a 1 GB database is a very bad thing to do.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 24, 2010 at 12:53 am
mahesh.vsp (1/23/2010)
I thought that with the SIMPLE recovery model, when the log file reaches 70-80% full, the LOG file is automatically overwritten, but that did not happen and the result was a LOG FULL error.
No, in Simple recovery, when the log reaches 70% a checkpoint will run. That will mark inactive portions of the log as reusable. If there's an open transaction (like that large delete) then the parts of the log being used for that delete cannot be made inactive until the transaction is complete.
The entire delete must be logged completely and all of those log records retained until the end of the delete, otherwise it would not be possible to roll the transaction back, and that would have nasty consequences.
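If you want to see what's going on while the delete runs, a couple of quick checks (just a sketch; run them from the database in question):

-- How full is the log file of each database on the instance?
DBCC SQLPERF(LOGSPACE);

-- Is an open transaction keeping log records active in the current database?
DBCC OPENTRAN;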
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 24, 2010 at 10:11 pm
For completeness' sake:
In the long term (apart from the current situation with the long delete transaction), the log size will be maintained reasonably small by regular FULL backups (if the recovery model is Simple) or by transaction log backups (if the recovery model is Full).
Shrinking the log file is not recommended except for one-off situations (such as the one being discussed here) where a one-off action causes the log file to grow excessively.
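For example, a regularly scheduled log backup along these lines is what keeps the log small under the Full recovery model (just a sketch; the database name MyDB and the backup path are placeholders):

-- Back up the transaction log; the backed-up portion of the log becomes reusable
BACKUP LOG MyDB TO DISK = N'C:\Backups\MyDB_log.trn';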
January 24, 2010 at 11:20 pm
Ol'SureHand (1/24/2010)
the log size will be maintained reasonably small by regular FULL backups (if the recovery model is Simple)
Errr, no.
What allows for log space reuse in Simple recovery is the Checkpoint operation, not a backup. Yes, afaik, running a backup forces a checkpoint, but it's still the checkpoint that does the work on the log, not the backup.
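You can fire one manually if you want to watch it happen (a minimal sketch; MyDB is a placeholder database name):

USE MyDB;
-- Force a checkpoint; under Simple recovery this marks the inactive
-- portion of the log as reusable (it can't help while a transaction is still open)
CHECKPOINT;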
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 25, 2010 at 12:05 am
If you're just deleting all the data, you could Truncate Table instead, although obviously if you've got triggers or foreign keys that's a no-go.
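Just a sketch, reusing the bigtable name from the loop below:

-- Deallocates the table's pages instead of logging each deleted row,
-- so it barely touches the transaction log
TRUNCATE TABLE bigtable;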
One thing I've done in the past, purely to avoid the pain of huge transactions, is to loop around deleting in blocks of 100 rows or something.
-- Loop until the table is empty, deleting 100 rows at a time;
-- each short transaction lets the log space be reused between batches
While Exists(Select * From bigtable)
Begin
    Delete bigtable
    Where id In (Select Top 100 id From bigtable)
End
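On SQL Server 2005 and later the same idea can be written more directly with DELETE TOP (a sketch along the same lines):

While 1 = 1
Begin
    -- Remove 100 rows per iteration; each batch is its own short transaction
    Delete Top (100) From bigtable
    If @@ROWCOUNT = 0 Break
End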
Don't know if that helps. 🙂