April 19, 2010 at 11:49 am
Hello Everyone,
I have a SQL Server 2008 database that is in Simple Recovery mode. I am doing an update operation on a table which involves about 100 million rows. This causes my log file to grow to about 40 GB or more. Because my database is in Simple Recovery mode, of course I can't take a log backup to truncate the log and reduce the log file size.
Due to this bulk operation the log file grows to such a big size that it causes a drive space issue. Is there any option to avoid this, or do I have to manually shrink the log file after each such bulk operation?
Thanks in advance.
April 19, 2010 at 11:55 am
Yes, a file shrink is a manual operation.
I recommend you split the single update statement into multiple smaller ones by adding an appropriate WHERE clause.
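If you do end up shrinking the log by hand, DBCC SHRINKFILE is the command. A quick sketch, assuming the database is called MyDB and that you look up the logical log file name first (MyDB_log below is just a placeholder):
USE MyDB;
-- find the logical name and current size (in MB) of each file
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
-- shrink the log file back down, e.g. to 1024 MB
DBCC SHRINKFILE (MyDB_log, 1024);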
April 19, 2010 at 11:56 am
Do your updates in batches; after each batch, the simple recovery model will checkpoint, which will keep the log from growing out of control.
The key piece is that you need a WHERE clause in the update to prevent the same rows from being reprocessed over and over:
SET ROWCOUNT 50000;                 -- cap each UPDATE at 50,000 rows
WHILE 1 = 1
BEGIN
    UPDATE dbo.myTable
    SET MyField = 'SomeValue'
    WHERE MyField = 'MyCondition';  -- the condition must stop matching once a row is updated
    IF @@ROWCOUNT = 0
        BREAK;                      -- nothing left to update
END
SET ROWCOUNT 0;                     -- remove the row cap
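If you'd rather not rely on SET ROWCOUNT (it's deprecated for affecting data-modification statements), the same pattern can be written with UPDATE TOP; just a sketch using the same placeholder names:
WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.myTable
    SET MyField = 'SomeValue'
    WHERE MyField = 'MyCondition';  -- same rule: the condition must stop matching once a row is updated
    IF @@ROWCOUNT = 0
        BREAK;
END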
Lowell
April 19, 2010 at 12:01 pm
Thanks Vidya and Lowell for your quick replies.
Due to the complexity of the update operation, I am not able to split it into batches. Would it be a good option if I put my database into the Bulk-Logged recovery model for the time being?
April 19, 2010 at 12:04 pm
No. Operations that are minimally logged in bulk-logged are minimally logged in simple recovery too. In bulk-logged you additionally have to worry about log backups.
Besides, updates are not minimally logged operations. Check BoL for the few operations that are.
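If you want to confirm what the database is actually set to, and what is currently keeping the log from being reused, sys.databases will tell you (MyDB is a placeholder name):
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';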
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 19, 2010 at 12:04 pm
Changing the model won't make any difference to the growth, I think; millions of rows of logging to undo the operation, just in case it fails, will occur regardless. One big operation = big growth.
You can shrink afterwards, but if the drive runs out of space, the update will error out and roll back.
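To watch how much of the log is actually in use while the operation runs, before deciding whether to shrink, DBCC SQLPERF is handy:
-- reports log file size and percent used for every database on the instance
DBCC SQLPERF (LOGSPACE);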
Lowell
April 19, 2010 at 12:07 pm
Okay, let me see what I can do here... Thanks everyone for the help...