Database Simple Recovery Model issue

  • Hello Everyone,

    I have a SQL Server 2008 database that is in the Simple recovery model. I am running an update operation on a table in which about 100 million rows need to be updated. This causes my log file to grow to about 40 GB or more. Because my database is in the Simple recovery model, of course I can't take a log backup, which would truncate the log and reduce the log file size.

    Because of this bulk operation the log file grows so large that it causes drive space issues. Is there any option to avoid this, or do I have to manually shrink the log file after each such bulk operation?

    Thanks in advance.

    Yes, a file shrink is a manual operation.

    I'd recommend splitting the single update statement into multiple batches by adding an appropriate WHERE clause to each.

    do your updates in batches; after each batch, the simple recovery model will checkpoint and allow the log space to be reused, which prevents the log from growing out of control;

    the key piece is that you need a WHERE clause in the update to prevent the same rows from being reprocessed over and over;

    SET ROWCOUNT 50000;

    WHILE 1 = 1
    BEGIN
        UPDATE dbo.myTable
        SET MyField = SomeValue
        -- the WHERE must exclude rows that have already been updated,
        -- otherwise every pass reprocesses the same rows
        WHERE MyField <> SomeValue;

        IF @@ROWCOUNT = 0
            BREAK;
    END

    SET ROWCOUNT 0;
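
    (the table and column names above are just placeholders for your own; also note that SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE, so on 2008 you can write the same loop with UPDATE TOP (n) instead, roughly like this:)

    WHILE 1 = 1
    BEGIN
        -- update at most 50,000 rows per pass
        UPDATE TOP (50000) dbo.myTable
        SET MyField = SomeValue
        WHERE MyField <> SomeValue;

        IF @@ROWCOUNT = 0
            BREAK;

        -- optional: issue a checkpoint so the log space used by this batch can be reused
        CHECKPOINT;
    END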

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

    Thanks Vidya and Lowell for your quick replies.

    Due to the complexity of the update operation, I am not able to split it into batches. Would it be a good option to switch my database to the Bulk-Logged recovery model for the time being?

    No. Operations that are minimally logged under bulk-logged are minimally logged under simple recovery too. Under bulk-logged you additionally have to worry about log backups.

    Besides, updates are not minimally logged operations. Check BoL for the few operations that are.
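
    If you want to double-check which model the database is actually using, the sys.databases catalog view shows it (MyDatabase below is a placeholder for your database name):

    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'MyDatabase';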

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • changing the recovery model won't make any difference to the growth, I think; millions of rows' worth of logging to undo the operation in case it fails will be written regardless. one big operation = big log growth;

    you can shrink the log afterwards, but if the drive runs out of space mid-update, the statement will error out and roll back.
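
    if you do shrink afterwards, it's along these lines (MyDB_log is a placeholder; look up your log file's actual logical name in sys.database_files first):

    -- find the logical name of the log file
    SELECT name, type_desc
    FROM sys.database_files
    WHERE type_desc = 'LOG';

    -- shrink the log file back down, here to 1024 MB; pick a target that fits your normal workload
    DBCC SHRINKFILE (MyDB_log, 1024);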

    Lowell



  • Okay, let me see what I can do here... Thanks everyone for the help...
