Disk getting full

  • My update query is updating approximately 50 million records, which is causing the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.

  • You could try updating in smaller batches and performing a log backup in between batches; a rough sketch of that approach is below.

    Or you could get more disk space.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
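    A rough sketch of the batch-plus-log-backup pattern, assuming a hypothetical table dbo.SomeTable, a filter that identifies rows still needing the change, and a backup path of your own (all of these names are placeholders, not from the original post); the log backup only applies as written if the database uses the FULL or BULK_LOGGED recovery model:

    DECLARE @BatchSize INT = 50000;  -- tune so each batch fits comfortably in the log

    WHILE 1 = 1
    BEGIN
        -- touch only rows that still need the change, one batch at a time
        UPDATE TOP (@BatchSize) dbo.SomeTable
        SET    SomeColumn = 'NewValue'
        WHERE  SomeColumn <> 'NewValue';

        IF @@ROWCOUNT = 0
            BREAK;  -- nothing left to update

        -- back up the log so the space used by the batch can be reused
        BACKUP LOG YourDatabase TO DISK = 'X:\Backups\YourDatabase_log.trn';
    END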

  • Zeal-DBA (4/14/2015)


    My update query is updating approximately 50 million records, which is causing the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.

    Further to Jason's suggestion, you could add another log file on another drive while doing the update, or use a minimally logged SELECT ... INTO query that folds the update into the column list to create a new table, replacing the old one; see the sketch below.

    😎
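    A minimal sketch of both ideas, with hypothetical database, table, and column names (YourDatabase, dbo.SomeTable, KeyColumn, SomeColumn, and OtherColumn are placeholders). Adding a second log file on another drive buys headroom for the duration of the update, while SELECT ... INTO builds a new table with the changed values already in place and can be minimally logged under the SIMPLE or BULK_LOGGED recovery model:

    -- temporary extra log file on a drive with free space
    ALTER DATABASE YourDatabase
    ADD LOG FILE (NAME = 'YourDatabase_log2',
                  FILENAME = 'E:\SQLLogs\YourDatabase_log2.ldf',
                  SIZE = 10GB);

    -- build the replacement table with the "update" applied in the column list
    SELECT  KeyColumn,
            CASE WHEN SomeColumn = 'OldValue' THEN 'NewValue' ELSE SomeColumn END AS SomeColumn,
            OtherColumn
    INTO    dbo.SomeTable_New
    FROM    dbo.SomeTable;

    -- recreate indexes/constraints on the new table, then swap the names
    BEGIN TRANSACTION;
        EXEC sp_rename 'dbo.SomeTable', 'SomeTable_Old';
        EXEC sp_rename 'dbo.SomeTable_New', 'SomeTable';
    COMMIT TRANSACTION;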

  • Use batches as noted by Jason. Not only will it prevent this problem (as long as your t-log backups are set up properly), but it should be faster too.

  • Zeal-DBA (4/14/2015)


    My update query is updating approximately 50 million records, which is causing the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.

    My standard process for this at work (and my usual advice on SQL Server Central). Something like this?

    --DELETING LARGE AMOUNTS OF DATA
    DECLARE @Done BIT;
    SET @Done = 0;

    WHILE @Done = 0
    BEGIN
        DELETE TOP (20000)              -- reduce the batch size if the log is still growing
        FROM SomeTable
        WHERE SomeColumn = SomeValue;

        IF @@ROWCOUNT = 0
            SET @Done = 1;

        CHECKPOINT;                     -- marks log space reusable in SIMPLE recovery
    END

    Of course, adjust this to your schema and change the DELETE statement to suit.

  • sql-lover (4/16/2015)


    Zeal-DBA (4/14/2015)


    My update query is updating approximately 50 million records, which is causing the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.

    My standard process for this at work (and my usual advice on SQL Server Central). Something like this?

    --DELETING LARGE AMOUNTS OF DATA
    DECLARE @Done BIT;
    SET @Done = 0;

    WHILE @Done = 0
    BEGIN
        DELETE TOP (20000)              -- reduce the batch size if the log is still growing
        FROM SomeTable
        WHERE SomeColumn = SomeValue;

        IF @@ROWCOUNT = 0
            SET @Done = 1;

        CHECKPOINT;                     -- marks log space reusable in SIMPLE recovery
    END

    Of course, adjust this to your schema and change the DELETE statement to suit.

    The last scan will take quite some time on an UPDATE if you check the rowcount for 0. If you're doing batched updates using the technique above (or deletes like the above), it's better to use "IF @@ROWCOUNT < 20000" to prevent that last long scan on the updates; see the sketch below.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
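    For what it's worth, a hedged sketch of the loop above adapted to a batched UPDATE with the @@ROWCOUNT check Jeff describes (table and column names are placeholders):

    DECLARE @Done BIT = 0;
    DECLARE @BatchSize INT = 20000;

    WHILE @Done = 0
    BEGIN
        UPDATE TOP (@BatchSize) SomeTable
        SET    SomeColumn = 'NewValue'
        WHERE  SomeColumn <> 'NewValue';  -- only rows not yet updated

        -- a partial (or empty) batch means the work is finished; stopping here
        -- avoids one final full scan that would find nothing left to update
        IF @@ROWCOUNT < @BatchSize
            SET @Done = 1;

        CHECKPOINT;  -- marks log space reusable in SIMPLE recovery
    END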

  • Jeff Moden (4/16/2015)


    sql-lover (4/16/2015)


    Zeal-DBA (4/14/2015)


    My update query is updating approximately 50 million records, which is causing the transaction log to grow and fill the disk space, and finally the transaction fails. Please suggest any workaround.

    My standard process for this at work (and my usual advice on SQL Server Central). Something like this?

    --DELETING LARGE AMOUNTS OF DATA
    DECLARE @Done BIT;
    SET @Done = 0;

    WHILE @Done = 0
    BEGIN
        DELETE TOP (20000)              -- reduce the batch size if the log is still growing
        FROM SomeTable
        WHERE SomeColumn = SomeValue;

        IF @@ROWCOUNT = 0
            SET @Done = 1;

        CHECKPOINT;                     -- marks log space reusable in SIMPLE recovery
    END

    Of course, adjust this to your schema and change the DELETE statement to suit.

    The last scan will take quite some time on an UPDATE if you check the rowcount for 0. If you're doing batched updates using the technique above (or deletes like the above), it's better to use "IF @@ROWCOUNT < 20000" to prevent that last long scan on the updates.

    Sorry, not following you. I've used this on tables with millions of records and never had any issue. It performs OK.

    Maybe you could rewrite the code, or elaborate a bit more?
