T-log too big...need batch inserts

  • I'm copying data from table A to table B. The problem is that I'm running out of transaction log space, so I'm thinking of COMMITting and CHECKPOINTing my INSERTs in smaller chunks, so the T-log can truncate after each batch.

    I'm doing this on a lot of tables, and some do not have any ID fields.

    How do I do this?

  • I am assuming that your current Recovery Model is Full. You may want to consider switching it to Bulk-Logged temporarily while you are moving your data over. If you are not familiar with this Recovery Model, read BOL for more info.

    John Rowan

    ======================================================
    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • During this load, I'm putting the database in SIMPLE and 'trunc. log on chkpt.' to true.

    I thought the solution was to keep a row count in a loop, but I don't have ID fields to track my progress. I'm not sure how to incrementally select batches from a large table.

    Here's the script I'm currently working out.

    SET ROWCOUNT 10000

    DECLARE @rc INT
    SET @rc = 10000

    WHILE @rc = 10000
    BEGIN
        BEGIN TRAN

        INSERT INTO Table_B
        SELECT * FROM Table_A

        SELECT @rc = @@ROWCOUNT

        COMMIT TRAN
        CHECKPOINT
    END

  • This script will work in terms of limiting the rows in each batch, but you need a way to determine whether the rows already exist in the target table. As written, this script will insert the same rows over and over again. Do these tables have any sort of unique key or index defined on them? You will need to do an outer join between the tables in your INSERT INTO...SELECT statement so that you only attempt to insert those rows that do not already exist in the target.
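    A minimal sketch of that outer-join approach, assuming the tables share a unique key column (here a hypothetical KeyCol -- substitute whatever key the tables actually have):

    -- Sketch only: KeyCol is a placeholder for the tables' real unique key.
    SET ROWCOUNT 10000

    DECLARE @rc INT
    SET @rc = 10000

    WHILE @rc = 10000
    BEGIN
        -- Insert only rows not yet present in the target
        INSERT INTO Table_B
        SELECT a.*
        FROM Table_A a
        LEFT OUTER JOIN Table_B b ON a.KeyCol = b.KeyCol
        WHERE b.KeyCol IS NULL

        SELECT @rc = @@ROWCOUNT

        CHECKPOINT   -- let the log truncate after each batch
    END

    SET ROWCOUNT 0   -- reset, or later statements stay capped at 10000

    Note the loop now terminates on its own: once every row from Table_A exists in Table_B, the join finds nothing to insert and @rc drops below 10000.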

    John Rowan

    ======================================================
    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • Putting the Recovery Model to SIMPLE sets the 'trunc. log on chkpt.' option to true automatically. Your problem is the log... your INSERT/SELECT is still logged despite the SIMPLE setting you have because of the BEGIN TRAN... why are you using an explicit transaction, anyway? I don't see any test-for-success code with a rollback...

    If you really want to limit logging and you really want to make this really fast, create and populate the table on the fly using SELECT/INTO, which is the other neat thing about the SIMPLE recovery model... it also sets the SELECT INTO/Bulk Copy mode on... you'll be amazed at how fast it copies... really amazed.

    I'm sure there are going to be a lot of folks who warn you about SELECT/INTO taking a couple of locks on system tables... it's worth it, though, when you have lots of millions of rows of data to move like this and you love the idea of transferring a million rows every 4 seconds.
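    For reference, the SELECT/INTO version is just this (a sketch -- SELECT/INTO creates the target table, so Table_B must not already exist, and you lose any existing indexes/constraints you'd otherwise have defined on it):

    -- SELECT/INTO creates and populates Table_B in one statement.
    -- Under SIMPLE (or Bulk-Logged) recovery this is minimally logged.
    SELECT *
    INTO Table_B      -- Table_B must not already exist
    FROM Table_A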

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
