Transaction logs getting full with large tables during query

  • Table 1 has 15,000 records and table 2 has 1.5 million records.

    My query works fine when I use a smaller table containing fewer records from table 2. When I try to use the real table 2, I run out of memory.

    I ran DBCC SHRINKFILE and DBCC OPENTRAN, and then:

    SELECT recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'myBigDB'

    Results:

    recovery_model_desc    log_reuse_wait_desc
    FULL                   NOTHING

    select tbl1.field1 + tbl1.field2 + ... + tbl1.field32, tbl2.field1 + ... + tbl2.field5
    from table1 tbl1
    inner join table2 tbl2 on tbl1.field1 = tbl2.field1
    order by tbl1.field1

    How do I run such a query from these two tables into another table without running out of memory?

  • I don't understand your question. Do you want to insert data from the 2 tables into a 3rd table, or are you just selecting from the 2 tables? When you say you are running out of memory, do you mean RAM or hard disk space? Shrinking the log file will not free up RAM, and in the long run it will slow down the process, since the log will have to grow back to the same size if you re-run the transaction.

    Check out the links in my signature line to see how to best post a question to get an accurate answer.

  • Yes, I'm selecting and concatenating all fields from both tables and inserting them into a third table with only one column - that's how the client wants it.

    It actually eats up both RAM and hard disk space.
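
    For reference, a minimal sketch of what that single-column target table and the unbatched load might look like - the table name, column name, and use of CONCAT (SQL Server 2012+) are assumptions, not details from the posts:

    -- Hypothetical single-column target table (name and type assumed)
    CREATE TABLE dbo.Table3
    (
        ConcatenatedRow nvarchar(max) NOT NULL
    );

    -- Unbatched load: one big INSERT ... SELECT that concatenates the fields
    -- from both tables. Run as a single transaction over 1.5 million joined
    -- rows, this is the statement that fills the log and exhausts memory.
    INSERT INTO dbo.Table3 (ConcatenatedRow)
    SELECT CONCAT(tbl1.field1, tbl1.field2, /* ... tbl1.field32, */
                  tbl2.field1 /* ... tbl2.field5 */)
    FROM table1 tbl1
    INNER JOIN table2 tbl2
        ON tbl1.field1 = tbl2.field1;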

  • In a situation like this I would batch the process. Do you have a unique ID you can use as part of the process? Something like PK_Table_1 and PK_Table_2 that you put in the new table so you can left join on it. Then between batches do a transaction log backup to free up space in the transaction log so it doesn't continue to autogrow.
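
    A rough sketch of that batching approach, assuming table1 has an integer primary key named PK_Table_1 and reusing the target table from the sketch above; the batch size and backup path are placeholders:

    -- Batched load: process one key range per iteration and back up the log
    -- between batches so the log space can be reused instead of autogrowing
    -- (the database is in FULL recovery, so only a log backup frees it).
    DECLARE @BatchSize int = 1000;
    DECLARE @LastKey   int = 0;
    DECLARE @MaxKey    int = (SELECT MAX(PK_Table_1) FROM table1);

    WHILE @LastKey < @MaxKey
    BEGIN
        INSERT INTO dbo.Table3 (ConcatenatedRow)
        SELECT CONCAT(tbl1.field1, tbl1.field2, /* ... tbl1.field32, */
                      tbl2.field1 /* ... tbl2.field5 */)
        FROM table1 tbl1
        INNER JOIN table2 tbl2
            ON tbl1.field1 = tbl2.field1
        WHERE tbl1.PK_Table_1 > @LastKey
          AND tbl1.PK_Table_1 <= @LastKey + @BatchSize;

        SET @LastKey = @LastKey + @BatchSize;

        -- Placeholder path; each batch's log backup is appended to this device
        BACKUP LOG myBigDB TO DISK = N'X:\Backups\myBigDB_log.trn';
    END;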
