Large Table Insert/Update Performance

  • Hi,

I'm truncating Table A and inserting 80 million records from Table C, and also updating Table A (non-index columns). Table A has a clustered index that was created along with the primary key. This process is taking more than 24 hours to complete. I looked at the waittype in the sysprocesses table - most of the time it is PAGEIOLATCH_EX, with the wait time varying from 0 to 1250 ms.

Server config: 4 GB RAM; 4 processors; RAID 5

Is there any way to improve the performance of this process? Would dropping the indexes help?

     

    The code would be like this:

BEGIN TRAN

TRUNCATE TABLE TableA

INSERT INTO TableA
SELECT * FROM TableB
WHERE <conditions>

UPDATE TableA
SET <non-index columns> = ...

COMMIT TRAN

     

    Any suggestions will be highly helpful.

     

    Thanks

    Karthik.

If I'm understanding what you're doing correctly, I'd try the insert with the source SELECT ordered by the column used in the clustered index. Try it both with the index in place and with it dropped, and see which works best.
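A sketch of that ordered insert, assuming (hypothetically) that the clustered primary key is on a column named ID:

```sql
-- ID is a placeholder for whatever column the clustered index is built on.
-- Ordering the source rows by the clustering key lets the engine append
-- pages in index order instead of splitting pages all over the table.
INSERT INTO TableA
SELECT *
FROM TableB
WHERE <conditions>
ORDER BY ID
```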

Are you allowed to change your database recovery model to BULK_LOGGED?
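That switch would look something like this (database name is hypothetical; BULK_LOGGED minimizes logging for bulk operations such as SELECT INTO and BULK INSERT):

```sql
-- MyDatabase is a placeholder name.
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED

-- ... run the load here ...

ALTER DATABASE MyDatabase SET RECOVERY FULL  -- restore the original model
```

Remember to take a log backup after switching back, since BULK_LOGGED affects point-in-time recovery for the period of the bulk operation.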

I've found that when moving large amounts of data, batching tends to do it quicker, and usually wrapping the batch in a stored procedure and calling it multiple times is quicker still. I only know this from moving millions of rows of data!
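A minimal sketch of that batching idea, assuming a hypothetical integer clustering key ID and a batch size of 100,000 rows; each iteration is its own implicit transaction, so the log never has to hold the whole 80 million rows at once:

```sql
-- All table/column names and the batch size are placeholders.
DECLARE @Lo INT, @Hi INT, @BatchSize INT
SELECT @Lo = MIN(ID), @Hi = MAX(ID) FROM TableB
SET @BatchSize = 100000

WHILE @Lo <= @Hi
BEGIN
    -- Move one key range per iteration; commit happens per statement.
    INSERT INTO TableA
    SELECT * FROM TableB
    WHERE ID >= @Lo AND ID < @Lo + @BatchSize

    SET @Lo = @Lo + @BatchSize
END
```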

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Hi,

I'm not sure what your exact requirements are, but there are some things you can try:

1. Check the WHERE conditions and the indexes on Table B (the source table).

2. Drop all indexes on the destination table and recreate them at the end of the process.

3. If you can, try an export-to-file, import-from-file approach.
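Point 2 might look like this, with hypothetical index names standing in for the real ones:

```sql
-- IX_TableA_SomeColumn is a placeholder for each nonclustered index.
DROP INDEX TableA.IX_TableA_SomeColumn

-- ... load the 80 million rows ...

CREATE INDEX IX_TableA_SomeColumn ON TableA (SomeColumn)
```

The clustered index is the exception: if you drop that too, recreate it before any nonclustered indexes, otherwise every nonclustered index gets rebuilt when the clustered index is added.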


    First rule of debugging:
    No Code, No Bugs

Several things you need to do to ensure you get peak performance for this type of operation. One is to add a RAID 1 array and place the log files on it. You may also want to think about replacing the RAID 5 with a RAID 10 for the data. If you are truncating the table and adding 80 million rows, you should think about just creating a new table: if you drop TableA, you can then use a SELECT INTO from TableC to create a new TableA. If you are in BULK_LOGGED or SIMPLE recovery model you will get a minimally logged load, which will dramatically speed up the loading. Then create the indexes, clustered index first.
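A sketch of that rebuild, assuming BULK_LOGGED or SIMPLE recovery and a hypothetical key column ID for the clustered primary key:

```sql
-- Dropping and recreating avoids logging the truncate-then-insert path;
-- SELECT INTO is minimally logged under BULK_LOGGED/SIMPLE recovery.
DROP TABLE TableA

SELECT *
INTO TableA
FROM TableC
WHERE <conditions>

-- Recreate the clustered index first, then any nonclustered indexes.
-- PK_TableA and ID are placeholder names.
ALTER TABLE TableA ADD CONSTRAINT PK_TableA PRIMARY KEY CLUSTERED (ID)
```

Note that DROP TABLE also loses permissions, constraints, and triggers on TableA, so script those out first if they exist.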

  • The code posted is wrapped in a transaction so I'm assuming it has to be done in one batch. If it doesn't, I'd agree splitting the process into several batches would certainly help on top of ordering the source select.

I agree. I didn't catch the BEGIN TRAN part. Why wrap it in a transaction at all, since you can easily redo the whole thing if there are any issues? Get rid of the BEGIN TRAN.

Yup, sorry, I missed the tran too... you can still do the batches in transactions. Big transactions are a pain - they always were in 6.x and they're not much better now. One trick I have learned: if your data or log file is growing during the operation, then extending the size to cater for the growth first will save you lots of time - ah, shades of 6.x <grin>
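Pre-growing the files might look like this; the database name, logical file names, and sizes are all placeholders you'd replace with your own (check sysfiles for the logical names):

```sql
-- Grow the data and log files once, up front, so the load doesn't stall
-- repeatedly on autogrow events mid-insert.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 20000MB)

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, SIZE = 5000MB)
```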

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

Using bcp out to a file and a BULK INSERT back into Table A is a good suggestion. Also good is the suggestion to put the transaction logs on a separate RAID 1 drive; this can be extended further by using additional RAID 1 drives for tempdb, and perhaps for one or more of these humongous tables. RAID 5 suffers from much poorer write performance than RAID 1.
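The bcp-out / BULK INSERT round trip might look like this (file path, database, and table names are hypothetical; native format keeps the round trip fast between SQL Server instances):

```sql
-- From a command prompt, export in native format with a trusted connection:
--   bcp MyDatabase.dbo.TableC out C:\load\tablec.dat -n -T
--
-- Then, after truncating TableA and dropping its nonclustered indexes:
BULK INSERT TableA
FROM 'C:\load\tablec.dat'
WITH (DATAFILETYPE = 'native', TABLOCK)
```

The TABLOCK hint is what allows the load to qualify for minimal logging under BULK_LOGGED or SIMPLE recovery.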

    Depending on what TableA is used for, it may also be beneficial to get rid of it... Could it be replaced by a view on Table B, perhaps joined to a new table C, consisting of the key plus the updated columns? Depending on the subsequent usage, this might eliminate a lot of redundancy. Also see:

    http://www.sqlsavior.com/8HoursTo2.html
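The view idea above might be sketched like this, assuming the updated columns were split out into a narrow table keyed the same way (all names here are hypothetical):

```sql
-- vw_TableA replaces the physical TableA; TableB stays the source of
-- truth and TableB_Updates holds only the key plus the updated columns.
CREATE VIEW vw_TableA
AS
SELECT b.*, u.UpdatedCol1, u.UpdatedCol2
FROM TableB b
JOIN TableB_Updates u ON u.ID = b.ID
```

This trades the 80-million-row copy for a join at query time, so whether it wins depends entirely on how TableA is subsequently used.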
