inserts are slow, please help

  • Hello,

    I want to insert a large number of records (e.g. 500,000 rows).

    I want to tune my query.

    I am joining table T1 with itself and inserting the results back into T1, which nearly doubles the row count.

    The field values in the inserted records change based on certain conditions.

    I am using transactions, and I have tried doing the inserts batch-wise. That improved performance, but it still takes 3-4 hours (down from more than 6), which is unacceptable.

    I checked the indexes as well.

    Can you please give me some ideas?

    Thanks in advance!
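    For reference, a batched self-join insert along these lines might be sketched as follows. This is only an assumed shape of the query: T1, Id, Col1, Col2, the CASE condition, and the join predicate are all placeholder names, not the poster's real schema.

    ```sql
    -- Hypothetical sketch: insert the self-join results in batches of
    -- 50,000 rows, keyed on an ascending integer Id column.
    DECLARE @BatchSize int = 50000;
    DECLARE @MinId int, @MaxId int;

    -- Capture the key range up front so rows inserted by the loop
    -- itself are never reprocessed.
    SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM T1;

    WHILE @MinId <= @MaxId
    BEGIN
        BEGIN TRANSACTION;

        INSERT INTO T1 (Col1, Col2)
        SELECT a.Col1,
               -- Placeholder for "field values change based on some condition"
               CASE WHEN b.Col2 > 100 THEN b.Col2 * 2 ELSE b.Col2 END
        FROM   T1 AS a
        JOIN   T1 AS b
          ON   a.Col1 = b.Col1
        WHERE  a.Id BETWEEN @MinId AND @MinId + @BatchSize - 1;

        COMMIT TRANSACTION;

        SET @MinId = @MinId + @BatchSize;
    END
    ```

    Committing per batch keeps each transaction (and the log growth it causes) small, which is usually where most of the time goes in a single monolithic insert.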

  • 1. Is this inserting T2 into T1 a recurring event or a one-off?

    2. Is T2 structure a mirror of T1 structure?

    | If in Doubt...don't do it!! |

  • Can you post the table structure, the insert query, and the indexes you have?

    With these details we will be in a better position to comment.

  • You can try dropping indexes and adding them back after the insert. That sometimes helps.

    If you are adding a lot of records, it's possible that you're overloading the I/O system, which means that you aren't going to get more speed without fixing that.

    If you post more details and an execution plan, we might be able to help. You might also check the disk queue lengths and reads/writes per second while the insert is running.
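    The drop-and-recreate approach suggested above might look like the sketch below. The index and table names are placeholders, since the real schema hasn't been posted.

    ```sql
    -- Hypothetical example: drop a nonclustered index before the bulk
    -- insert so every inserted row doesn't have to maintain it, then
    -- rebuild it once at the end. IX_T1_Col1 and T1 are placeholder names.
    DROP INDEX IX_T1_Col1 ON T1;

    -- ... run the large INSERT here ...

    CREATE NONCLUSTERED INDEX IX_T1_Col1 ON T1 (Col1);
    ```

    One rebuild over the finished table is often cheaper than hundreds of thousands of incremental index maintenance operations, though the clustered index (if any) normally has to stay in place.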

  • thlubbe - 1) The insert takes place once a month, but as I mentioned, I get a new set of 500,000 records and add about 500,000 more, which increases the table size tremendously.

    2) I did not understand the question. I am inserting the data into the same table, so T2 = T1.

    arr.nagaraj - The table has approx. 120 columns, and I don't have access to the system now. I have discussed the indexes with my seniors and the DBA here; everything seems to be fine.

    Steve Jones - I am not a very experienced developer.

    I have to insert records on the basis of a join and some other conditions (a WHERE clause). The set I am joining is 500,000 rows, but the table has millions of rows. Will dropping indexes really help?

    Note: I am home now and don't have access to the system.

    Thanks for your help; I'm waiting on some more replies!

  • Hi

    Some advice that has helped me insert huge numbers of records in minimal time:

    - Change the database recovery model to BULK_LOGGED so that the changes are minimally logged.

    - Drop the indexes on the target table.

    - If you are sure that the data you are inserting is clean and complies with all constraints, you can even drop the constraints on the target table.

    - Then start a transaction and insert with the TABLOCK hint if possible. Use BULK INSERT whenever possible (though it seems you can't here).

    - Be sure to recreate the constraints and indexes at the end, and change the recovery model back to FULL.
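    The steps above can be sketched roughly as follows. The database name (MyDb), table, index, and constraint names are all assumptions for illustration, not taken from the thread.

    ```sql
    -- Hypothetical sketch of the minimal-logging recipe above.
    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

    DROP INDEX IX_T1_Col1 ON T1;
    ALTER TABLE T1 DROP CONSTRAINT CK_T1_Col2;  -- only if the data is known clean

    BEGIN TRANSACTION;

    -- TABLOCK takes a table-level lock, one of the prerequisites for
    -- minimally logged inserts into the target table.
    INSERT INTO T1 WITH (TABLOCK) (Col1, Col2)
    SELECT a.Col1, a.Col2
    FROM   T1 AS a
    JOIN   T1 AS b ON a.Col1 = b.Col1;

    COMMIT TRANSACTION;

    CREATE NONCLUSTERED INDEX IX_T1_Col1 ON T1 (Col1);
    ALTER TABLE T1 ADD CONSTRAINT CK_T1_Col2 CHECK (Col2 >= 0);

    ALTER DATABASE MyDb SET RECOVERY FULL;
    ```

    Note that after switching back to FULL you should take a full or log backup, since the bulk-logged period breaks point-in-time recovery for the affected interval.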

  • It might be worth reading this article on SSC:

    http://www.sqlservercentral.com/articles/Large+Data+Sets/68930/


  • That's a broken link.

  • It would help if you showed us what you are doing. Provide the DDL for the table(s) involved (CREATE TABLE statements), some sample data that demonstrates what you are trying to accomplish (as INSERT statements that can be cut, pasted, and run to load the tables), the expected results based on that sample data, and the code you have written for this task.

    Also, the Actual Execution Plan, saved and uploaded as a .sqlplan file, would help.

  • Thank you all, for your help.

    I modified the queries and added one index (a composite one), which has increased the performance unbelievably.
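    The poster doesn't say which columns went into the composite index, but the general shape of such an index, keyed on the join and filter columns, would be something like the following (all names hypothetical):

    ```sql
    -- Hypothetical composite index: key columns cover the join predicate
    -- and the WHERE-clause filter; INCLUDE carries the selected column
    -- so the query can be satisfied from the index alone.
    CREATE NONCLUSTERED INDEX IX_T1_Join
        ON T1 (Col1, SomeFlag)
        INCLUDE (Col2);
    ```

    A covering index like this lets the self-join seek instead of scanning millions of rows on each side.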

  • Nabha (1/5/2010)

    A broken link

    Hmmm....

    Try now:

    http://www.sqlservercentral.com/articles/Large+Data+Sets/68930/

