Bulk insert is so slow! See my code

  • Hi All

I was wondering why my bulk insert is so slow. Can someone please look at the code below:

    BULK INSERT  TEST..[TEST] FROM 'path\fILENAME' WITH ( FIELDTERMINATOR = '/', DATAFILETYPE= 'TEST', ROWTERMINATOR = '\n', KEEPIDENTITY TABLOCK)

    Thanks


    Kindest Regards,

    John Burchel (Trainee Developer)

  • Hello John,

The statement you posted is syntactically wrong. It should be:

    BULK Insert <database name> . <owner> . <table name> from <data file>

with (FIELDTERMINATOR = '/', DATAFILETYPE = {'char' | 'native' | 'widechar' | 'widenative'}, ROWTERMINATOR = '\n', KEEPIDENTITY, TABLOCK)

Check BOL (Books Online) once again.

    Thanks and have a great day!!!


    Lucky

The statement I gave might be wrong, as I intentionally did that because I was in a hurry. But I was wondering if there is any hint or option that can be added to it?


    Kindest Regards,

    John Burchel (Trainee Developer)
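For reference, a BULK INSERT using the performance-oriented options documented in BOL might look like the sketch below. The database, table, file path, and batch size are placeholders for illustration, not taken from the original post:

```sql
-- Sketch only: names and path below are hypothetical.
BULK INSERT TEST.dbo.[TEST]
FROM 'C:\data\test.txt'           -- hypothetical file path
WITH (
    FIELDTERMINATOR = '/',
    ROWTERMINATOR   = '\n',
    DATAFILETYPE    = 'char',
    KEEPIDENTITY,                 -- keep identity values from the data file
    TABLOCK,                      -- table-level lock; a prerequisite for minimal logging
    BATCHSIZE = 50000             -- commit every 50,000 rows instead of one huge transaction
);
```

TABLOCK is usually the single biggest win, and BATCHSIZE keeps any one transaction (and the log) from growing without bound on very large files.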

Did you try running a DTS package for the same setup? In my experience, DTS usually runs a bit faster than BULK INSERT, with the same end result.

    DHeath

Standard advice: check indexes and triggers.

    Large clustered indexes are to be avoided. (I just greatly increased the speed of inserts on my database by reducing my clustered index to one key field...)
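One way to act on that advice is to disable nonclustered indexes before the load and rebuild them afterwards. The index name below is hypothetical; note that you should not disable the clustered index this way, since disabling it makes the whole table inaccessible:

```sql
-- IX_TEST_Col is a hypothetical nonclustered index name, for illustration.
ALTER INDEX IX_TEST_Col ON dbo.[TEST] DISABLE;

-- ... run the BULK INSERT here ...

ALTER INDEX IX_TEST_Col ON dbo.[TEST] REBUILD;
```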

There was a clustered primary key index on the table. Is that a good or a bad thing?

    Performance booster or killer?


    Kindest Regards,

    John Burchel (Trainee Developer)

  • John,

    ANY index on the target table is a performance killer. If an index is present, BCP is forced to log every row insert.

Without any indexes, and if the database is in the "bulk-logged" or "simple" recovery model, BCP will use minimal logging: just enough to recover the database in the event of a disaster, but nothing compared to full logging, and hence a much lower overhead.
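If the database normally runs in full recovery, one common pattern is to switch to bulk-logged just for the load. This is a sketch (the database name is the one from the original post, everything else is illustrative), and it assumes you can take a log backup afterwards:

```sql
-- Switch to bulk-logged so the load can be minimally logged.
ALTER DATABASE TEST SET RECOVERY BULK_LOGGED;

-- ... run the BULK INSERT with TABLOCK into a table without indexes ...

-- Switch back, then take a log backup: a log backup spanning a
-- bulk-logged operation contains the bulk-loaded extents and cannot
-- be used for point-in-time restore within that interval.
ALTER DATABASE TEST SET RECOVERY FULL;
```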

