July 5, 2006 at 8:28 am
July 5, 2006 at 9:54 am
Hello John,
The statement you posted is syntactically wrong. It should be:
BULK INSERT <database name>.<owner>.<table name> FROM '<data file>'
WITH (FIELDTERMINATOR = '/', DATAFILETYPE = {'char' | 'native' | 'widechar' | 'widenative'}, ROWTERMINATOR = '\n', KEEPIDENTITY, TABLOCK)
Check BOL (Books Online) once again.
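For illustration, here is a minimal runnable sketch of that syntax; the table name dbo.SalesImport, the file path, and the delimiter values are assumptions, not taken from your post:

BULK INSERT dbo.SalesImport                -- hypothetical target table
FROM 'C:\data\sales.txt'                   -- hypothetical data file path
WITH (
    DATAFILETYPE    = 'char',              -- plain character data
    FIELDTERMINATOR = '|',                 -- assumed column delimiter
    ROWTERMINATOR   = '\n',                -- one row per line
    KEEPIDENTITY,                          -- keep identity values from the file
    TABLOCK                                -- table lock; also needed for minimal logging
);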
Thanks and have a great day!!!
Lucky
July 5, 2006 at 10:16 am
July 5, 2006 at 3:11 pm
Did you try running a DTS package for the same setup? In my experience, DTS usually runs a bit faster than BULK INSERT, with the same end result.
DHeath
July 6, 2006 at 4:04 pm
Standard advice: check indexes and triggers.
Large clustered indexes are to be avoided. (I just greatly increased the speed of inserts on my database by reducing my clustered index to one key field...)
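As a rough sketch of that advice on SQL Server 2005 or later (the index, trigger, table, and file names below are made up, not from this thread), you can disable nonclustered indexes and triggers before the load and rebuild them afterwards:

-- Disable secondary indexes and triggers before the bulk load (hypothetical names)
ALTER INDEX IX_SalesImport_CustomerID ON dbo.SalesImport DISABLE;
ALTER TABLE dbo.SalesImport DISABLE TRIGGER ALL;

BULK INSERT dbo.SalesImport
FROM 'C:\data\sales.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', TABLOCK);

-- Rebuild the index and re-enable triggers after the load
ALTER INDEX IX_SalesImport_CustomerID ON dbo.SalesImport REBUILD;
ALTER TABLE dbo.SalesImport ENABLE TRIGGER ALL;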
July 7, 2006 at 4:15 am
July 7, 2006 at 5:27 am
John,
ANY index on the target table is a performance killer. If an index is present, BCP is forced to log every row insert.
Without any indexes, and if the database is in the BULK_LOGGED or SIMPLE recovery model, BCP will use minimal logging - just enough to recover the database in the event of a disaster, but nothing compared to full logging, and hence a much lower overhead.
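A minimal sketch of that approach, assuming a hypothetical database SalesDB that normally runs in FULL recovery and a hypothetical target table and file:

-- Switch to bulk-logged recovery so the load is minimally logged
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.SalesImport
FROM 'C:\data\sales.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', TABLOCK);

-- Switch back to full recovery and take a log backup to restart the log chain
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP LOG SalesDB TO DISK = 'C:\backups\SalesDB_log.bak';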