May 30, 2014 at 1:49 am
There are some flat files we are trying to give to another organisation.
Due to the nature of the data (lots of free-text fields, some funny characters, and many, many carriage returns), ^C^ is used as the column terminator and ^|^ as the row terminator.
This mostly works fine, but I have come across one file where the row terminator is split across two lines.
When the bulk insert process runs, the embedded new line is read as just another character, so the row terminator isn't recognised.
I'm wondering if there is any way I can get BULK INSERT to ignore new line characters in the source?
SQL SERVER 2008 R2
MICROSOFT SERVER 2008
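For reference, the load looks something like this (a minimal sketch; the table name and file path are placeholders, not from the original post):

```sql
-- Illustrative only: dbo.StagingTable and the path are assumptions.
-- ^C^ separates columns and ^|^ terminates rows, so carriage returns
-- embedded in the free-text fields are tolerated -- \n is not the
-- row terminator here.
BULK INSERT dbo.StagingTable
FROM 'C:\exports\extract.txt'
WITH (
    FIELDTERMINATOR = '^C^',
    ROWTERMINATOR   = '^|^',
    CODEPAGE        = 'RAW'
);
```

The problem file is the one where a line break lands *inside* the three-character `^|^` sequence, so BULK INSERT never matches the terminator.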
June 1, 2014 at 6:22 pm
UPDATE
The code used for this output is an Oracle Spool Command (don't ask, it's a long story).
The linesize property in the Spool command was set to 2000, whereas it should have been 5000.
This should mitigate the line delimiter being split over two lines, but it would be nice to have a solution to the original problem.
Thanks
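For anyone hitting the same thing, the relevant SQL*Plus settings look roughly like this (a sketch; the query and path are illustrative, not the original script):

```sql
-- LINESIZE must exceed the longest output line, otherwise SQL*Plus
-- wraps the line and can split the ^|^ terminator across two lines.
SET LINESIZE 5000
SET TRIMSPOOL ON      -- strip trailing blanks before the line break
SET PAGESIZE 0        -- no headers or page breaks in the extract
SPOOL C:\exports\extract.txt
SELECT col1 || '^C^' || col2 || '^|^' FROM some_table;
SPOOL OFF
```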
June 2, 2014 at 6:06 am
pogla.the.grate (6/1/2014)
UPDATE
The code used for this output is an Oracle Spool Command (don't ask, it's a long story).
The linesize property in the Spool command was set to 2000, whereas it should have been 5000.
This should mitigate the line delimiter being split over two lines, but it would be nice to have a solution to the original problem.
Thanks
Actually, that would be the solution for the original problem. Regardless of system, most import systems simply won't tolerate stray characters within any multi-character delimiter.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 2, 2014 at 6:12 am
The only way I can think of to process data with a split row delimiter is to BULK load the whole file as a CLOB using OPENROWSET and then parse the data.
Without knowing the volume, this could be costly in performance terms.
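A minimal sketch of that approach, assuming the file fits comfortably in memory and the path is a placeholder:

```sql
-- Load the whole file as one value, repair terminators that were
-- split across a line break, then split on the repaired terminator.
DECLARE @doc NVARCHAR(MAX);

SELECT @doc = BulkColumn
FROM OPENROWSET(BULK 'C:\exports\extract.txt', SINGLE_CLOB) AS f;

-- Repair both ways a CRLF could fall inside the ^|^ terminator.
SET @doc = REPLACE(@doc, '^|' + CHAR(13) + CHAR(10) + '^', '^|^');
SET @doc = REPLACE(@doc, '^' + CHAR(13) + CHAR(10) + '|^', '^|^');

-- From here, split @doc on '^|^' into rows and on '^C^' into columns
-- (e.g. with a string-splitter function) before inserting.
```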
Far away is close at hand in the images of elsewhere.
Anon.