May 29, 2008 at 6:54 am
BULK INSERT seems to ignore every second line break. Consequently, it only imports 3 records (out of 8); the other records are squashed into one column in the target database.
Any ideas? I don't know what to do. I have tried different row terminators (char(10), char(13)) with no success, and I have also tried omitting the ROWTERMINATOR and the FIRSTROW attribute. Every time it ignores every second line break.
Any help will be gratefully appreciated! Thanks in advance.
I have attached the file. In Notepad it appears as a neat file with 8 records (plus one header row).
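For reference, here is a minimal sketch of the kind of statement I'm running (the file path is a placeholder, and the commented values are the variations I have tried):

BULK INSERT dbo.Gen_ExcelImports
FROM 'C:\import\excel_export.csv'   -- placeholder path
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',          -- also tried char(10), char(13), and omitting it entirely
    FIRSTROW        = 2              -- also tried omitting this
);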
(the table script below)
CREATE TABLE [dbo].[Gen_ExcelImports](
[f1] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f2] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f3] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f4] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f5] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f6] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f7] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f8] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f9] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f10] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f11] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f12] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f13] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f14] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f15] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f16] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f17] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f18] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f19] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f20] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f21] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f22] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f23] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f24] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f25] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f26] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f27] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f28] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f29] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f30] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f31] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f32] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f33] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f34] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f35] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f36] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f37] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f38] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f39] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f40] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f41] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f42] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f43] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f44] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f45] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f46] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f47] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f48] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f49] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
[f50] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL
) ON [PRIMARY]
May 29, 2008 at 9:28 am
SSIS can preview the file just fine, and I opened it up in Notepad++ and didn't see any issues with visible or special characters.
May 29, 2008 at 9:52 am
Skipping input rows is a classic symptom of a mismatch between the number of columns in the table and the number of fields in the data file.
Open the .csv file you provided in Excel and you'll notice that PROMOTION_CODE is field #47, but your table has 50 columns. The missing 3 fields get populated with the first 3 fields of the next row, which makes it look like that row is being skipped.
You'll either need to provide the missing fields in the data file or use a format file.
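You can see the misalignment in the staging table itself. As a quick check (column names taken from the table script above), run something like:

SELECT f1, f46, f47, f48, f49, f50
FROM dbo.Gen_ExcelImports;

If the file really has only 47 fields per row, f48 to f50 of each imported row will hold the first 3 fields of the next row instead of NULL.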
May 30, 2008 at 1:08 am
Hi, thanks Old Hand, it appears you are right. It is because of the mismatch in columns.
The mismatch was intentional. I was hoping to create a generic staging table for different CSV formats, from which the data would be pulled into the other database tables via a mapping table with entries specific to each file format. Hence the extra columns.
I am now getting around it by importing the whole file into a single table column, and then using T-SQL to process it further.
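Roughly, the workaround looks like this (the table name and file path are placeholders, and it assumes the pipe character never occurs in the file, so each whole line lands in the single column):

CREATE TABLE dbo.Gen_RawImport
(
    [line] nvarchar(max) COLLATE Latin1_General_CI_AS NULL
);

BULK INSERT dbo.Gen_RawImport
FROM 'C:\import\excel_export.csv'   -- placeholder path
WITH
(
    FIELDTERMINATOR = '|',           -- assumes '|' never appears in the data
    ROWTERMINATOR   = '\n'
);

From there each line is split with CHARINDEX/SUBSTRING and mapped into the real tables.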
Thanks anyway.
December 17, 2008 at 1:47 pm
I have a similar problem. How do you work around it with a format file?
December 19, 2008 at 7:49 am
Using a format file for missing fields is explained very well in this article: http://msdn.microsoft.com/en-us/library/ms179250.aspx
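In short, the format file describes only the fields that are actually in the data file and maps each of them to a table column; table columns that aren't mentioned are simply left NULL. A cut-down sketch for a 3-field file loading the first 3 columns of the 50-column table above (the field lengths, path, and terminators are assumptions; a real format file for the 47-field CSV would list all 47 fields):

9.0
3
1   SQLCHAR   0   512   ","      1   f1   Latin1_General_CI_AS
2   SQLCHAR   0   512   ","      2   f2   Latin1_General_CI_AS
3   SQLCHAR   0   512   "\r\n"   3   f3   Latin1_General_CI_AS

and then point BULK INSERT at it:

BULK INSERT dbo.Gen_ExcelImports
FROM 'C:\import\excel_export.csv'              -- placeholder path
WITH
(
    FORMATFILE = 'C:\import\excel_export.fmt', -- the format file above
    FIRSTROW   = 2                             -- skip the header row
);

The version number on the first line of the format file (9.0 here, for SQL Server 2005) should match the server version.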