December 28, 2016 at 10:10 am
Hello all,
I am using a text file as a source, and the destination is a SQL Server 2012 table. The input file has only two columns, and the data is laid out properly for roughly the first million rows. From that point on, the input data is totally messed up. There are two million rows in total in the input file. I am using an SSIS package to import this data into the table, and at around the millionth line I get the error:
[Flat File Source [15]] Error: Data conversion failed. The data conversion for column returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
How can this be fixed? Here is how the data in the file appears:
number, date
1112223333, 2016-12-28 00:00:10.000
1154442222 , 2016-12-28 00:00:10.111
This is how it appears, and it is good until around the millionth row. Then:
3337778888:09888:1234:00:20151123,2015-11-23,00:00:00.101,
660606975628/42998580:
5781270:046270
815103/475065120003515510986270
815103/475065120003812763374420606975628/42998580:
5781270:0:00206282765135242748236803641610010/756/727142586270506518::0041056
55756493455595
1/47,46407
5575649507503647633714071/8/775033619004109/429985803:64,908968715703064435155109091:055/
550
9/19325746420224,9089687157030644351562
50065976358/242:0269:/0,10
1222/:49076358/242:0269:/0,10
1559
76337148 30
1661507::9348004961208980203/835/10748535474003561974580376337021172760:8362
5575649:,799
81
011506861695258
5575649:,2159,89561,76/
53428371493704,2159,0850357567/80/2540007709123677433/536197513797:742554897551191508 30
42625551467065971707351201/952555975020559
00079::9:1489,6212806:00:0/822
500/1489,68887551896493450
50/0:555064817745321
040710550459775056257006607604376191020907700
6649544050446400846/242:0269:/0,10
15599102652477054715603/3064540780:4973/58:114,96756210209002441:10463580376337021172760:8362
55756276337021172760:8362
It continues like this all the way to the end, without any specific pattern or format. So I am able to insert the first million rows without an issue, but then the package fails with the error shown above.
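Before changing the package, it can help to confirm exactly where the clean data ends. A minimal sketch (the regex for the expected `number, date` layout is inferred from the sample rows above, and the file path is a hypothetical placeholder):

```python
import re

# Expected layout per the sample rows: digits, comma, "YYYY-MM-DD HH:MM:SS.mmm"
ROW_PATTERN = re.compile(r"^\d+\s*,\s*\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}\s*$")

def first_bad_line(lines, skip_header=True):
    """Return the 1-based number of the first line that does not match
    the expected two-column layout, or None if every line matches."""
    for lineno, line in enumerate(lines, start=1):
        if skip_header and lineno == 1:
            continue
        if not ROW_PATTERN.match(line.rstrip("\n")):
            return lineno
    return None

# Hypothetical usage against the real file:
# with open("input.txt") as f:
#     print(first_bad_line(f))
```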
Your inputs are highly appreciated.
Thanks in advance
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
December 28, 2016 at 10:40 am
There's not much that you can do to fix bad data, especially if there is no apparent pattern to it. You should go to the people who gave you the file and ask them to fix it.
In the meantime, you can update the error handling on your data flow to either ignore truncation errors or redirect the failing rows instead of failing the package. https://msdn.microsoft.com/en-us/library/ms140083.aspx
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
December 28, 2016 at 1:18 pm
What do you mean by 'fixed', exactly?
Do you just want to ignore the bad rows, or something more?
An alternative approach to Drew's is to read the file in as a single column and then parse out the columns later in your data flow.
Use either
a) a derived column + conditional split or
b) a script component
to do the parsing, followed by redirection of bad data to an unused output.
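The parse-then-redirect step above can be sketched outside SSIS as well. A minimal Python sketch of the same idea (the regex for a good row is inferred from the sample in the question; the function name is a hypothetical stand-in for the derived column or script component logic):

```python
import re

# A "good" row per the question's sample: digits, comma, "YYYY-MM-DD HH:MM:SS.mmm"
ROW_PATTERN = re.compile(r"^(\d+)\s*,\s*(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s*$")

def parse_row(raw):
    """Mimic the derived-column parse: return a (number, date) tuple for a
    well-formed row, or None so the caller can redirect the row as bad."""
    m = ROW_PATTERN.match(raw.strip())
    return (m.group(1), m.group(2)) if m else None
```

In the data flow, a conditional split would then route rows where the parse succeeded to the destination and send the `None` (unparseable) rows to an unused or error output.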
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel