January 23, 2008 at 5:23 am
Hello Folks,
I'm trying to use Integration Services to import a file on a daily basis.
My flat file has two different types of rows that come from a transaction system. It looks like this:
132,1/1/2008,00,123654,text,1,123.00
132,1/1/2008,00,123652,text,1,23.00
132,1/1/2008,00,123655,text,1,3.00
123,1/1/1/2008,01,149.00
1125,1/1/2008,00,123654,text,1,123.00
1125,1/1/2008,00,123652,text,1,23.00
1125,1/1/2008,00,123655,text,1,3.00
1125,1/1/1/2008,01,149.00;Cash;EUR;01;12
After the date you can see a two-digit number, either 00 or 01. The rows also have different lengths.
Whenever that column contains 00, the line should be written to one text file; if it contains 01, it should go to another file.
How can I solve this in a good way?
One of the problems I have is that when I try to import the rows, the flat file connection reports an error saying I have a partial row in the file, which is true since the rows with 01 in that column have more fields than the others.
Thanks for your help.
holtis
January 23, 2008 at 6:38 am
I hope you have typed your sample data incorrectly, because you have 1/1/1/2008 for what looks like a date field, and in your last line you have semi-colon delimiters for the last few fields (and commas everywhere else).
Ignoring that problem, SSIS does not like data sources that change types and column numbers on a row-by-row basis. The most elegant solution may be to create a custom data source component that reads the file and produces two data streams - one per line format.
If you do not want to get into creating a custom component, you can do the same with a script component. You can parse the data file row-by-row, determine the record type, and then send each row to the appropriate output. This is probably not going to be very fast, but with some text parsing routines and a bit of creativity, it can be done somewhat efficiently and it would not take a lot of code.
Last, if you do not want to write any code and want to use basic SSIS components, you can make the file source a Ragged-Right file. If your smallest number of fields in a row is 7, make the last column a ragged-right so your longer lines will end up with all of the remaining fields in the last field separated by your delimiter. Then, you can use a conditional split to send any row with a comma in the last field to a derived column component to parse that field into the appropriate columns. You do not get much of a fast-parse option with this, but only the longer records will go through this derived column component so it may not be that much of a performance issue.
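To illustrate the script-component idea above outside of SSIS, here is a minimal Python sketch of the row-routing logic. The file names and the assumption that the 00/01 type code is always the third comma-separated field are taken from the sample rows, not from any real system:

```python
def split_rows(in_path, detail_path, summary_path):
    """Route each line of a mixed-format flat file by its type code.

    Assumes the type code ('00' or '01') is the third comma-separated
    field, as in the sample data. Paths are hypothetical.
    """
    with open(in_path) as src, \
         open(detail_path, "w") as detail, \
         open(summary_path, "w") as summary:
        for line in src:
            fields = line.rstrip("\n").split(",")
            if len(fields) < 3:
                continue  # skip blank or malformed lines
            type_code = fields[2]
            if type_code == "00":
                detail.write(line)   # transaction-detail rows
            elif type_code == "01":
                summary.write(line)  # summary/footer rows
```

In an actual SSIS Script Component you would do the same test on the single input column and send the row to one of two asynchronous outputs instead of writing files directly.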
January 23, 2008 at 6:45 am
Holtis,
That's a good question... what is the 1/1/1/2008 notation? Is it real or just a cut-and-paste error?
--Jeff Moden
Change is inevitable... Change for the better is not.
January 23, 2008 at 7:09 am
Set up a Flat File source to read into a single column. Connect that to a Script Component which will split the single column into multiple columns. Connect that to a Conditional Split transformation, using the '00'/'01' column as the condition. After that, add two data flow destinations and connect the Split transformation to each one.
See this article for detailed steps: http://www.sql-server-performance.com/article_print.aspx?id=1056&type=art
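As a rough, non-SSIS sketch of that flow (single-column source, script split, conditional split, two destinations), the two middle steps could look like this in Python. The column names and the position of the type code are assumptions based on the sample data:

```python
def parse_line(raw):
    """Script-component step: split one raw line into named fields.

    Field names ('store', 'date', 'type_code') are hypothetical;
    the type code is assumed to be the third field, per the sample.
    """
    fields = raw.rstrip("\n").split(",")
    return {"store": fields[0],
            "date": fields[1],
            "type_code": fields[2],
            "rest": fields[3:]}

def conditional_split(rows):
    """Conditional-split step: return (detail_rows, summary_rows)."""
    detail, summary = [], []
    for row in rows:
        (detail if row["type_code"] == "00" else summary).append(row)
    return detail, summary
```

Each returned list would then feed its own flat-file destination.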
January 23, 2008 at 7:28 am
Nice reference link, Todd... thanks.
--Jeff Moden
January 23, 2008 at 7:34 am
It's a "write and paste" error...
thanks
January 24, 2008 at 10:26 am
Thanks, that solved it.