September 17, 2008 at 9:07 am
I have a data flow task which extracts data from a flat file source (.csv file), does some transformation (derived column), and loads the data into my OLE DB destination (SQL table).
As of now, it works fine, but we do expect some bad data in our source files, so I want to separate the good data and the bad data into two separate SQL tables.
How can I modify my package, and what should the structure of my bad data table look like?
Thanks
September 17, 2008 at 9:24 am
How will you identify the bad data? Will it be based on an error when inserting into the destination?
If you will be identifying the bad data based on an error, you can just use the Error Output to push those rows into another table. You would need to determine the structure of that table.
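For example (just a sketch; CustomerID, OrderDate, and Amount are placeholder names for whatever is actually in your source file), the bad data table usually mirrors the source columns, widened to varchar so rows that failed a type conversion still fit, plus the two integer columns SSIS adds to every error output:

CREATE TABLE dbo.OrdersBadData
(
    -- source columns kept as varchar so any raw value will fit
    CustomerID  varchar(50) NULL,
    OrderDate   varchar(50) NULL,
    Amount      varchar(50) NULL,
    -- columns SSIS adds to the error output
    ErrorCode   int NULL,  -- SSIS error code for the failing row
    ErrorColumn int NULL   -- lineage ID of the column that failed
);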
Here is an article I wrote about handling data errors in a data flow: http://www.sqlservercentral.com/articles/Integration+Services/62662/
Make sure you check out the discussion, as there is a custom component attached to it that does the error formatting described in the article.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
September 17, 2008 at 9:25 am
Hi,
I am also not an expert in this, but I guess you could use a Fuzzy Lookup transformation first to determine which data is bad and which is good, and then use a Conditional Split transformation, which can send rows to two different outputs.
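Just to sketch the idea (all of the table and column names here are made up, and your rules for bad data will differ): if you loaded everything into a staging table first, the same split a Conditional Split would do could be written in T-SQL like this, with the WHERE clauses playing the role of the conditions on the two outputs:

-- Good rows: CustomerID present and Amount convertible to a number
INSERT INTO dbo.OrdersGood (CustomerID, OrderDate, Amount)
SELECT CustomerID, OrderDate, CAST(Amount AS decimal(10, 2))
FROM dbo.OrdersStaging
WHERE CustomerID IS NOT NULL
  AND ISNUMERIC(Amount) = 1;

-- Bad rows: everything failing either check
INSERT INTO dbo.OrdersBad (CustomerID, OrderDate, Amount)
SELECT CustomerID, OrderDate, Amount
FROM dbo.OrdersStaging
WHERE CustomerID IS NULL
   OR ISNUMERIC(Amount) <> 1;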
But again, I am not sure exactly what you mean by good data and bad data. Can you post a sample here, so that one of us can figure out a better way to handle your particular situation?
Thank you.
notes4we