June 13, 2017 at 8:29 am
I am comparing two flat files and capturing the change data.
FileA: id,name,loca,city
FileB: id,name,loca,city
The key columns I am comparing on are id, name, and loca. The problem is that some data comes with duplicate keys while the rest of the fields are different.
Example:
FileA:
1,rav,abc,richmond
1,rav,abc,rnyc
1,rav,abc,nj

FileB:
1,rav,abc,richmond
Now in a Conditional Split I put a change condition along the lines of: key columns match (id == id && name == name && loca == loca) && (city != city || other fields != other fields).
Because of these duplicates my row counts are not matching.
What I want to do: if a record matches completely in both files, remove it from both files and then carry on with the remaining records, all in one Data Flow Task.
Please help.
June 19, 2017 at 7:21 am
You could try this. Use a Merge Join transformation set up as a Full Outer Join, taking its inputs from File A and File B. Select all of the fields from both File A and File B for output, and join on all of the fields. (Merge Join needs both inputs sorted on the join columns, so add a Sort transformation on each side or mark the sources as sorted.)
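To make that concrete, here is roughly what the Full Outer Join would produce for the sample rows above, assuming the File B columns are aliased with a B_ prefix (the prefix is just for illustration; use whatever names you map in the transformation):

id, name, loca, city       B_id, B_name, B_loca, B_city
1,  rav,  abc,  richmond   1,    rav,    abc,    richmond
1,  rav,  abc,  rnyc       NULL, NULL,   NULL,   NULL
1,  rav,  abc,  nj         NULL, NULL,   NULL,   NULL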
Now add a Conditional Split using ISNULL(FieldName) == TRUE to find the records that don't match: one expression for the A side and one for the B side, so you get two outputs, each holding the records unique to one file (see the sketch below).
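As a rough sketch, again assuming the File B columns come through the join with a B_ prefix, the two Conditional Split outputs could be defined like this:

Output "OnlyInFileA": ISNULL(B_id)
Output "OnlyInFileB": ISNULL(id)

ISNULL() in the SSIS expression language already returns a boolean, so the == TRUE comparison is optional.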
Now add a Merge to combine the two outputs from the Conditional Split into one data set. Rows that matched on every field never reach either output, so they are dropped from both sides and you are left with just the remaining records, all in one data flow.