April 17, 2018 at 4:26 am
I decided to try a different approach:
I have an SSIS package with two different connections and two different flat file sources inside the same data flow.
One connection tries to read the file with the RES column and the other without the RES column.
Both flat file sources connect to the same UNION ALL.
The issue is that if one of the sources gets an error, then the other is not able to move data past the UNION ALL 🙁
I was expecting that, if one source fails and the other succeeds (as is occurring), I would be able to continue. But the problem is that even with data source 2 succeeding and sending 3 rows into the UNION ALL (as per the data viewer), no data is moved from the UNION ALL into the next task.
Can you help me understand how I can change this?
Thank you
April 17, 2018 at 4:37 am
Personally, I think Phil's suggestion earlier is the winner. Check if the rogue column exists first; if it doesn't, use the data flow that doesn't require it; if it does, use the one that does have it.
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
April 17, 2018 at 6:35 am
Thom A - Monday, April 16, 2018 9:20 AM

river1 - Monday, April 16, 2018 9:15 AM

Now my problem is: I would not like the package to fail if Data Flow 1 fails. Instead I would like to wait for the result of Data Flow 2 before marking the package as failed. Can someone tell me how I can do this?
Basically:
If Data Flow 1 succeeds, then the package succeeds.
If Data Flow 1 fails, move to Data Flow 2, but don't mark the package as failed.
If Data Flow 2 fails, mark the package as failed; otherwise (if it succeeds), mark it as successful. Thanks

Change the value of the FailPackageOnFailure property on the Data Flow task to False.
Thank you very much for your reply. I cannot do that, because both flat file sources are in the same data flow. So what I have is: one SSIS package with two connections, one named WithResColumn and the other named WithoutResColumn. Then, inside the same data flow, I have two flat file sources, one pointing to one connection and the other pointing to the other. Both flat file sources connect to the same UNION ALL task. I would like that if one fails, the other would continue. The problem is that if one of the flat file sources fails, then, even with the other succeeding, the data is not passed from the UNION ALL onwards. What can I do?
April 17, 2018 at 6:41 am
Thom A - Tuesday, April 17, 2018 4:37 AM

Personally, I think Phil's suggestion earlier is the winner. Check if the rogue column exists first; if it doesn't, use the data flow that doesn't require it; if it does, use the one that does have it.

That will require me to have two separate data flows with the same code, so not so easy to maintain.
I would prefer to have just one data flow with two different flat file sources inside.
This is why I would be very pleased to understand what I can do to guarantee the following:
Both data sources connect to the same UNION ALL. If one fails, the other should continue the process without problems... The problem is that at this stage, if one fails, the other is not able to proceed past the UNION ALL task 🙁
April 17, 2018 at 6:48 am
Thom A - Tuesday, April 17, 2018 4:37 AM

Personally, I think Phil's suggestion earlier is the winner. Check if the rogue column exists first; if it doesn't, use the data flow that doesn't require it; if it does, use the one that does have it.

That will require me to have two separate data flows with the same code, so not so easy to maintain.

Well, yes, but I checked what you wrote in an earlier post:
I have created two connections, one called "ResColumn" and the other "WithoutResColumn"
Then I have created a data flow for each.
So ... each connection has a data flow. I was just using the information you gave us.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
April 17, 2018 at 6:53 am
river1 - Tuesday, April 17, 2018 6:41 AM

Both data sources connect to the same UNION ALL. If one fails, the other should continue the process without problems... The problem is that at this stage, if one fails, the other is not able to proceed past the UNION ALL task 🙁
Creating a package containing components which are certain to fail, every time it runs, is bad solution architecture and I advise against it. There are multiple alternatives which will run successfully without throwing errors, as we've already mentioned:
1) Pre-process the input file to ensure that it always contains the expected columns, then feed that into your data flow.
2) Use a Script Component to handle the varying columns
3) Use precedence constraints and two data flows to handle the varying columns
4) ... I'm sure there are other ways too
April 17, 2018 at 7:00 am
river1 - Tuesday, April 17, 2018 6:41 AM

Thom A - Tuesday, April 17, 2018 4:37 AM

Personally, I think Phil's suggestion earlier is the winner. Check if the rogue column exists first; if it doesn't, use the data flow that doesn't require it; if it does, use the one that does have it.

That will require me to have two separate data flows with the same code, so not so easy to maintain.

Well, yes, but I checked what you wrote in an earlier post:
I have created two connections, one called "ResColumn" and the other "WithoutResColumn"
Then I have created a data flow for each.
So ... each connection has a data flow. I was just using the information you gave us.
Also, as you haven't needed to manually code your source, you won't have two separate data flows you need to code; you'll have two, each with a simple flat file source in it. They are very easy to maintain.
It seems you're overcomplicating the matter by wanting to use a Script Task/Component. If you take that part out of the equation, it becomes a lot easier.
April 26, 2018 at 10:50 am
I've been around long enough to remember when reading from a data file was just something you were expected to know how to do if you wanted to call yourself a programmer. If you can extend that into getting through the learning curve of writing an SSIS script source, then checking for the extra column and ignoring it involves adding about three lines of code.
Torturing a bunch of package tasks, variables, precedence constraints, and error settings into dealing with it boggles my mind. It reminds me of a Family Circus cartoon with Billy running all over the place leaving a trail of dotted lines while overlooking the simple task he was trying to accomplish in the first place (for those of you old enough to get that reference).