January 8, 2017 at 11:21 am
Folks,
Have a good day.
I need some clarification regarding the usage of checkpoints in SSIS.
I set a checkpoint for a single task. In this task, I am trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or from the 1st row when it is executed again?
Kindly explain it.
Thanks in advance.
Regards
Kannan
January 8, 2017 at 12:39 pm
Kannan Vignesh (1/8/2017)
Folks, have a good day.
I need some clarification regarding the usage of checkpoints in SSIS.
I set a checkpoint for a single task. In this task, I am trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or from the 1st row when it is executed again?
Kindly explain it.
Thanks in advance.
Regards
Kannan
It will restart from row 1. This quote from BOL clears up any doubt:
A package can be restarted only at the control flow level. You cannot restart a package in the middle of a data flow.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
January 8, 2017 at 4:03 pm
If your aim is to continue from a "checkpoint" after a failure, I would suggest one of two ideas.
1. Log what you have processed thus far; then, when the process starts again, it retrieves the value of where it got to before and continues from there (see the sketch after this list).
2. Check for the existence of the data and effectively skip that iteration if it already exists. The easiest way would be to use expressions on your control flow.
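As a minimal T-SQL sketch of idea 1, assuming an ascending key column and hypothetical names throughout (dbo.ETL_Checkpoint, dbo.SourceTable, dbo.TargetTable, Id, MyLoadPackage): keep a small watermark table, read it at the start of the package in an Execute SQL Task, parameterise the data flow's source query with it, and update it after a successful load.

IF OBJECT_ID('dbo.ETL_Checkpoint') IS NULL
    CREATE TABLE dbo.ETL_Checkpoint
    (
        PackageName sysname NOT NULL PRIMARY KEY,
        LastKeyLoaded int NOT NULL
    );

-- Read the last key loaded (e.g. in an Execute SQL Task that
-- populates an SSIS variable):
SELECT LastKeyLoaded
FROM dbo.ETL_Checkpoint
WHERE PackageName = 'MyLoadPackage';

-- The data flow's source query resumes from that point
-- (? is the OLE DB parameter holding the variable's value):
SELECT Id, Col1, Col2
FROM dbo.SourceTable
WHERE Id > ?
ORDER BY Id;

-- After a successful load, record how far we got:
UPDATE dbo.ETL_Checkpoint
SET LastKeyLoaded = (SELECT MAX(Id) FROM dbo.TargetTable)
WHERE PackageName = 'MyLoadPackage';

Something similar works for idea 2: count matching rows instead and use the result in a precedence-constraint expression to skip the load.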
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
January 8, 2017 at 5:54 pm
Many thanks!
January 9, 2017 at 4:09 pm
Thom A (1/8/2017)
If your aim is to continue from a "checkpoint" after a failure, I would suggest one of two ideas.
1. Log what you have processed thus far; then, when the process starts again, it retrieves the value of where it got to before and continues from there.
2. Check for the existence of the data and effectively skip that iteration if it already exists. The easiest way would be to use expressions on your control flow.
From the description ...
... trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or from the 1st row ...
it appears he simply wants to know whether, if a failure occurs (for example, in a data flow), the package will start at the row that errored when it is rerun.
---------
One thing you can do is redirect error rows to a different destination. It is not unusual for packages to be designed this way. You can later re-import the rows, once the problem is fixed, by re-running your package.*
*It is strongly recommended that your packages implement an acceptable way to handle potential duplication (such as deleting at the source or at the destination prior to the ETL move, depending on what makes sense in each particular situation).
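To make that footnote concrete, here is a rough sketch of the delete-at-destination approach, assuming hypothetical names (dbo.TargetTable for the target, dbo.ErrorRows for the error-row destination, Id for the key):

-- Clear any rows in the target that are about to be re-imported,
-- so a re-run cannot create duplicates:
DELETE t
FROM dbo.TargetTable AS t
WHERE EXISTS (SELECT 1 FROM dbo.ErrorRows AS e WHERE e.Id = t.Id);

-- Then re-import the corrected rows from the error destination:
INSERT INTO dbo.TargetTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM dbo.ErrorRows;

Deleting or filtering at the source works just as well; the point is simply that re-running the package must be idempotent.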
----------------------------------------------------
January 10, 2017 at 2:05 am
MMartin1 (1/9/2017)
Thom A (1/8/2017)
If your aim is to continue from a "checkpoint" after a failure, I would suggest one of two ideas.
1. Log what you have processed thus far; then, when the process starts again, it retrieves the value of where it got to before and continues from there.
2. Check for the existence of the data and effectively skip that iteration if it already exists. The easiest way would be to use expressions on your control flow.
From the description ...
... trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or from the 1st row ...
it appears he simply wants to know whether, if a failure occurs (for example, in a data flow), the package will start at the row that errored when it is rerun.
---------
One thing you can do is redirect error rows to a different destination. It is not unusual for packages to be designed this way. You can later re-import the rows, once the problem is fixed, by re-running your package.*
*It is strongly recommended that your packages implement an acceptable way to handle potential duplication (such as deleting at the source or at the destination prior to the ETL move, depending on what makes sense in each particular situation).
If it were entirely a data flow, though, SSIS should roll back any inserts it had already made if one failed. I therefore felt that this likely involves a combination of data flows and control-flow tasks, as there would then be a need to restart from a point in the process.
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
January 10, 2017 at 8:15 am
If it were entirely a data flow, though, SSIS should roll back any inserts it had already made if one failed.
By default, yes, but not necessarily.
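By way of illustration: whether the partially loaded rows survive depends on settings such as the OLE DB Destination's "Maximum insert commit size" and the package's TransactionOption. In plain T-SQL terms (hypothetical table names again), the guaranteed all-or-nothing case is what an explicit transaction gives you:

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.TargetTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM dbo.SourceTable;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- A failure anywhere undoes every row; roughly what wrapping
    -- the load in a package transaction (TransactionOption =
    -- Required) buys you in SSIS.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;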
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin