January 23, 2019 at 6:41 am
A lot of times when developing SSIS packages, I find myself adding tasks and components to handle things when something goes wrong. For example, when importing a flat file, I always add a script task to make sure the file exists, so if it's not there, the control flow goes around any tasks related to or dependent on that particular file. When dealing with FTP transfers, I usually set up the job that runs the package to run multiple times during the day in case the source or destination server is temporarily unavailable, and I always make sure the file(s) have been successfully uploaded/downloaded before deleting them. I also make a point of importing files into staging tables first, so that if there is bad data, or the file was formatted incorrectly, I don't harm the actual production table.
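Roughly, the Main method of that file-exists Script Task ends up looking like this (the variable names are placeholders, the SSIS-generated ScriptMain boilerplate is omitted, and both variables need to be listed in the task's ReadOnly/ReadWrite variable lists):

```csharp
public void Main()
{
    // "User::SourceFilePath" is a placeholder package variable holding the expected path
    string filePath = Dts.Variables["User::SourceFilePath"].Value.ToString();

    // Record the result in a boolean variable; downstream precedence
    // constraints evaluate it so the flow routes around the import tasks
    // when the file is missing
    Dts.Variables["User::FileExists"].Value = System.IO.File.Exists(filePath);

    // The check itself always succeeds; the control flow decides what to skip
    Dts.TaskResult = (int)ScriptResults.Success;
}
```

The precedence constraints on the import path are then set to evaluate an expression (e.g. @[User::FileExists]) so the file-related tasks only run when the file is actually there.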
I call this Coding for Possible Failure (seeing as I'm a bit of a glass-half-empty kind of guy).
What would you call it?
Luther
January 23, 2019 at 6:47 am
I call it Coding for Eventual Failure. Every process will eventually fail. Like you, I tend to code pessimistically.
Tim Mitchell
TimMitchell.net | @tmitch.net | Tyleris.com
ETL Best Practices
January 23, 2019 at 6:57 am
I call it "smart coding" 🙂
I think developing ETL processes in such a way is a good approach, as long as you don't take it too far.
As opposed to just taking a pessimistic development style with everything, I've more recently started looking at it from a critical/non-critical path perspective. For instance, if your FTP process is downloading a file that is critical to the overall process, then failing the process when something goes wrong is exactly what you want... if not, then you can just side-step it (with a notification, of course) and carry on (see the sketch below).
All of this depends on the data you're dealing with and the use case. A financial data mart, for example, may require you to be more pessimistic than other subject areas.
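A rough sketch of that side-step logic in a Script Task; "User::IsCriticalFile" is an assumed package variable and DownloadFile() is just a hypothetical stand-in for whatever does the transfer:

```csharp
public void Main()
{
    // Assumed variable marking whether this file sits on the critical path
    bool isCritical = (bool)Dts.Variables["User::IsCriticalFile"].Value;

    try
    {
        DownloadFile(); // placeholder for the actual FTP transfer
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (System.Exception ex)
    {
        if (isCritical)
        {
            // Critical path: surface the error and fail the task (and package)
            Dts.Events.FireError(0, "FTP download", ex.Message, string.Empty, 0);
            Dts.TaskResult = (int)ScriptResults.Failure;
        }
        else
        {
            // Non-critical: raise a notification, then carry on
            Dts.Events.FireWarning(0, "FTP download", ex.Message, string.Empty, 0);
            Dts.TaskResult = (int)ScriptResults.Success;
        }
    }
}

private void DownloadFile()
{
    // Hypothetical helper wrapping whatever performs the transfer
}
```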
January 29, 2019 at 1:27 am
I would also call it something between smart coding and coding for eventual failure (smart coding for eventual failure?). Some of the things mentioned, like staging tables, should be (and mostly are) considered good practice; other things, like file-exists checks, you tend to learn rather quickly from others' stupidity. 🙂
January 29, 2019 at 6:16 am
Martin Schoombee - Wednesday, January 23, 2019 6:57 AM
I call it "smart coding" 🙂 I think developing ETL processes in such a way is a good approach, as long as you don't take it too far. [...]
You've put a more positive spin on the process by changing the terminology. Makes it sound less depressing for sure.
January 29, 2019 at 9:37 am
It's just good coding. Checking for potential and common problems is what we should all do and rarely do. If there's one good thing about the cloud, it's that these failures occur far more often there, so the code we always ought to have been writing is now the code we have no choice but to write.
January 29, 2019 at 12:14 pm
Funny - I call that "building a resilient process".
😎
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right, what was my emergency again?
January 29, 2019 at 7:40 pm
I agree with what the others have said but I just call it "Bullet-Proofing".
--Jeff Moden
Change is inevitable... Change for the better is not.
February 10, 2019 at 1:56 pm
Jeff Moden - Tuesday, January 29, 2019 7:40 PM
I agree with what the others have said but I just call it "Bullet-Proofing".
Exactly right. I'm not coding for failure; I'm coding to keep an issue from bringing the ETL process to a halt. Errors and inconsistencies get logged and reported. Each process has a validation step soon after extraction that checks for all reasonable errors. Every now and then I have to add a new check. Rows that fail a vital check are logged and flagged to prevent further processing.
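A minimal sketch of that flag-and-skip validation step; the connection string, table, and column names here are made up for illustration:

```csharp
using System;
using System.Data.SqlClient;

class ValidateStaging
{
    static void Main()
    {
        // Illustrative connection string; point this at your staging database
        const string connStr = "Server=.;Database=StagingDB;Integrated Security=true;";

        // Flag rows that fail a vital check so downstream steps skip them;
        // the flagged rows stay in staging where they can be logged and reported
        const string sql = @"
            UPDATE stg.Orders
               SET IsValid = 0,
                   ValidationError = 'Missing or non-positive order amount'
             WHERE OrderAmount IS NULL OR OrderAmount <= 0;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            int flagged = cmd.ExecuteNonQuery();
            Console.WriteLine($"{flagged} row(s) flagged as invalid.");
        }
    }
}
```

The downstream load then filters on IsValid = 1, so one bad row never stops the rest of the batch.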