November 7, 2007 at 11:28 pm
Comments posted to this topic are about the item Parallel Processing of Large Volume ETL Jobs
November 7, 2007 at 11:29 pm
The title of this article is "Parallel Processing of Large Volume ETL Jobs"... what do you consider to be a "Large Volume ETL Job"?
--Jeff Moden
Change is inevitable... Change for the better is not.
November 8, 2007 at 12:17 am
"Large volume" refers to the number of rows and the data size of the source(s) that must be processed into the target system while applying a significant amount of complex validation, filtration, and transformation logic. What counts as "number of rows" and "a significant amount" depends on the hardware capacity and the available ETL time window of each individual system. Before adopting the idea discussed in the article, the project should be studied for facts such as current and future hardware capacity, how much data parallelism the current hardware can support, the current and future time window available for ETL processing, the future complexity of the validation logic, and so on.
November 8, 2007 at 10:44 am
I think the information you have attempted to communicate is valuable. However, the repeated use of slashes and the fragmented or run-on sentences made the article almost impossible to read.
November 8, 2007 at 12:38 pm
Oh, and you teased me. You mentioned SQL Set Theory twice in your introduction and then never addressed it in the body. Now I have to go google SQL Set Theory.
November 8, 2007 at 12:56 pm
This is great timing: we were just discussing this in regard to using Hadoop or SQL Server to do some very large, close-to-terabyte, imports and transformations.
I have a couple of questions, and I realize you didn't want to include code, but a little start on it would help if possible:
- How do you physically split up the large files and track them in each thread?
- How does the master process receive its messages from the child processes?
- In a SQL Server implementation, would you most likely use CLR code to break up the files and a bulk load to import them?
- Any suggestions on foreign keys and indexes?
This is a great topic, it may give us a lead on how to proceed with our new data project.
Very much appreciated!
Skål - jh
November 8, 2007 at 10:15 pm
RAGHAVENDRA NARAYANA (11/8/2007)
"Large volume" refers to the number of rows and the data size of the source(s) that must be processed into the target system while applying a significant amount of complex validation, filtration, and transformation logic. What counts as "number of rows" and "a significant amount" depends on the hardware capacity and the available ETL time window of each individual system. Before adopting the idea discussed in the article, the project should be studied for facts such as current and future hardware capacity, how much data parallelism the current hardware can support, the current and future time window available for ETL processing, the future complexity of the validation logic, and so on.
C'mon... How many rows and columns per row are you talking about?
--Jeff Moden
Change is inevitable... Change for the better is not.
November 10, 2007 at 4:33 am
Think of an ETL system where we expect around 5 million transactions coming in, each with around 25 columns.
I would ask any technical architects who have worked on database sizing and hardware configuration to share their thoughts on the article, including parallelism and processors, as well as on this question.
November 10, 2007 at 7:37 am
Thanks for the rowcount and line size... I've got an experiment or two to do... I'll be back...
--Jeff Moden
Change is inevitable... Change for the better is not.
November 12, 2007 at 2:03 am
- How do you physically split up the large files and track them in each thread?
- How does the master process receive its messages from the child processes?
For the above two questions, I have given a detailed explanation in the article.
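For readers who want a concrete starting point, here is a minimal, illustrative sketch of one possible approach (not the article's exact mechanism): a master splits the source file into chunk files, launches one worker thread per chunk, and collects status messages from the workers through a queue. The file names and chunk size are placeholders.

import threading
import queue

CHUNK_ROWS = 100_000             # rows per chunk file (placeholder value)
SOURCE_FILE = "large_input.dat"  # hypothetical source file name

def write_chunk(lines, idx):
    """Write one chunk of lines to its own file and return the file name."""
    name = f"chunk_{idx:04d}.dat"
    with open(name, "w", encoding="utf-8") as out:
        out.writelines(lines)
    return name

def split_file(path, rows_per_chunk):
    """Split the source file into chunk files; return the chunk file names."""
    chunk_paths, chunk, idx = [], [], 0
    with open(path, "r", encoding="utf-8") as src:
        for line in src:
            chunk.append(line)
            if len(chunk) >= rows_per_chunk:
                chunk_paths.append(write_chunk(chunk, idx))
                chunk, idx = [], idx + 1
        if chunk:
            chunk_paths.append(write_chunk(chunk, idx))
    return chunk_paths

def worker(chunk_path, status_q):
    """Process one chunk, then report success or failure back to the master."""
    try:
        rows = 0
        with open(chunk_path, "r", encoding="utf-8") as f:
            for _ in f:
                rows += 1   # real validation/transformation/load would go here
        status_q.put((chunk_path, "OK", rows))
    except Exception as exc:
        status_q.put((chunk_path, "FAILED", str(exc)))

if __name__ == "__main__":
    status_q = queue.Queue()
    chunks = split_file(SOURCE_FILE, CHUNK_ROWS)
    threads = [threading.Thread(target=worker, args=(c, status_q)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The master drains the queue to see which chunks succeeded or failed.
    while not status_q.empty():
        print(status_q.get())

In practice the worker would call the actual validation/transformation/load routine, and the master could re-run failed chunks or abort the batch, depending on the restartability rules the ETL design requires.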
- In a SQL Server implementation, would you most likely use CLR code to break up the files and a bulk load to import them?
I am not sure about this. I would ask experts in this particular area to address it.
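As one possible, unconfirmed approach: the chunk files produced by the splitter could be loaded with plain BULK INSERT statements issued from the driver process, without CLR. The sketch below assumes this; the table name, connection string, and file path are placeholders, and the files must be readable by the SQL Server service account.

import pyodbc

# Placeholder connection details for the staging database.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=etl_stage;Trusted_Connection=yes"
)

def bulk_load_chunk(chunk_path):
    """Load one chunk file into a hypothetical staging table via BULK INSERT."""
    sql = f"""
        BULK INSERT dbo.StagingTable
        FROM '{chunk_path}'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', TABLOCK);
    """
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        conn.cursor().execute(sql)
    finally:
        conn.close()

bulk_load_chunk(r"C:\etl\chunk_0000.dat")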
- Any suggestions on foreign keys and indexes?
Nothing much that I can think of right now. A few things: keep the main-table and related-table inserts/updates in one commit/rollback block. Take care with identity columns in case of rollback or failure; the identity numbers consumed are lost if a rollback or failure happens. Insert/update the records in bulk, in chunks of thousands, preferably in the target table's primary index order. In the target tables, be strict about adding indexes; try composite indexes instead of multiple individual indexes.
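To make the batching advice concrete, here is a small sketch under assumed names (the table, columns, and connection string are hypothetical, and the related-table inserts that would share each transaction are omitted): rows already sorted in the target's primary-key order are inserted in chunks of a few thousand, with each chunk committed as a unit and a rollback on failure.

import pyodbc

BATCH_SIZE = 5_000  # rows per commit (placeholder, tune to the system)
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=etl_target;Trusted_Connection=yes"
)

def load_in_batches(rows):
    """Insert rows in primary-key order, committing in chunks.

    `rows` is an iterable of (id, col1, col2) tuples already sorted by id;
    the table and column names are hypothetical.
    """
    conn = pyodbc.connect(CONN_STR, autocommit=False)
    cur = conn.cursor()
    cur.fast_executemany = True  # send each batch in one round trip
    sql = "INSERT INTO dbo.TargetTable (id, col1, col2) VALUES (?, ?, ?)"
    batch = []
    try:
        for row in rows:
            batch.append(row)
            if len(batch) >= BATCH_SIZE:
                cur.executemany(sql, batch)
                conn.commit()   # this chunk's work commits as one unit
                batch = []
        if batch:
            cur.executemany(sql, batch)
            conn.commit()
    except Exception:
        conn.rollback()         # identity values consumed before a rollback are not reused
        raise
    finally:
        conn.close()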
November 23, 2007 at 9:17 am
Hi Raghavendra,
Great Article.
How do you actually spawn a new thread? I have to implement something like this, and the article has given me some valuable information. I would like to thank you for it, but before I begin I would like to perform some testing, and I can't seem to find any code that relates to spawning new threads.
Thanks once again, and I hope I am not causing you any unnecessary hassle.
M
December 16, 2008 at 10:04 am
Yes, I would like to know this too!