October 11, 2010 at 12:32 pm
I would like to get the row count of the source table (Oracle) and compare it to the row count of the destination table (SQL Server 2008). I tried it with an Execute SQL Task containing SELECT COUNT(*) FROM [source table] and a similar Execute SQL Task for the destination table.
However, I am wondering if I should replace those two tasks with a Data Flow Task containing Row Count transformations. If the source and destination tables each contain a million or more rows, would I get better performance from the Row Count transformations or from the Execute SQL Tasks? Which approach would be best practice?
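For reference, the two tasks currently run queries along these lines (actual schema and table names replaced with placeholders):

-- Execute SQL Task #1: against the Oracle source (Oracle does not use bracketed identifiers)
SELECT COUNT(*) FROM SRC_SCHEMA.SOURCE_TABLE

-- Execute SQL Task #2: against the SQL Server 2008 destination
SELECT COUNT(*) FROM [dbo].[DestinationTable]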
October 11, 2010 at 9:57 pm
Row Count... the package looks more modular that way.
Raunak J
October 12, 2010 at 8:02 am
I think a simple SELECT COUNT(1) FROM [Source Table] will be faster than a whole data flow that reads everything into memory just to count the rows. But it is always nice to experiment: build two packages, one with the SQL solution and one with the data flow solution, and let us know which one is faster.
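If you go the SQL route, alias the count so you can pick the value up afterwards. A minimal sketch (the variable name is just an example):

-- runs in an Execute SQL Task with ResultSet set to "Single row";
-- on the Result Set page, map the RowCnt column to a package-scoped variable, e.g. User::SourceRowCount
SELECT COUNT(1) AS RowCnt FROM [Source Table]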
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
October 12, 2010 at 6:07 pm
Can I use the SELECT COUNT(1) to populate a variable from within the Data Flow task?
October 12, 2010 at 9:59 pm
imani_technology (10/12/2010)
Can I use the SELECT COUNT(1) to populate a variable from within the Data Flow task?
The statement will only return a count of rows... create a variable with package scope and you can use it in the Data Flow Task (DFT)... what is it you wish to implement?
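For example, if the aim is simply to compare the two counts, you do not even need the DFT for that part. Once both Execute SQL Tasks have populated two package-scoped variables (the names below are just examples), you can test them in the expression of the precedence constraint that follows the tasks (evaluation operation set to Expression):

@[User::SourceRowCount] == @[User::DestinationRowCount]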
Raunak J