March 4, 2009 at 10:06 am
Matt Miller (6/16/2008)
Having done some amount of testing on this - while this scenario is certainly convenient in some scenarios, it has a tendency to perform on par with CSQ's, so it will suffer rather severely on large datasets. It unfortunately just seems to be a cleaner way to write a Correlated sub-query, but with the same kinds of perf challenges the CSQ's had (it forces row-by-row evaluation, the logical reads tend to get out of control, etc...)
Can someone comment on the veracity of Matt's claims?
Or perhaps Matt, do you have evidence (tests we can run, etc.)?
Thanks, 😀
Paul DB
March 4, 2009 at 10:13 am
Paul DB (3/4/2009)
Matt Miller (6/16/2008)
Having done some amount of testing on this - while this scenario is certainly convenient in some scenarios, it has a tendency to perform on par with CSQ's, so it will suffer rather severely on large datasets. It unfortunately just seems to be a cleaner way to write a Correlated sub-query, but with the same kinds of perf challenges the CSQ's had (it forces row-by-row evaluation, the logical reads tend to get out of control, etc...)
Can someone comment on the veracity of Matt's claims?
Or perhaps Matt, do you have evidence (tests we can run, etc.)?
Thanks, 😀
There's certainly no reason NOT to believe it. The syntax clearly subsets the data per row (via a table function), which is exactly what a correlated sub-query does; the OP merely removed the table function.
Do let us know when you compile your test results.
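For anyone who wants to test Matt's claim themselves, here is a minimal sketch. The table names (dbo.Orders, dbo.OrderLines) and columns are hypothetical - substitute a suitably large pair of your own - and the idea is simply to run an APPLY and its join/aggregate equivalent back to back with I/O statistics on, then compare the logical reads:

```sql
-- Hedged sketch: Orders/OrderLines are made-up names; use your own large tables.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- APPLY version (evaluated per outer row)
SELECT o.OrderID, x.LastLineDate
FROM dbo.Orders AS o
OUTER APPLY
    (SELECT MAX(l.LineDate) AS LastLineDate
     FROM dbo.OrderLines AS l
     WHERE l.OrderID = o.OrderID) AS x;

-- Equivalent join/aggregate version (set-based)
SELECT o.OrderID, agg.LastLineDate
FROM dbo.Orders AS o
LEFT JOIN
    (SELECT OrderID, MAX(LineDate) AS LastLineDate
     FROM dbo.OrderLines
     GROUP BY OrderID) AS agg
    ON agg.OrderID = o.OrderID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Compare the "logical reads" lines for the two queries in the Messages tab.
```

If the APPLY version shows logical reads growing roughly with the outer row count, that supports Matt's row-by-row characterization; if the optimizer unnests it, the two plans may come out identical.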
March 10, 2009 at 11:02 pm
Thank you for the article. Believe it or not, I had a problem today, which I solved using your method!
March 10, 2009 at 11:06 pm
wow...that's great news.
whole reason for the article....Mission accomplished !
🙂
GAJ
Gregory A Jackson MBA, CSM
August 12, 2009 at 11:49 am
Thank You,
This was my first time using the OUTER APPLY clause. It seems to be much more expensive than a regular join against a derived table. It makes total sense to reference the outer query from the inner query so we don't scan all the rows.
But according to Query Analyzer it comes out at 26% versus 74%.
-- 21%
SELECT T.CO, T.TR, S.rowNo, S.ST, T.PC, T.ST, T.ET, S.AATime AS S_Actual,
       DRV_Seg.AATime AS derivedtbl_1_Actual
FROM Table1 AS T
INNER JOIN Table2 AS S ON T.TR = S.TR AND S.CO = T.CO
LEFT OUTER JOIN
    (SELECT CO, TR, rowNo, AATime
     FROM NADataSource.Segment
     WHERE (ST = '1')
       AND (LEFT(PC, 1) <> 'B') AND (PC <> 'lhe')
       AND (PC <> 'lhy')) AS DRV_Seg
    ON S.TR = DRV_Seg.TR AND DRV_Seg.CO = S.CO AND DRV_Seg.rowNo = S.rowNo
WHERE (T.CO = N'338') AND (T.TR = '9935')
ORDER BY S.rowNo
-- 61%
SELECT T.CO, T.TR, S.rowNo, S.ST, T.PC, T.ST, T.ET, S.AATime AS S_Actual,
       DRV_Seg.AATime AS derivedtbl_1_Actual
FROM Table1 AS T
INNER JOIN Table2 AS S ON T.TR = S.TR AND S.CO = T.CO
OUTER APPLY
    (SELECT CO, TR, rowNo, AATime
     FROM Table2 AS S1
     WHERE S.TR = S1.TR
       AND S.CO = S1.CO
       AND S1.rowNo = S.rowNo
       AND (ST = '1')
       AND (LEFT(PC, 1) <> 'B')
       AND (PC <> 'lhe')
       AND (PC <> 'lhy')
    ) AS DRV_Seg
WHERE (T.CO = N'338') AND (T.TR = '9935')
ORDER BY S.rowNo
-- 18% (cheapest one)
SELECT T.CO, T.TR, S.rowNo, S.ST, T.PC, T.ST, T.ET, S.AATime AS S_Actual,
       S1.AATime AS derivedtbl_1_Actual
FROM Table1 AS T
INNER JOIN Table2 AS S ON T.TR = S.TR AND S.CO = T.CO
LEFT OUTER JOIN Table2 AS S1 ON T.TR = S1.TR AND S1.CO = T.CO
    AND S1.ST = '1' AND S.rowNo = S1.rowNo
WHERE (T.CO = N'338') AND (T.TR = '9935')
ORDER BY S.rowNo
Can anybody explain to me which is the better approach?
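One caveat worth keeping in mind: those percentages are the optimizer's *estimated* costs relative to the batch, not measured work, and estimates can be badly off. A more reliable comparison is to run each variant with actual I/O and timing statistics enabled and compare logical reads (a minimal sketch; run each of the three queries above in place of the comment):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- paste one of the three variants here and run it,
-- then repeat for the other two

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Compare the "logical reads" and "elapsed time" figures in the Messages tab.
```

Also note that the third query is not logically equivalent to the first two - it drops the PC filters (LEFT(PC, 1) <> 'B', etc.) - so some of its cost advantage may simply be doing less work.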