January 25, 2011 at 5:13 am
Hi,
What is the difference between putting an extra AND condition in the ON clause and putting it in the WHERE clause of a query?
For example:
SELECT C.A
, D.A
, E.A
FROM SomeTable C
INNER JOIN AnotherTable D
ON C.B = D.B
AND C.D = smthng
INNER JOIN NextTable E
ON E.B = D.C
AND D.C = smth
versus:
SELECT C.A
, D.A
, E.A
FROM SomeTable C
INNER JOIN AnotherTable D
ON C.B = D.B
INNER JOIN NextTable E
ON E.B = D.C
WHERE D.C = smth
AND C.D = smthng
Both of the queries above will fetch the same result... but performance-wise, is there any difference?
January 25, 2011 at 5:49 am
With an INNER JOIN they are equivalent queries and will be executed absolutely identically. With an OUTER JOIN they are not equivalent and can return different results.
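To see that concretely, here's a minimal sketch (the tables and data are invented purely for illustration). With a LEFT OUTER JOIN, a filter on the right-hand table behaves very differently in the ON clause than in the WHERE clause:
DECLARE @Cust TABLE (CustID INT, Name VARCHAR(10))
DECLARE @Orders TABLE (CustID INT, Amount INT)
INSERT INTO @Cust VALUES (1, 'Ann')
INSERT INTO @Cust VALUES (2, 'Bob')
INSERT INTO @Orders VALUES (1, 100)
--===== Filter in the ON clause: the condition fails for Bob, but the LEFT JOIN
     -- preserves his row with NULLs, so two rows come back.
SELECT c.Name, o.Amount
  FROM @Cust c
  LEFT JOIN @Orders o
    ON c.CustID = o.CustID
   AND o.Amount > 50
--===== Filter in the WHERE clause: applied after the join, so Bob's NULL row
     -- is discarded and only one row comes back.
SELECT c.Name, o.Amount
  FROM @Cust c
  LEFT JOIN @Orders o
    ON c.CustID = o.CustID
 WHERE o.Amount > 50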
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 25, 2011 at 5:57 am
The best way to find potential performance differences between queries is to look at the estimated and actual plans in Management Studio.
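If you'd rather capture the plans from T-SQL than from the GUI, something like this works (the SELECT here is just a stand-in query):
--===== Estimated plan: the batch is compiled but NOT executed.
SET SHOWPLAN_XML ON
GO
SELECT name FROM sys.objects
GO
SET SHOWPLAN_XML OFF
GO
--===== Actual plan: the query runs and the plan comes back with the results.
SET STATISTICS XML ON
GO
SELECT name FROM sys.objects
GO
SET STATISTICS XML OFF
GO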
January 26, 2011 at 1:51 pm
@joe:
Thank you for the detailed explanation without the "overhead" that has prompted some of us to complain from time to time. This post is the kind I would point others to.
Thank you!!
January 27, 2011 at 3:43 pm
Good one, Joe. A bit beyond the scope of the question posed by the OP, but I found it enlightening. I hadn't really thought through why you use the column aliases in the ORDER BY, but it makes total sense when you think of it as a completed record set being passed to a cursor.
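A small sketch of that point, for anyone following along: ORDER BY is logically processed after the SELECT list is built, so it can see the column aliases, while WHERE cannot:
--===== ORDER BY runs after the SELECT list, so the alias is visible to it:
SELECT name        AS ObjectName,
       create_date AS Created
  FROM sys.objects
 ORDER BY ObjectName --Legal: the alias exists by this point
--===== WHERE runs before the SELECT list, so the same alias is NOT visible there:
-- SELECT name AS ObjectName FROM sys.objects WHERE ObjectName LIKE 'sys%' --Fails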
I used to code from top to bottom and never really had a problem keeping things in order in my mind, but with 2008 IntelliSense I've found that using your method works better - I don't get the squiggly red lines so much.
Todd Fifield
January 29, 2011 at 10:09 am
Henry Treftz (1/25/2011)
The best way to find potential performance differences between queries is to look at the estimated and actual plans using management studio.
Oh... be careful, now. You simply cannot trust the execution plans 100%. The only thing you can really trust is SQL Profiler. I'll give you an example...
The following code produces two execution plans. Both the estimated and actual execution plans say that the top section of code will take 0% of the batch and that the second section will take 100%. Yet, when we run the code and look at the actual times, a whole different story is told...
/****************************************************************************************
Purpose:
This code demonstrates that the estimated and actual execution plans in SQL Server can
be 100% INCORRECT and that the execution plan should only be relied on to provide hints
as to what may be wrong with a query rather than an absolute indication. This code
requires SQL Server 2005 or later (it uses a recursive CTE).
The code creates a temp table for 10 years worth of dates starting with 2000-01-01 using
two different methods. The first method uses a recursive CTE and the second method uses
a "Tally" table. The output of each method is directed to a "throw-away" variable to
take display delays out of the picture.
Please check both the actual and estimated execution plans and compare the % of batch.
Please see the following article on how to build a Tally table and how they can be used
to replace certain While Loops.
http://www.sqlservercentral.com/articles/T-SQL/62867/
****************************************************************************************/
SET NOCOUNT ON
--=======================================================================================
-- Recursive method shown by (Name with-held)
--=======================================================================================
PRINT '========== Recursive method =========='
--===== Turn on some performance counters ===============================================
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @Bitbucket DATETIME --Holds display output so display times aren't measured.
--===== Execute the code being tested ===================================================
DECLARE @DateVal DATETIME
SET @DateVal = '2000-01-01'
;WITH mycte AS
(
 SELECT @DateVal AS DateVal
  UNION ALL
 SELECT DateVal + 1
   FROM mycte
  WHERE DateVal + 1 < DATEADD(yy, 10, @DateVal)
)
SELECT @Bitbucket = d.DateVal
  FROM mycte d
OPTION (MAXRECURSION 0)
--===== Turn off the performance counters and print a separator =========================
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
PRINT REPLICATE('=',90)
GO
--=======================================================================================
-- Tally table method by Jeff Moden
--=======================================================================================
PRINT '========== Tally table method =========='
--===== Turn on some performance counters ===============================================
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @Bitbucket DATETIME --Holds display output so display times aren't measured.
--===== Execute the code being tested ===================================================
DECLARE @StartDate AS DATETIME
SET @StartDate = '2000-01-01'
SELECT TOP (DATEDIFF(dd, @StartDate, DATEADD(yy, 10, @StartDate)))
       @Bitbucket = @StartDate - 1 + t.N
  FROM dbo.Tally t
 ORDER BY t.N
--===== Turn off the performance counters and print a separator =========================
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
PRINT REPLICATE('=',90)
GO
If you don't already have the Tally table that the above code requires, please see the following article for what it is, how to make one, and how it works...
http://www.sqlservercentral.com/articles/T-SQL/62867/
Why is there such a difference in % of batch, and why is it so wrong? The answer is simple: in any form of recursion, only the first iteration is costed as part of the execution plan, and the first section of code is a recursive CTE.
As a sidebar, this is one of the reasons why so very many people think that recursive CTEs are the bee's knees... the execution plans for them look like they'll even beat the Tally table. In truth, recursive CTEs are (usually; there are a few exceptions) nothing more than hidden RBAR. They're loops that make people feel good because there's no explicit loop to be seen. Internally, they can make a real mess of code.
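To make the hidden loop concrete, here's roughly what the recursive CTE above amounts to when written as an explicit WHILE loop (a sketch for illustration only, not a drop-in replacement):
--===== Each level of recursion in the CTE is effectively one more pass
     -- through a loop like this: one "row" produced per iteration.
DECLARE @DateVal   DATETIME
DECLARE @EndDate   DATETIME
DECLARE @Bitbucket DATETIME
SET @DateVal = '2000-01-01'
SET @EndDate = DATEADD(yy, 10, @DateVal)
WHILE @DateVal < @EndDate
 BEGIN
        SET @Bitbucket = @DateVal --The "row" for this iteration
        SET @DateVal   = @DateVal + 1
 END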
Getting back to my original point, if all you do to evaluate performance is to look at estimated and actual execution plans, you're missing out on a whole lot. Use SQL Profiler or some other method to make your final decision as to which code to use.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 29, 2011 at 1:30 pm
Jeff Moden
Oh... be careful, now. You simply cannot trust the execution plans 100%. The only thing you can really trust is SQL Profiler. I'll give you an example...
Good call, Jeff. The execution plans also won't show hidden RBAR in the form of scalar functions that run SELECT statements against tables. I seem to recall Gail has an article about that one.
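The shape of the problem looks something like this (the function and tables here are hypothetical, just to illustrate):
--===== Hypothetical scalar UDF: it runs a SELECT for every row of the calling
     -- query, but that per-row cost barely registers in the calling query's plan.
CREATE FUNCTION dbo.GetOrderTotal (@CustID INT)
RETURNS INT
AS
 BEGIN
        DECLARE @Total INT
        SELECT @Total = SUM(Amount) --Hidden table access, once per outer row
          FROM dbo.Orders
         WHERE CustID = @CustID
        RETURN @Total
 END
GO
--===== Called once per row of dbo.Customers... hidden RBAR:
SELECT CustID, dbo.GetOrderTotal(CustID) AS OrderTotal
  FROM dbo.Customers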
Todd Fifield