August 3, 2004 at 9:12 pm
Why is it so?
I have a transaction table (6 million rows) with a clustered index on column HIDDateTime. Most reports select on a date range on this column. While checking the performance of a proc (see below) I noticed it runs much faster if the date range is given as literal values rather than passed in as parameters. Obviously the date range must be parameterised, but I was wondering if someone could throw some light on the following behaviour.
If the dates in the WHERE clause come from parameters (example 1), the query executes a clustered index scan and takes 50 seconds to run. But if the date values are hard coded (example 2), SQL Server chooses a clustered index seek and runs much faster.
Is there any way I can 'force' the use of a seek rather than a scan? I'm presuming that the term 'scan' here means SQL Server is scanning the entire index.
TIA - Peter Jones
CREATE PROCEDURE dbo.stpTest
@FromDate As sql_variant,
@ToDate As sql_variant
AS
-- Convert the sql_variant parameters into local datetime variables
-- (style 120 = ODBC canonical, 'yyyy-mm-dd hh:mi:ss')
Declare @FromDate_LU As DateTime
Declare @ToDate_LU As DateTime
Set @FromDate_LU = Convert(DateTime,@FromDate,120)
Set @ToDate_LU = Convert(DateTime,@ToDate,120)
SELECT Columns....
FROM Tables...
------------------ WHERE CLAUSE 1
WHERE (T.HIDDateTime BETWEEN @FromDate_LU AND @ToDate_LU)
AND (T1.HIDRegradeDateTime BETWEEN @FromDate_LU AND @ToDate_LU)
Using this WHERE clause, the clustered index on HIDDateTime is scanned and the query takes 50 seconds to run.
------------------ WHERE CLAUSE 2
WHERE (T.HIDDateTime BETWEEN '2004-01-01' AND '2004-02-01')
AND (T1.HIDRegradeDateTime BETWEEN @FromDate_LU AND @ToDate_LU)
Using this WHERE clause, the clustered index on HIDDateTime is used for a seek and the query takes 15 seconds to run.
August 5, 2004 at 12:31 am
Looks to me like the problem might be with your Inner Join:
FROM Table T
Inner Join Table1 T1
On T.HIDDateTime = T1.HIDDateTime
WHERE (T.HIDDateTime BETWEEN @FromDate_LU AND @ToDate_LU)
This might work a lot better for you.
Butch
August 5, 2004 at 12:58 am
Let me share some of my thoughts on this.
First of all, I have to admit that I am a bit clueless on this issue. One reason is the lack of details.
However, one possible reason could be this: when the proc is created, it is not saved with an execution plan. The execution plan is saved on the very first execution (when you restart the server, the execution plan is lost!). If you first tried your proc with a big range (@FromDate and @ToDate covering almost the entire table), the proc will decide to do a scan. But if you re-run the query as an ad-hoc query (or a separate sp) with literal values, SQL Server will have another plan for it.
To avoid this, please recompile the procedure - a sketch of how is below. What Butch says is also an important thing to consider.
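For example (a sketch only; sp_recompile is the standard system procedure for marking an object so its plan is rebuilt, and the proc name is taken from Peter's post):
-- Mark dbo.stpTest so a fresh plan is compiled on its next execution
EXEC sp_recompile N'dbo.stpTest'
Alternatively, create the procedure WITH RECOMPILE so it never caches a plan at all.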
Let me remind you again that I may be wrong.
Cheers,
Prithiviraj Kulasingham
http://preethiviraj.blogspot.com/
August 5, 2004 at 1:38 am
This is a favourite topic of mine.
The compiler/optimiser isn't smart enough to trace your assignments. It doesn't know that @FromDate_LU is derived from @FromDate when it optimises the query. So, with no idea how big a range is being specified in the WHERE clause, it falls back on a default estimate and chooses a scan.
If you passed the exact same value as a parameter and used it directly in the query, then it would be considered in optimising the query. Then G.R.Prithiviraj's comments come into play: if you always pass a small range it should be OK, or else you might consider a WITH RECOMPILE clause on the procedure (a little overhead from recompiling on every call).
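A sketch of what that looks like (this assumes the parameters can simply be declared as datetime; the SELECT body is elided as in Peter's post):
CREATE PROCEDURE dbo.stpTest
@FromDate As DateTime,
@ToDate As DateTime
AS
-- The optimiser can now 'sniff' the parameter values at compile time
-- and estimate how many rows fall in the range, so it can pick a seek.
SELECT Columns....
FROM Tables...
WHERE (T.HIDDateTime BETWEEN @FromDate AND @ToDate)
AND (T1.HIDRegradeDateTime BETWEEN @FromDate AND @ToDate)
Or, if the parameters must stay sql_variant, add WITH RECOMPILE after the parameter list to trade a compile on every call for a plan that fits the current range.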
August 5, 2004 at 4:09 am
Mamma mia - brilliant advice.
I've changed my @FromDate_LU references to Convert(DateTime,@FromDate,120) and have supercharged the app. A 57-second query now runs in 5 seconds. A 4-minute report now runs in 12 seconds... unbloodybelievable.
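For anyone following along, the WHERE clause now reads roughly like this (the conversion is applied to the parameter itself, not to a local variable):
WHERE (T.HIDDateTime BETWEEN Convert(DateTime,@FromDate,120) AND Convert(DateTime,@ToDate,120))
AND (T1.HIDRegradeDateTime BETWEEN Convert(DateTime,@FromDate,120) AND Convert(DateTime,@ToDate,120))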
I now have to change all my procs to do this and I will be a fair dinkum hero - but, rest assured, I will let everyone know I'm standing on the shoulders of others.
Thank you, thank you. Peter