November 20, 2015 at 7:03 am
Hello All
I have SQL Server 2012 SP2 CU7
A very simple query
Two tables: the first with 5 million rows, the second with 1 million rows.
The range in the WHERE clause returns 111 rows.
Version 1)
SELECT qd.qr_id as qrId,
mq.MainqrId
FROM dbo.MasterQrRelation mq
INNER JOIN QR_Detail qd
ON qd.qrnid = mq.MainqrId
where mq.qrNId >= 3220800160685210001
AND mq.qrNId <= 3220800160685250004
|--Nested Loops(Inner Join, OUTER REFERENCES:([mq].[MainqrId]))
|--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1010], [Expr1011], [Expr1012]))
| |--Merge Interval
| | |--Concatenation
| | |--Compute Scalar(DEFINE:(([Expr1005],[Expr1006],[Expr1004])=GetRangeWithMismatchedTypes((3220800160685210001.),NULL,(22))))
| | | |--Constant Scan
| | |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(NULL,(3220800160685250004.),(42))))
| | |--Constant Scan
| |--Index Seek(OBJECT:([testdb].[dbo].[MasterQrRelation].[MasterQrRelation:qrnid] AS [m]), SEEK:([m].[qrNId] > [Expr1010] AND [m].[FbNId] < [Expr1011]) ORDERED FORWARD)
|--Clustered Index Seek(OBJECT:([testdb].[dbo].[QR_Detail].[QR_Detail:PrimaryKey] AS [qd]), SEEK:([qd].[qrNId]=[testdb].[dbo].[MasterQrRelation].[MainqrId] as [mq].[MainqrId]) ORDERED FORWARD)
As you can see, it is doing two index seeks, with a cost of 50% on each table, and it performs very well.
Version 2)
declare
@val1 bigint,
@val2 bigint
set @val1 = 3220800160685210001
set @val2 = 3220800160685250004
SELECT qd.qr_id as qrId,
mq.MainqrId
FROM dbo.MasterQrRelation mq
INNER JOIN QR_Detail qd
ON qd.qrnid = mq.MainqrId
where mq.qrNId >= @val1
AND mq.qrNId <= @val2
As soon as I introduce the variables, it does an index scan (with the wrong index) on the second table, at 86% of the overall cost and with tons of reads.
If I force the use of the proper index, it scans that index and still performs poorly.
If I use the FORCESEEK option on the second table, it does two seeks, but 97% of the cost sits on the second table.
If I add OPTION (RECOMPILE), it creates the same plan as the first query (see the sketch below).
All indexes were recreated, so this is not happening because of stale statistics or fragmentation.
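For reference, a rough sketch of the two hinted variants described above (same tables and variables as in Version 2; the hint placement is only illustrative):
DECLARE @val1 bigint = 3220800160685210001,
        @val2 bigint = 3220800160685250004;
-- a) FORCESEEK table hint on the second table: two seeks, but most of the cost on QR_Detail
SELECT qd.qr_id AS qrId,
       mq.MainqrId
FROM dbo.MasterQrRelation mq
INNER JOIN QR_Detail qd WITH (FORCESEEK)
        ON qd.qrnid = mq.MainqrId
WHERE mq.qrNId >= @val1
  AND mq.qrNId <= @val2;
-- b) OPTION (RECOMPILE): the optimizer sees the variable values at compile time
--    and builds the same plan as the literal query in Version 1
SELECT qd.qr_id AS qrId,
       mq.MainqrId
FROM dbo.MasterQrRelation mq
INNER JOIN QR_Detail qd
        ON qd.qrnid = mq.MainqrId
WHERE mq.qrNId >= @val1
  AND mq.qrNId <= @val2
OPTION (RECOMPILE);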
Any ideas?
Thanks in advance
November 20, 2015 at 7:21 am
The first thing that comes to mind is parameter sniffing and local variables; I suggest you read up on the subject, as there are plenty of good articles around covering it.
😎
November 20, 2015 at 7:29 am
Thanks
I know I can deal with the parameter sniffing here, but the overall problem is that we have hundreds, if not thousands, of queries doing things like this, and I am not going to touch each one of them just to work around something that behaves like a bug. I am looking for a general solution that can be applied at the database or table level to cover all of them.
November 20, 2015 at 7:54 am
ricardo_chicas (11/20/2015)
Thanks. I know I can deal with the parameter sniffing here, but the overall problem is that we have hundreds, if not thousands, of queries doing things like this, and I am not going to touch each one of them just to work around something that behaves like a bug. I am looking for a general solution that can be applied at the database or table level to cover all of them.
There really isn't a server-wide "fix my parameter sniffing problem" setting that can be applied. You are going to have to fix your queries; one possible shape for that fix is sketched below.
Here is an excellent set of articles about parameter sniffing: http://sqlinthewild.co.za/index.php/2007/11/27/parameter-sniffing/
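A rough sketch of one option (the procedure name is made up): wrap the query in a stored procedure so the values arrive as true parameters that the optimizer can sniff against the statistics.
CREATE PROCEDURE dbo.GetQrRange   -- hypothetical name
    @val1 bigint,
    @val2 bigint
AS
BEGIN
    SET NOCOUNT ON;
    -- @val1/@val2 are parameters here, so the optimizer sniffs their values
    -- at compile time instead of falling back to a density-based guess
    SELECT qd.qr_id AS qrId,
           mq.MainqrId
    FROM dbo.MasterQrRelation mq
    INNER JOIN QR_Detail qd
            ON qd.qrnid = mq.MainqrId
    WHERE mq.qrNId >= @val1
      AND mq.qrNId <= @val2;
END;
GO
EXEC dbo.GetQrRange @val1 = 3220800160685210001, @val2 = 3220800160685250004;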
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
November 20, 2015 at 8:27 am
Yeah, it is just sad that we define a clustered index (therefore ordered) and filter with two variables of the same data type within the scope of the query, and SQL Server is still completely unable to find the proper plan. I do not consider it "my parameter sniffing problem"; this has been an issue for 10 years or more and MS is doing nothing about it... I just can't be the only one who feels this way about it.
November 20, 2015 at 9:07 am
ricardo_chicas (11/20/2015)
Yeah, it is just sad that we define a clustered index (therefore ordered) and filter with two variables of the same data type within the scope of the query, and SQL Server is still completely unable to find the proper plan. I do not consider it "my parameter sniffing problem"; this has been an issue for 10 years or more and MS is doing nothing about it... I just can't be the only one who feels this way about it.
The optimizer cannot evaluate the local variables against any statistics and therefore has to use rough estimates; introducing the variables is not doing it any favours. Is there any particular reason for doing that? Could you change the queries into properly parameterized code? The code posted could easily be written as an inline table-valued function (see the sketch below).
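For example, a rough sketch only (the function name is made up and this is untested against your schema):
CREATE FUNCTION dbo.GetQrRangeITVF   -- hypothetical name
(
    @val1 bigint,
    @val2 bigint
)
RETURNS TABLE
AS
RETURN
(
    SELECT qd.qr_id AS qrId,
           mq.MainqrId
    FROM dbo.MasterQrRelation mq
    INNER JOIN QR_Detail qd
            ON qd.qrnid = mq.MainqrId
    WHERE mq.qrNId >= @val1
      AND mq.qrNId <= @val2
);
GO
-- The body of an inline TVF is expanded into the calling query, so the values
-- passed in are visible to the optimizer:
SELECT * FROM dbo.GetQrRangeITVF(3220800160685210001, 3220800160685250004);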
😎
November 20, 2015 at 12:31 pm
It's lack of parameter sniffing. The optimiser can't sniff the values of variables and hence assumes a default row count estimation.
Usually fixed by either OPTION (RECOMPILE) or OPTION (OPTIMIZE FOR ...); a rough sketch of the latter follows.
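A minimal sketch of OPTIMIZE FOR, reusing the variables and literal values from the earlier posts (the values are only representative):
DECLARE @val1 bigint = 3220800160685210001,
        @val2 bigint = 3220800160685250004;
SELECT qd.qr_id AS qrId,
       mq.MainqrId
FROM dbo.MasterQrRelation mq
INNER JOIN QR_Detail qd
        ON qd.qrnid = mq.MainqrId
WHERE mq.qrNId >= @val1
  AND mq.qrNId <= @val2
-- Compile the plan as if the variables held these representative values
OPTION (OPTIMIZE FOR (@val1 = 3220800160685210001, @val2 = 3220800160685250004));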
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability