Difference in Query Plans When Executing the Same Query Independently and Within a Stored Proc

  • Hi All,

    While working on some queries, I came across a scenario where the same query behaves differently when run on its own versus from within a stored procedure. Run independently, it uses a nonclustered index seek.

    Within the stored procedure it uses an index scan, which is hurting performance.

    Kindly help!

  • Without seeing the queries and the actual execution plans it's just guessing...

    So, my guess would be parameter sniffing.

    For a more detailed answer, please provide both the stand-alone query and the stored procedure, together with the actual execution plans for both.
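
    If you are not sure how to capture the actual plans, one simple option is to switch on the XML statistics output in the session you run the query or the procedure from; the procedure name and parameter below are only placeholders:

    -- Return the actual execution plan as XML for whatever runs next in this session
    SET STATISTICS XML ON;

    EXEC dbo.YourProcedure @SomeParam = 1;   -- placeholder procedure name and value

    SET STATISTICS XML OFF;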



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

    With the limited information you've given us to go on here, maybe a parameter sniffing issue?

    Take a look at this three-part blog post:

    Part 1

    Part 2

    Part 3


    --EDIT--


    Ah, I'm too slow, Lutz beat me to it 🙂


    Forever trying to learn
    My blog - http://www.cadavre.co.uk/
    For better, quicker answers on T-SQL questions, click on the following...http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better, quicker answers on SQL Server performance related questions, click on the following...http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

    Because of some security reasons I am not posting the exact column names, but below is the query that I am using.

    On Table1, I have the following indexes:

    Nonclustered index on Table1 (BatchID, Formtype) INCLUDE (Col1, Col2)
    Clustered index on Table1 (Col4)

    On Table2, I have the following indexes:

    Nonclustered index on Table2 (Col4) INCLUDE (Col5)
    Clustered index on Table2 (Col6) -- Col6 is an identity column on Table2

    Below is the query where I pass @Batchid (int) and @ClaimType (nvarchar(2))

    #Scrubs is a temporary table

    INSERT INTO #scrubs (Col1, Col2, Col3)
    SELECT DISTINCT clmhdr.Col1 AS Col1, clmhdr.Col2 AS Col2, clm.Col3 AS Col3
    FROM Table1 clmhdr (NOLOCK)
    JOIN Table2 clm (NOLOCK) ON clm.Col4 = clmhdr.Col4
    WHERE clmhdr.BatchID = @BatchID
      AND clmhdr.Formtype = @ClaimType
      AND LEN(LTRIM(RTRIM(clm.Col5))) <> 2

    Running this as a stand-alone query, I get a nonclustered index seek on both Table1 and Table2.

    When the same statement is enclosed in a stored procedure to which I pass @BatchID and @ClaimType, I see Table1 using a nonclustered index seek but Table2 using a nonclustered index scan, with high CPU and I/O cost.

    I am not sure whether this is due to parameter sniffing, since the @BatchID and @ClaimType values are the same in both cases, i.e. the stand-alone query and the SP. They also do not change during execution (for testing purposes I am repeatedly calling the SP with the same values, with recompile).

    The data types in the table and the variables are also the same.
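
    If it helps, the cached plan for the procedure can be pulled from the plan cache like this to compare against the ad hoc plan (the procedure name is just a placeholder):

    -- Retrieve the cached plan and aggregate cost figures for the stored procedure
    SELECT ps.execution_count,
           ps.total_worker_time,
           ps.total_logical_reads,
           qp.query_plan
    FROM sys.dm_exec_procedure_stats AS ps
    CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
    WHERE ps.object_id = OBJECT_ID(N'dbo.YourScrubProc');   -- placeholder procedure name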

    It's parameter sniffing, or more correctly the lack thereof (see Part 2 linked above).

    Can't give you any advice since I have no idea which of those indexes are on what columns.

    Why nolock? Is the query unimportant enough that incorrect results are acceptable?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks All,

    It was parameter sniffing, and I overcame it by creating local variables within the stored procedure, assigning the values of @BatchID and @ClaimType to @BatchID_local and @ClaimType_local respectively, and using @BatchID_local and @ClaimType_local in the query (sketched below).

    But kindly explain to me how using local variables helps in overcoming parameter sniffing.
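
    In the sketch below, the procedure name is a placeholder, #scrubs is assumed to already exist, and the NOLOCK hints from the original query are left out for brevity:

    CREATE PROCEDURE dbo.YourScrubProc    -- placeholder name
        @BatchID   INT,
        @ClaimType NVARCHAR(2)
    AS
    BEGIN
        -- Copy the parameters into local variables so the optimizer has no
        -- parameter values to sniff when it compiles the statement below
        DECLARE @BatchID_local   INT         = @BatchID,
                @ClaimType_local NVARCHAR(2) = @ClaimType;

        INSERT INTO #scrubs (Col1, Col2, Col3)
        SELECT DISTINCT clmhdr.Col1, clmhdr.Col2, clm.Col3
        FROM Table1 clmhdr
        JOIN Table2 clm ON clm.Col4 = clmhdr.Col4
        WHERE clmhdr.BatchID = @BatchID_local
          AND clmhdr.Formtype = @ClaimType_local
          AND LEN(LTRIM(RTRIM(clm.Col5))) <> 2;
    END;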

  • er.mayankshukla (8/6/2014)


    But kindly explain to me how using local variables helps in overcoming parameter sniffing.

    See Part 1 linked above.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    I have gone through various articles but am still not very clear.

    I would like to summarize my question below:

    When we create a parameterized SP and run it for the first time with some values, it is compiled at that point and the query execution plan is cached.

    I was using the same values when running it as a stand-alone SQL statement.

    The Part 1 link says that with a local variable the compiler doesn't know the value and creates an average plan.

    What does this mean?

    At execution time, the local variable also holds a value.

    I would like an exact answer to this.

  • er.mayankshukla (8/6/2014)


    The Part 1 link says that with a local variable the compiler doesn't know the value and creates an average plan.

    What does this mean?

    It means exactly that. At compile time the variable doesn't have a value (it only gets one at execution time), so the optimiser doesn't know the value of the variable and therefore can't use any value to generate the plan.
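
    Instead of a specific value, the optimiser falls back to the average density stored in the statistics on the index. You can see that density yourself with DBCC SHOW_STATISTICS; the statistics name below is made up, substitute the name of your nonclustered index on Table2:

    -- Show the density vector the optimiser uses when no value is available to sniff
    -- ('IX_Table2_Col4' is a placeholder for the actual index/statistics name)
    DBCC SHOW_STATISTICS ('dbo.Table2', 'IX_Table2_Col4') WITH DENSITY_VECTOR;

    -- The estimated row count for an equality predicate on an unknown value
    -- is roughly: 'All density' of the column x total rows in the table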

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gail,

    So, if I am correct, for a parameterized proc SQL creates a plan based on table size, as the parameter values are not known until compile time.

    Since in my case Table2 was a big table with 3 million rows, SQL considered a scan and performed an index scan.

    But how did the difference arise when I used local variables? They are also not known until compile time, yet SQL still used a better optimized plan.

  • er.mayankshukla (8/6/2014)


    Thanks Gail,

    So, if I am correct, for a parameterized proc SQL creates a plan based on table size, as the parameter values are not known until compile time.

    Since in my case Table2 was a big table with 3 million rows, SQL considered a scan and performed an index scan.

    But how did the difference arise when I used local variables? They are also not known until compile time, yet SQL still used a better optimized plan.

    Go read the articles again.

    Parameters are known at compile time, so the plan is based on the estimated row count for those values.

    Variables are not known at compile time, so the plan is more generic.
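
    A quick way to see the difference for yourself: run the same statement against a variable twice, once as-is and once with OPTION (RECOMPILE). With the hint the statement is recompiled at execution time, when the variable does have a value, so the estimate switches from the generic density-based guess to one based on the actual value. The sample value below is purely illustrative:

    DECLARE @BatchID_local INT = 12345;   -- illustrative value only

    -- Compiled before the variable has a value: generic, density-based estimate
    SELECT COUNT(*) FROM Table1 WHERE BatchID = @BatchID_local;

    -- Recompiled at execution time, when the value is known: value-specific estimate
    SELECT COUNT(*) FROM Table1 WHERE BatchID = @BatchID_local
    OPTION (RECOMPILE);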

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • For best performance overall, I strongly suspect you need to change the clustered indexes on the tables. We would have to look at least at missing index and index usage stats to be sure, but it's a better than 50% chance the clustered indexes are not the best.

    Table1 / clmhdr:

    clustered index: ( BatchID, Formtype ) --might be unique, might not

    unique nonclustered index: ( Col4 )

    Table2 / clm:

    clustered index: ( Col4 )

    unique nonclustered index: ( Col6 )
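
    In DDL terms that would look something like the following; the index names are made up, and the existing clustered indexes would of course have to be dropped or rebuilt first:

    -- Table1 / clmhdr: cluster on the main filter columns
    CREATE CLUSTERED INDEX CL_Table1 ON dbo.Table1 (BatchID, Formtype);   -- add UNIQUE if the data allows it
    CREATE UNIQUE NONCLUSTERED INDEX IX_Table1_Col4 ON dbo.Table1 (Col4);

    -- Table2 / clm: cluster on the join column
    CREATE CLUSTERED INDEX CL_Table2 ON dbo.Table2 (Col4);
    CREATE UNIQUE NONCLUSTERED INDEX IX_Table2_Col6 ON dbo.Table2 (Col6);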

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
