Dynamic Parameter in SP

  • Hi,

I've got two different T-SQL approaches for implementing dynamic parameters in a stored procedure. Based on the input parameters, the T-SQL applies a filter to the SELECT statement only if the parameter value is not NULL; otherwise there's nothing to filter.

    Of these two, which one is better in terms of the procedure cache and performance?

    SP1:

Create Procedure dbo.Foo
        (@P1 int = null, @P2 char(5) = null, ...)
    As
    SELECT * FROM MyTable
    WHERE (Field1 = @P1 OR @P1 IS NULL)
      AND (Field2 = @P2 OR @P2 IS NULL) ...

    ========================================

    SP2:

Create Procedure dbo.Foo
        (@P1 int = null, @P2 char(5) = null, ...)
    As
    SELECT * FROM MyTable
    WHERE Field1 = Coalesce(@P1, Field1)
      AND Field2 = Coalesce(@P2, Field2) ...

    Thanks in advance

    Hendry

Hi, I ran both statements in QA and the execution plan was the same, but the statement with COALESCE was a little faster. I don't know why; it probably depends on the tables used, the server, etc.

I always test any solutions I come up with and compare. The COALESCE version may have been a little faster because a reusable plan was already available, so it wasn't really a fair fight. You need to test on a non-production server where you can use DBCC FREEPROCCACHE or DBCC FLUSHPROCINDB to clear the execution plans from memory. Check out the details in this article: http://www.sqlservercentral.com/columnists/RDyess/cache.asp . Keep in mind that even when you find a solution, you may look back later and find something else to optimize.
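A minimal sketch of the kind of fair comparison described above, assuming a non-production server; the two procedure names are hypothetical (in the original post both versions were called dbo.Foo, so they'd need distinct names to compare side by side):

```sql
-- Non-production only: flush all cached plans so neither candidate
-- benefits from an already-compiled plan, then time each call.
DBCC FREEPROCCACHE
SET STATISTICS TIME ON   -- report parse/compile and execution times
EXEC dbo.Foo_Or       @P1 = 1, @P2 = 'abcde'   -- hypothetical name: the OR / IS NULL version
DBCC FREEPROCCACHE
EXEC dbo.Foo_Coalesce @P1 = 1, @P2 = 'abcde'   -- hypothetical name: the COALESCE version
SET STATISTICS TIME OFF
```

Clearing the cache before each candidate means both pay the compile cost, so the timings compare the plans themselves rather than one cold and one warm call.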

Klaas-Jan, yeah, I found the COALESCE version a bit faster too using QA

    ...but I'll do what Antares686 suggests. I need to test it. This is really the guru way that I should keep in mind (test and see).

    Thanks all for your helpful feedback

    Hendry

They are both bad because neither can use indexes. You should use:

    Create Procedure dbo.Foo
        (@P1 int = null, @P2 char(5) = null, ...)
    As
    IF @P1 IS NULL AND @P2 IS NULL
        SELECT * FROM MyTable
    ELSE IF @P1 IS NULL
        SELECT * FROM MyTable
        WHERE Field2 = @P2
    ELSE IF @P2 IS NULL
        SELECT * FROM MyTable
        WHERE Field1 = @P1
    ELSE
        SELECT * FROM MyTable
        WHERE Field1 = @P1
          AND Field2 = @P2

Simon Sabin
    SQL Server MVP
    Co-author of SQL Server 2000 XML Distilled
    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
    http://sqlblogcasts.com/blogs/simons

simonsabin, thanks a lot for the alert.

    When I reran the T-SQL in QA, I noticed that your version really was making use of the indexes! Hmm, I'm confused about why this didn't happen for my two previous queries.

    However, I doubt that simonsabin's SP will always get cached. I once read an article somewhere (I forget where) that said to avoid dynamic execution paths (IF) and dynamic T-SQL (EXEC) in an SP, otherwise the SP will never get cached and will be recompiled each time it's executed.

    So based on that theory, simonsabin's version of the SP will make full use of indexes but never get cached, which can hurt the SP's overall performance?

    Is this true? Please clarify.

    Hendry

This is not dynamic SQL, so you do get caching. There is an issue where you have an SP and you pass it very different values for the same parameter: the cached plan built for the first value might not work well with the second, very different value.

    However, in your case the query plan is specific to each query, so you get the best of all worlds.

    The reason your query does not use indexes is that using OR means the optimiser does not know which index to use.

Simon Sabin
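For the parameter-sniffing issue mentioned above (a cached plan built for one parameter value performing badly for a very different one), SQL Server 2000 also offers the WITH RECOMPILE option on CREATE PROCEDURE. A minimal sketch, reusing the dbo.Foo shape from earlier in the thread; Foo_Recompile is a hypothetical name:

```sql
-- Sketch: trade plan reuse for a fresh plan on every call.
-- WITH RECOMPILE tells SQL Server not to cache a plan for this procedure,
-- so each execution is optimised for the actual parameter values passed in.
CREATE PROCEDURE dbo.Foo_Recompile
    (@P1 int = NULL, @P2 char(5) = NULL)
WITH RECOMPILE
AS
SELECT * FROM MyTable
WHERE (Field1 = @P1 OR @P1 IS NULL)
  AND (Field2 = @P2 OR @P2 IS NULL)
```

The trade-off is a compile on every execution, so this only pays off when the cost of a bad cached plan outweighs the recompile cost.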

Another way to get better saved execution plans, along the lines of what Simon states, is to create multiple procedures, one for each possible SELECT statement.

    Then create a main procedure with logic to call the others, using the same IF-statement structure.

    Ex.

Create Procedure dbo.ip_Foo;1
        (@P1 int = null, @P2 char(5) = null, ...)
    As
    IF @P1 IS NULL AND @P2 IS NULL
        EXEC dbo.ip_Foo;2
    ELSE IF @P1 IS NULL
        EXEC dbo.ip_Foo;3 @P2
    ELSE IF @P2 IS NULL
        EXEC dbo.ip_Foo;4 @P1
    ELSE
        EXEC dbo.ip_Foo;5 @P1, @P2
    GO

    CREATE PROCEDURE dbo.ip_Foo;2
    As
    SELECT * FROM MyTable
    GO

    CREATE PROCEDURE dbo.ip_Foo;3
        @P2 char(5)
    As
    SELECT * FROM MyTable
    WHERE Field2 = @P2
    GO

    CREATE PROCEDURE dbo.ip_Foo;4
        @P1 int
    As
    SELECT * FROM MyTable
    WHERE Field1 = @P1
    GO

    CREATE PROCEDURE dbo.ip_Foo;5
        @P1 int, @P2 char(5)
    As
    SELECT * FROM MyTable
    WHERE Field1 = @P1
      AND Field2 = @P2
    GO

The ;1 suffix and so on is how you can group procedures so that they show as one object in EM, and when opened, all appear in one window. You can also DROP them all by using the name without the ;n number. Just be careful when making changes, as it is easy to overwrite the wrong one, but I prefer this method to others.

    Also, in this scenario each procedure stores its own execution plan, making it more optimized, although it's a bit more code.

  • simonsabin and Antares686, Thanks very much. Both of you enlightened me!

    Hendry

You need to be aware that having multiple statements that are executed differently based on conditional logic generates the cached execution plan based on the first call to the proc. If the queries are dramatically different depending on the parameters, you will get crummy performance from many of your calls. As suggested, having one "master" proc that determines which of its "slaves" to call will give you consistent performance for all possible input values.

    For my money, we tend to use the COALESCE() approach because it's less maintenance. You need to determine whether speed is more important than maintainability, or vice versa, and use that approach for all procs in your app.

What is dynamic in this SP?

    I thought dynamic means the number of arguments to the SP will change.

I can't reiterate this enough: using COALESCE in the manner shown means indexes can't be used. If you are doing this on large tables, and this is the primary filter, then not only will it take longer but it will also impact scalability.

Simon Sabin

Dynamic means it can change at runtime. In this case the variables are dynamic in that the compiled code expects these two values to change and needs that input from the user. Static is when nothing changes.

    As for COALESCE, I am pretty sure it can use an index. The execution plan shows it does when I run a similar type of query to the example.

I have some experience with this too, and I found that when you are using several parameters it is better to build the string dynamically and use sp_executesql.

    Some advantages:

    1. Very easy to maintain.

    2. ALWAYS a good query plan.

    3. Add as many optional parameters as you want; the resulting query will always be an exact match of what YOU want.

    4. Beats COALESCE by FAR (10 to 30 times faster).

    Here is the example......

CREATE PROCEDURE dbo.OptionalParameters
    (
        @Field1 int = null,         /* optional parameter */
        @Field2 varchar(64) = null, /* optional parameter */
        @Field3 varchar(16) = null  /* optional parameter */
    )
    AS
    BEGIN
        Declare @ColumnStr varchar(600)
        Declare @FromStr varchar(100)
        Declare @JoinStr varchar(500)
        Declare @WhereStr varchar(2000)
        Declare @SQLStatement nvarchar(3000)

        -- Define the columns to display
        Set @ColumnStr = ' m.Field1, m.Field2, m.Field3, J.FieldA, J.FieldB '

        -- Define the from table
        Set @FromStr = ' Table1 m '

        -- Define the joined tables
        Set @JoinStr = ' Left join Table1 J on m.ID = J.ID '

        -- Empty the where clause
        Set @WhereStr = ''

        if @Field1 is not NULL -- is this parameter filled?
        begin
            if rtrim(@WhereStr) <> '' set @WhereStr = @WhereStr + ' and ' -- do we have to add ' AND '?
            set @WhereStr = @WhereStr + ' m.Field1 = ' + str(@Field1)
        end

        if @Field2 is not NULL -- is this parameter filled?
        begin
            if rtrim(@WhereStr) <> '' set @WhereStr = @WhereStr + ' and '
            set @WhereStr = @WhereStr + ' m.Field2 = ' + char(39) + @Field2 + char(39)
        end

        if @Field3 is not NULL -- is this parameter filled?
        begin
            if rtrim(@WhereStr) <> '' set @WhereStr = @WhereStr + ' and '
            set @WhereStr = @WhereStr + ' m.Field3 = ' + char(39) + @Field3 + char(39)
        end

        if rtrim(@WhereStr) <> '' set @WhereStr = ' Where ' + @WhereStr

        set @SQLStatement = 'Select ' + @ColumnStr + ' From ' + @FromStr + ' ' + @JoinStr + @WhereStr

        EXEC sp_executesql @SQLStatement

    END
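One caveat with the string-building approach above: concatenating values directly into the string (via str() and char(39)) produces a different statement text for every value, which hurts plan reuse and opens the door to SQL injection. A hedged sketch of the same idea using sp_executesql's parameter list instead; names follow the example above, but the procedure and table are illustrative:

```sql
-- Sketch: build only the WHERE *shape* dynamically; pass values as parameters.
-- Each distinct shape gets its own cached, reusable plan via sp_executesql.
CREATE PROCEDURE dbo.OptionalParametersSafe
    (@Field1 int = NULL, @Field2 varchar(64) = NULL)
AS
BEGIN
    DECLARE @Where nvarchar(500), @Sql nvarchar(2000)
    SET @Where = ''
    -- The query shape still varies per branch...
    IF @Field1 IS NOT NULL SET @Where = @Where + ' AND m.Field1 = @p1'
    IF @Field2 IS NOT NULL SET @Where = @Where + ' AND m.Field2 = @p2'
    -- ...but the values travel as typed parameters, so the plan for each
    -- shape is reused and quoting/injection problems disappear.
    SET @Sql = 'SELECT m.Field1, m.Field2 FROM Table1 m WHERE 1 = 1' + @Where
    EXEC sp_executesql @Sql,
        N'@p1 int, @p2 varchar(64)',
        @p1 = @Field1, @p2 = @Field2
END
```

Parameters declared in the sp_executesql parameter list but not referenced by a given shape are simply ignored, so all branches can share one EXEC call.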

I am still trying to understand the claim that using COALESCE will not utilize an index.

    I tested this with the following script:

SET SHOWPLAN_TEXT ON
    GO
    DECLARE @dt datetime
    SET @dt = '12/05/2002'
    select count(*) from tbl_eventdata where timeof > coalesce(@dt,'12/06/2002')
    set @dt = NULL
    select count(*) from tbl_eventdata where timeof > coalesce(@dt,'12/06/2002')
    select count(*) from tbl_eventdata where timeof > '12/06/2002'
    GO
    SET SHOWPLAN_TEXT OFF
    GO

And these were the outputs I got:

    StmtText
    -------------------------------------------------------------------------------------
    DECLARE @dt datetime
    SET @dt = '12/05/2002'
    select count(*) from tbl_eventdata where timeof > coalesce(@dt,'12/06/2002')

    |--Stream Aggregate(DEFINE:([Expr1002]=Count(*)))
         |--Clustered Index Seek(OBJECT:([Score].[dbo].[tbl_EventData].[IX_tbl_EventData]), SEEK:([tbl_EventData].[timeof] > If ([@dt]<>NULL) then [@dt] else Dec 6 2002 12:00AM) ORDERED)

    StmtText
    -------------------------------------------------------------------------------------
    set @dt = NULL
    select count(*) from tbl_eventdata where timeof > coalesce(@dt,'12/06/2002')

    |--Stream Aggregate(DEFINE:([Expr1002]=Count(*)))
         |--Clustered Index Seek(OBJECT:([Score].[dbo].[tbl_EventData].[IX_tbl_EventData]), SEEK:([tbl_EventData].[timeof] > If ([@dt]<>NULL) then [@dt] else Dec 6 2002 12:00AM) ORDERED)

    StmtText
    -------------------------------------------------------------------------------------
    select count(*) from tbl_eventdata where timeof > '12/06/2002'

    |--Stream Aggregate(DEFINE:([Expr1002]=Count(*)))
         |--Clustered Index Seek(OBJECT:([Score].[dbo].[tbl_EventData].[IX_tbl_EventData]), SEEK:([tbl_EventData].[timeof] > Dec 6 2002 12:00AM) ORDERED)

It looks like it is using the exact same index in each case. The only unusual thing I see in the graphical version of the execution plan is that with COALESCE I see parallel streams, and with the other I don't.

As for sp_executesql: as long as there is a good reusable plan, it will work great, but if you alter the query too much it will not reuse the plan, or may use a bad one. It is far better than EXEC though, as EXEC will never reuse a stored plan.

    If, however, you know you have a limited number of options and can program a master/slave-style SP, you will see the best performance, as each procedure can save and reuse its own stored execution plan, guaranteed. sp_executesql should be saved for cases with so many possible combinations that programming a procedure for each is too complicated to implement.
