February 29, 2008 at 4:49 am
Conditional where clauses, IMO, are easily implemented using optional parameters in stored procs, with an OR in the where clause for each field you're checking against an optional parameter.
i.e.
CREATE PROCEDURE MySP (@MyParam int = NULL)
AS
SELECT tableID, stuff, things
FROM aTable
WHERE (@MyParam IS NULL OR tableID = @MyParam)
-- AND /* any other conditions */
GO
If the parameter isn't passed in, the condition (@MyParam IS NULL OR tableID = @MyParam) evaluates to true for every row, which effectively blanks that line out of the where clause; if you do pass a value in, it filters the select statement.
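For illustration, here is how both cases would play out against the hypothetical proc above (the value 42 is arbitrary):
EXECUTE MySP        -- @MyParam defaults to NULL, the condition is true for every row, no filtering
EXECUTE MySP 42     -- returns only rows where tableID = 42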
February 29, 2008 at 9:14 am
DECLARE @sql varchar(500), @Param1 int, @Param2 varchar(50)
SET @Param1 = 100
--SET @Param2 = 'sys'
SET @sql = '
SELECT *
FROM [sysobjects]
WHERE 1 = 1'
-- append a filter only for each parameter that was actually supplied
IF @Param1 IS NOT NULL SET @sql = @sql + ' AND id = ' + CONVERT(varchar(50), @Param1)
IF ISNULL(@Param2, '') <> '' SET @sql = @sql + ' AND [name] LIKE ''' + @Param2 + '%'''
PRINT @sql
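If you build the string this way, a parameterized variant via sp_executesql keeps the values out of the SQL text entirely. A sketch along the same lines (@nsql, @id and @name are names I've introduced; note that sp_executesql requires nvarchar):
DECLARE @nsql nvarchar(500)
SET @nsql = N'SELECT * FROM [sysobjects] WHERE 1 = 1'
IF @Param1 IS NOT NULL SET @nsql = @nsql + N' AND id = @id'
IF ISNULL(@Param2, '') <> '' SET @nsql = @nsql + N' AND [name] LIKE @name + ''%'''
-- the values are passed separately from the statement, so they are never spliced into it
EXEC sp_executesql @nsql, N'@id int, @name varchar(50)', @id = @Param1, @name = @Param2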
February 29, 2008 at 3:53 pm
Markus,
The issue is not whether the various versions work, as they all do, but rather what the pros and cons of each version are.
At a basic level it comes down to:
- compiled sql with permissions granted only to the stored procedure, but potentially a poor execution plan;
- versus dynamic sql with a good execution plan, but having to grant permissions on the underlying object.
As per Erland's and Jacob's articles, sp_executesql is currently the preferred method, as it reduces the potential for sql injection and generates a good execution plan which can be reused. However, it requires granting the user permissions, which can be partly mitigated by using a view rather than the table itself.
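A minimal sketch of that view-based mitigation (the view and role names here are hypothetical):
CREATE VIEW dbo.vProductSearch AS
SELECT ProductID, Name, ProductNumber FROM Production.Product
GO
-- grant SELECT on the view only; the base table itself stays locked down
GRANT SELECT ON dbo.vProductSearch TO SearchUsers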
Rick
March 5, 2008 at 1:31 am
Hello Jacob,
I think that the sp_executesql procedure has a stmt parameter of type nvarchar(MAX) (Yukon / SQL Server 2005), so there is "no limit" on the length of the statement.
I prefer using sp_executesql. Sometimes I have performance problems on data warehouse systems, because the cached execution plan is prepared for the first set of parameters sent to the procedure and is not optimized for all parameter combinations.
Using a non-dynamic query is not the best approach for big systems either. When two big tables are joined, there can be performance problems with the condition "@param is null or column=@param".
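A minimal illustration of the join pattern Petr describes (the tables, columns and parameters are hypothetical). OPTION (RECOMPILE), which is my addition rather than something from Petr's post, is one commonly used way to make the optimizer build a plan for the actual values on each call, at the cost of compiling every time:
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders o
JOIN dbo.OrderLines l ON l.OrderID = o.OrderID
WHERE (@CustomerID IS NULL OR o.CustomerID = @CustomerID)
AND (@ProductID IS NULL OR l.ProductID = @ProductID)
OPTION (RECOMPILE) -- plan is built per execution instead of reused from cache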
have a nice day
Petr
March 23, 2008 at 11:37 pm
It is a good article. Thanks Jacob.......:)
August 12, 2008 at 2:01 pm
I agree that it is a good article. However, if you *have* to use EXEC for whatever reason, you can prevent SQL Injection by just replacing all of your single quotes with two single quotes as follows...
/*
EXECUTE ProductSearch1 NULL
EXECUTE ProductSearch1 'AR'
*/
CREATE PROCEDURE ProductSearch1
(
@ProductNumber VARCHAR(20)
)
AS
SET NOCOUNT ON
DECLARE @sql VARCHAR(MAX)
SET @sql = ' SELECT * FROM Production.Product '
IF @ProductNumber IS NOT NULL BEGIN
SET @sql = @sql + ' WHERE ProductNumber LIKE '''
+ REPLACE( @ProductNumber, '''', '''''' ) -- double up embedded quotes so the value stays inside the literal
+ '%'''
END
-- use the print statement for debugging
-- PRINT @sql
EXEC(@sql)
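To see why the REPLACE matters, consider a hypothetical malicious call; the doubled quote keeps the injected text inside the LIKE pattern instead of terminating the string:
EXECUTE ProductSearch1 'AR''; DROP TABLE Production.Product; --'
-- builds: SELECT * FROM Production.Product WHERE ProductNumber LIKE 'AR''; DROP TABLE Production.Product; --%'
-- which merely searches for product numbers starting with that odd text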
August 12, 2008 at 8:15 pm
I missed this article when it was first published... Nicely done Jacob! Not only was it a good article, but the followup posts are great, as well. You really got people interested in a positive way. Thanks for taking the time...
--Jeff Moden
Change is inevitable... Change for the better is not.
August 12, 2008 at 8:48 pm
Thank you Jeff!
August 29, 2008 at 1:28 am
Hi Jacob
It's good to see this topic addressed, as it is a common problem which always gets a healthy debate between developers.
However, I really think that your article should include some discussion of performance analysis. You seem to suggest that using the non-dynamic IsNull / Like method is better. It might be when it comes to readability, but in terms of performance it's not very good at all.
I had this problem recently with a customer search stored proc, which accepts name, address, telephone, postcode, mobile etc - or any combination of the above. I rejected the dynamic SQL approach because of the poor execution plan and used the Like / IsNull approach.
After a delivery to live which brought down the entire call centre through my poorly performing stored proc (oops), I decided to do some proper analysis of both methods. Neither turned out particularly good.
I did some more requirements gathering. It turns out that 90% of searches are done on post code and telephone number. I wrote these as straight queries. I then reverted to the dynamic SQL approach (which beat the Like/IsNull approach in terms of performance) to catch all other possibilities. So the code followed this logic:
IF (@PostCode IS NOT NULL /* and all other parameters are null */)
    SELECT * FROM CustomerTable WHERE PostCode = @PostCode
ELSE IF (@TelNo IS NOT NULL /* and all other parameters are null */)
    SELECT * FROM CustomerTable WHERE TelNo = @TelNo
ELSE
    -- all other combinations: build and execute the dynamic SQL
Result: Call Centre back up and running - everyone happy.
I learned a lesson that day, which was something along the lines of:
It doesn't have to look pretty to work well.
August 29, 2008 at 1:59 am
Hi Nick. Thank you for the feedback. This is what I expected when I said 'I am expecting sql experts to jump in and share their views'.
It looks like each method has its own place. Depending upon various factors, one approach may prove better than the other in a given situation. I have some queries which I modified to use dynamic SQL with sp_executesql. But again, it can vary from case to case.
There were a lot of points discussed in the comments of this article. I see many of them are very helpful. I would like to thank everyone for expressing their views on this, and I would like to see more comments.
regards
Jacob
August 29, 2008 at 2:02 am
nick.wright (8/29/2008)
Hi Jacob ... You seem to suggest that using the non-dynamic IsNull / Like method is better.
Hi Nick,
In fact, I did not suggest that the non-dynamic method is better. See my note in the summary: "What I could make from the above two articles is that sp_executesql is the winner in most of the cases."
I would like to state once again that performance can vary from case to case. So it may be a good idea to look at the different options and test the performance factors based on the given table schema, data and relationships.
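One simple way to run such a test (a sketch using the standard SET options; run each variant with the same arguments and compare the logical reads and CPU/elapsed times reported in the Messages tab):
SET STATISTICS IO ON
SET STATISTICS TIME ON
EXECUTE ProductSearch1 'AR' -- repeat for each version under test
SET STATISTICS IO OFF
SET STATISTICS TIME OFF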
August 29, 2008 at 4:49 am
Jacob,
Great article. Glad Steve put this one in the latest newsletter as I would have missed it completely. Nice, thorough job! Thanks for taking the time to put together all the examples! Maybe a follow-up to talk about the performance impact of the different options? 😀
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
August 29, 2008 at 5:12 am
Thank you David. I will try to do that.
August 29, 2008 at 5:32 am
Nice article!
It showed some alternatives to techniques I've been using for years...
August 29, 2008 at 6:45 am
As a fairly new SQL developer, I have often struggled with creating dynamic queries. This article is useful because it collects several techniques in one place. Some of these I knew about and others I did not. The use of ISNULL was a "D'oh!" moment for me...
However, I was required to create a 3-field search with partial matching. Here is what I developed using the example from the article:
CREATE PROCEDURE ProductSearch2
(
@ProductNumber VARCHAR(20),
@Color VARCHAR(10),
@ReOrderPoint INT
)
AS
SET NOCOUNT ON
SELECT *
FROM Production.Product
-- LIKE (rather than =) is needed for the '%...%' patterns to do partial matching
WHERE ProductNumber LIKE COALESCE('%' + @ProductNumber + '%', ProductNumber)
AND Color LIKE COALESCE('%' + @Color + '%', Color)
-- ReorderPoint is an INT, so it must be converted before building a LIKE pattern
AND ReorderPoint LIKE COALESCE('%' + CONVERT(VARCHAR(10), @ReOrderPoint) + '%', CONVERT(VARCHAR(10), ReorderPoint))
I could probably take advantage of the ISNULL function but that would create a lot of rework. Is there a reason I should not use COALESCE?