January 23, 2008 at 5:52 am
Comments posted to this topic are about the item TSQL LAB 2 - Writing Conditional WHERE Clauses
January 23, 2008 at 8:37 am
Jacob - it seems the actual article got "et" (southern for "eaten") by SSC. There's nothing other than the title, then the footer links...
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
January 23, 2008 at 8:56 am
Hi Matt,
We are getting this fixed;
thanks
Jacob
January 24, 2008 at 9:33 am
Restored it!
February 26, 2008 at 10:33 pm
Jacob,
Firstly, thanks for a great article; I always like to learn new ways to skin a cat.
I thought it important to pass on this note I found in Books Online regarding the sp_executesql proc:
"If object names in the statement string are not fully qualified, the execution plan is not reused."
Especially if you want to use this option to leverage the execution plan for performance gains.
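To make that Books Online note concrete, here is a minimal sketch (the dbo.Product table and its columns are illustrative, not from the article):

```sql
-- Unqualified object name: the cached plan may not be reused,
-- because name resolution can differ per user/default schema
EXEC sp_executesql N'SELECT ProductID FROM Product WHERE ReorderPoint > @rp',
                   N'@rp int', @rp = 100;

-- Fully qualified name: eligible for plan reuse
EXEC sp_executesql N'SELECT ProductID FROM dbo.Product WHERE ReorderPoint > @rp',
                   N'@rp int', @rp = 100;
```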
Thanks
Daniel
February 26, 2008 at 11:19 pm
I agree that sp_executesql is better than EXEC() in many ways. Besides reusing the execution plan, it helps to avoid SQL injection. I have mentioned this in one of the sections of the article.
One advantage of EXEC is that it can take very long strings. It is not limited to 8000 characters: you can concatenate two or more 8000-character strings and execute them using EXEC.
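A quick sketch of that concatenation point (dbo.BigTable is a hypothetical table; imagine each piece near its length limit):

```sql
DECLARE @sql1 varchar(8000), @sql2 varchar(8000);
SET @sql1 = 'SELECT col1, col2 FROM dbo.BigTable WHERE ';
SET @sql2 = 'col1 = ''SomeValue''';
-- EXEC () concatenates its arguments, so the combined batch
-- can exceed the 8000-character limit of a single string
EXEC (@sql1 + @sql2);
```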
February 27, 2008 at 2:09 am
Jacob,
I have read and enjoyed the article. As you state, in terms of performance not all versions are the same. In particular it is worth noting that the compiled SQL option of Column = ISNULL(@column, column), and similar constructs, has the side effect of defeating the optimiser. As a result a table scan is the most likely execution plan. This makes sense: what you are asking the optimiser to do is look for specific records where the variable has a value, or all records where it does not.
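The pattern in question looks like this (table and column names are illustrative):

```sql
-- Returns all rows when @ReorderPoint is NULL, matching rows otherwise.
-- Because the column appears inside ISNULL(), the optimiser cannot
-- use an index seek and will most likely scan the table.
SELECT ProductID, Name
FROM   dbo.Product
WHERE  ReorderPoint = ISNULL(@ReorderPoint, ReorderPoint);
```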
Rick Halliday
February 27, 2008 at 2:10 am
The examples shown all use simple comparisons. I implemented a more complex requirement, which is not that uncommon, to match a date in a date range.
The situation is that a start date and an end date may be supplied. Both dates are optional. This requires four possible actions:
1. Neither date is supplied - omit from WHERE clause
2. Only the start date is supplied - Use >= comparison operator after ensuring that the time element has been removed
3. Both dates are supplied - Use BETWEEN operator after ensuring that the time element has been removed from both dates
4. Only the end date is supplied - invalid combination rejected in validation before calling the SP
I validate all the parameters in my code to protect against SQL injection before calling the SP, which has around 20 possible parameters (it's a search routine called from a web site). It also supports wild cards in some value strings. For user convenience * is used as the wild card in preference to the less widely understood %, so I also have to replace * with %, and the presence of * or % determines when to use LIKE rather than =.
Furthermore, the fields returned by the SELECT statement and the tables in the FROM section change depending on the value of one of the mandatory parameters.
Currently I generate dynamic SQL which I EXECUTE and have not come across any performance issues although it's not a heavily used web site.
I will experiment with sp_executesql but I don't expect to notice any real difference.
Is there any information on the performance of string concatenation in TSQL? If it's not good I could simply build the strings in my .NET code using a StringBuilder object and call sp_executesql directly from my code. This would move the bulk of the preparation work to the client (in this case the Web Server) rather than the database server.
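The four date cases described above could be assembled roughly like this (a hedged sketch; the table, columns and time-stripping idiom are placeholders for whatever the real routine uses):

```sql
DECLARE @sql nvarchar(2000), @where nvarchar(500);
SET @where = N'';

-- Strip the time element before comparing (a common pre-2008 idiom)
SET @StartDate = DATEADD(dd, DATEDIFF(dd, 0, @StartDate), 0);
SET @EndDate   = DATEADD(dd, DATEDIFF(dd, 0, @EndDate), 0);

IF @StartDate IS NOT NULL AND @EndDate IS NULL
    SET @where = N' AND OrderDate >= @StartDate';                   -- case 2
IF @StartDate IS NOT NULL AND @EndDate IS NOT NULL
    SET @where = N' AND OrderDate BETWEEN @StartDate AND @EndDate'; -- case 3
-- Case 1: both NULL, @where stays empty; case 4 is rejected before the call

SET @sql = N'SELECT OrderID FROM dbo.Orders WHERE 1 = 1' + @where;
EXEC sp_executesql @sql,
     N'@StartDate datetime, @EndDate datetime',
     @StartDate = @StartDate, @EndDate = @EndDate;
```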
February 27, 2008 at 2:20 am
Your example of using sp_executesql is probably the best way to go about doing complex dynamic SQL queries. You build up the overall query from appropriate blocks of SQL code with parameter placeholders and then safely execute the code via sp_executesql. This way query plans can be reused for common combinations of search parameters and, most of all (particularly for web sites, which are the common source of such dynamic search requirements from untrusted end users), you avoid SQL injection.
Great article. I used to point people to the articles you referenced at the end, and I still will, but for a brief account I'll point them at yours too 🙂 You should probably add a note to the article indicating, as someone else suggested, the poor performance that can result from things such as functions in WHERE clauses.
February 27, 2008 at 2:23 am
The problem that I think I've seen with the @ReorderPoint IS NULL OR ReorderPoint > @ReorderPoint pattern is that if you execute the sproc without the parameter, an execution plan gets cached, and if the next time you run it you supply the parameter, it then uses a terrible execution plan. I know you could get around this with a recompile query hint, but the overhead that introduces is quite high. Anyone had any experience getting around this?
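For what it's worth, since SQL Server 2005 the hint can be scoped to the single statement rather than the whole procedure, which limits the recompile cost to the one query that suffers (a sketch; table and column names are illustrative):

```sql
SELECT ProductID, Name
FROM   dbo.Product
WHERE  (@ReorderPoint IS NULL OR ReorderPoint > @ReorderPoint)
OPTION (RECOMPILE);  -- plan is built for the actual parameter values on each call
```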
February 27, 2008 at 2:41 am
Sadly this is not the result of a cached execution plan but rather the way the optimiser interprets the query. The only way to get the optimiser to build a good execution plan is to use dynamic SQL.
As Jacob advocates, sp_ExecuteSQL is presently the best solution.
There does seem to be another clear differentiation between EXEC and sp_ExecuteSQL: the parameters in sp_ExecuteSQL have to be genuine parameters, not string placeholders. This is a good thing, as it prevents SQL injection. Therefore, you cannot do the following
declare @sql nvarchar(1000), @where nvarchar(100), @ParmDefinition nvarchar(100)
select @sql = N'select * from information_schema.tables where @where',
       @where = N'TABLE_NAME LIKE ''Test%'''
SET @ParmDefinition = N'@where nvarchar(100)'
exec dbo.sp_executesql @sql, @ParmDefinition, @where = @where
Whereas with EXEC you could do
EXEC (@sql + @where)
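For contrast, the parameterized form that does work passes a value, not a clause fragment:

```sql
DECLARE @sql nvarchar(1000), @ParmDefinition nvarchar(100), @Pattern nvarchar(100);
SET @sql = N'SELECT * FROM information_schema.tables WHERE TABLE_NAME LIKE @Pattern';
SET @ParmDefinition = N'@Pattern nvarchar(100)';
SET @Pattern = N'Test%';
-- The pattern is a genuine parameter, so no injection is possible
EXEC dbo.sp_executesql @sql, @ParmDefinition, @Pattern = @Pattern;
```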
February 27, 2008 at 3:40 am
Andrew Hancox (2/27/2008)
The problem that I think I've seen with the @ReorderPoint IS NULL OR ReorderPoint > @ReorderPoint pattern is that if you execute the sproc without the parameter, an execution plan gets cached, and if the next time you run it you supply the parameter, it then uses a terrible execution plan. I know you could get around this with a recompile query hint, but the overhead that introduces is quite high. Anyone had any experience getting around this?
The best way around this is to start using different stored procedures. Generally speaking, any large amount of conditional logic within a stored procedure can render it ineffective (from a performance/caching point of view).
So you do something like this:
IF @ReorderPoint IS NULL
EXEC StoredProcA
ELSE
EXEC StoredProcB @ReorderPoint
Naturally this isn't practical if your conditional logic is complex as you could very quickly end up with numerous conditional paths. But it's the most effective way I can think of.
February 27, 2008 at 3:44 am
This seems to be the route we're going down but we're trying to develop a tool to do smart code generation since we'd end up creating an awful lot of sprocs.
February 27, 2008 at 3:50 am
Something that I have played with, and which works quite nicely, is creating a temporary SP.
Define your proc in a SQL variable:
DECLARE @sql nvarchar(4000), @TableName nvarchar(100), @Schema nvarchar(100)
SELECT @Schema = 'dbo' --, @TableName = '%Objects%'
SET @sql = 'CREATE PROCEDURE #Test
    @TableName nvarchar(100),
    @Schema nvarchar(100)
AS
SELECT *
FROM information_schema.tables
WHERE 1 = 1'
IF @TableName IS NOT NULL
    SET @sql = @sql + ' AND Table_Name LIKE @TableName'
IF @Schema IS NOT NULL
    SET @sql = @sql + ' AND Table_Schema LIKE @Schema'
EXEC (@sql)
EXEC #Test @TableName, @Schema
DROP PROCEDURE #Test
One advantage is that you do not have to cast the data types and can just pass the variables through. What I did find interesting is that the lifespan of the temporary procedure extends beyond the EXEC statement, which would not happen with a temporary table.
February 27, 2008 at 4:42 am
Jacob - the information given in this article is certainly food for thought, and people can use whichever option is relevant to their context. I have used an IF EXISTS clause in one of our requirements to avoid a dynamic query; it may not be suitable in all cases. Notice that the WHERE condition is used only if the given condition is not true.
--@groups is comma separated values
--SplitStrings - function returns a table
CREATE PROCEDURE [dbo].[p_Test]
@userid varchar(30),
@groups varchar(200)
AS
BEGIN
SET NOCOUNT ON;
IF EXISTS(SELECT StringId FROM dbo.SplitStrings(@groups) WHERE StringId IN
('group1','group2','group3','group4'))
SELECT a.col1,a.col2,b.col1,b.col2
FROM TabA as a inner join TabB as b on a.id=b.id
ORDER BY a.col1,a.col2
ELSE
SELECT a.col1,a.col2,b.col1,b.col2
FROM TabA as a inner join TabB as b on a.id=b.id
WHERE (a.userid=@userId OR a.delegateUserId=@userId)
ORDER BY a.col1,a.col2
END
I feel dynamic queries are handy, and best, for 'Search' procedures where some parameters are mandatory and some are not.
Thanks,
Rishi