May 20, 2010 at 7:10 am
bjarup (5/20/2010)
Nice writeup. Maybe I am a bit off topic, but I would never allow NULL values to pass through to the SELECT statement in a procedure if the underlying tables do not allow NULLs. This should be handled in the front-end application or by using non-NULL default values for the parameters.
Might I ask how you handle stored procedures with optional parameters then?
I have one developer who kept sending me -1 when he didn't care about a parameter. That worked until the day we discovered -1 values stored in the table. Now everybody passes NULL when a particular parameter isn't needed for the use case at hand.
Search-type stored procs almost always have more parameters than will be used in a single invocation, because one call might use parameters D, E, and F where another used A, B, and C.
Other than the string-concatenation injection approach (ick gack), the only two semi-safe ways I know to handle this are the (fieldname = @parameter OR @parameter IS NULL) pattern and parameterized dynamic SQL.
Yes, the parameterized approach puts several different plans into the plan cache, but the ones matching common usage patterns tend to stay in the cache while the rest age out. That means the next call using the most common invocation of the procedure will find its plan already cached. Yayz for the win.
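For anyone following along, here is a minimal sketch of that parameterized dynamic SQL pattern; the proc, table, and column names are hypothetical:
CREATE PROCEDURE dbo.SearchCustomers
    @Name varchar(50) = NULL,
    @City varchar(50) = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT CustomerID, Name, City FROM dbo.Customers WHERE 1 = 1';
    -- Append a predicate only for the parameters the caller actually supplied,
    -- so each combination compiles to its own tailored plan.
    IF @Name IS NOT NULL SET @sql = @sql + N' AND Name = @Name';
    IF @City IS NOT NULL SET @sql = @sql + N' AND City = @City';
    -- The values stay as parameters: no injection risk from the values,
    -- and each distinct statement text is cached and reused.
    EXEC sp_executesql @sql,
        N'@Name varchar(50), @City varchar(50)',
        @Name, @City;
END
Each distinct combination of supplied parameters produces a distinct statement text, so each gets its own cached, reusable plan, which is the behavior described above.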
May 20, 2010 at 7:20 am
What about security?
If you use stored procedures as a security mechanism, this approach will fail. Some applications use only stored procedures for DB access and remove all direct access to the base tables. But the moment you put dynamic SQL into an SP in the fashion shown in the article, the SP ceases to rely on EXECUTE permission alone: ownership chaining does not extend into dynamic SQL, so the database checks the caller's SELECT permission on the table referenced in the dynamic statement. In some environments this is unacceptable.
EDIT: Never mind, Gail posted a possible method of dealing with this in the referenced blog.
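To make the permission point concrete, a minimal sketch with hypothetical names; with static SQL the ownership chain covers the table access, with dynamic SQL it does not:
-- Static SQL: ownership chaining lets a caller with only EXECUTE
-- permission on the proc read dbo.Customers.
CREATE PROCEDURE dbo.GetCustomersStatic AS
    SELECT CustomerID FROM dbo.Customers;
GO
-- Dynamic SQL: the chain is broken. At EXEC time the caller's own
-- SELECT permission on dbo.Customers is checked, and the call fails
-- unless that permission has been granted directly.
CREATE PROCEDURE dbo.GetCustomersDynamic AS
    EXEC sp_executesql N'SELECT CustomerID FROM dbo.Customers';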
May 20, 2010 at 7:32 am
Grant Fritchey (5/20/2010)
I'm sorry to disagree with you here, but dynamic queries do get cached. And if you build them using the parameterized approach, they can get reused too. I would say that using dynamic SQL should be an edge case, but to suggest that it's "never" a possible solution is taking away a useful tool.
I'm not proud to admit it, but I've got this sort of parameterized dynamic SQL everywhere. Requirements are that the stored procs have optional parameters to enable multiple different searches on the data, without writing a stored proc for every test case and forcing the program to invoke the appropriate one.
It's ok. In the CRUD world of things... CUD are all straightforward, just 25% of my queries are dynamic parameterized ones... 🙂
May 20, 2010 at 7:33 am
I edited the post above when I read Gail's blog, but even that isn't a great solution in toto once I started testing it. You CAN use EXECUTE AS to avoid granting rights on the base tables, but the problem with EXECUTE AS is that you lose the ability to audit. When you execute under a different context, the activity is logged as that context, so if you need to know exactly who touched the data through that proc, you lose the user name for the duration of the call. I suppose you could switch contexts multiple times within the proc to work around this, but that assumes the SQL is written by a savvy SQL coder (and in my experience, a lot of code is written by developers who know just enough to be dangerous).
So I guess I am saying that while dynamic SQL may have its place, you'd have to be careful how often that is done; it still shouldn't be a primary data access strategy.
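For context, the EXECUTE AS pattern under discussion looks roughly like this (names are hypothetical; the audit caveat is in the comments):
-- The caller needs only EXECUTE permission; the dynamic SQL inside
-- runs under the owner's context instead of the caller's.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerID int = NULL
WITH EXECUTE AS OWNER
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE 1 = 1';
    IF @CustomerID IS NOT NULL
        SET @sql = @sql + N' AND CustomerID = @CustomerID';
    EXEC sp_executesql @sql, N'@CustomerID int', @CustomerID;
    -- The audit problem: inside the proc, SUSER_SNAME() now reports the
    -- owner, not the caller. ORIGINAL_LOGIN() still returns the connecting
    -- login, if the audit mechanism can be pointed at it.
END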
May 20, 2010 at 7:34 am
mtassin (5/20/2010)
Grant Fritchey (5/20/2010)
I'm sorry to disagree with you here, but dynamic queries do get cached. And if you build them using the parameterized approach, they can get reused too. I would say that using dynamic SQL should be an edge case, but to suggest that it's "never" a possible solution is taking away a useful tool.
I'm not proud to admit it, but I've got this sort of parameterized dynamic SQL everywhere. Requirements are that the stored procs have optional parameters to enable multiple different searches on the data, without writing a stored proc for every test case and forcing the program to invoke the appropriate one.
It's ok. In the CRUD world of things... CUD are all straightforward, just 25% of my queries are dynamic parameterized ones... 🙂
I wouldn't overly sweat it. There are systems and circumstances where it makes sense. I'd still contend that it does not make sense most of the time.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 20, 2010 at 7:38 am
Jeff Mason-256568 (5/20/2010)
What about security? If you use stored procedures as a security mechanism, this approach will fail. Some applications use only stored procedures for DB access and remove all direct access to the base tables. But the moment you put dynamic SQL into an SP in the fashion shown in the article, the SP ceases to rely on EXECUTE permission alone, and the database checks the caller's SELECT permission on the table referenced in the dynamic statement. In some environments this is unacceptable.
EDIT: Never mind, Gail posted a possible method of dealing with this in the referenced blog.
The way to avoid granting the user access to the tables is to sign the procedures with a certificate. The user then has no access to the tables, but the "certificate user" does.
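A minimal sketch of the certificate-signing approach, with placeholder names and password; see Erland Sommarskog's write-up for the full treatment:
-- 1. Create a certificate and a database user bound to it.
CREATE CERTIFICATE SearchProcCert
    ENCRYPTION BY PASSWORD = '<placeholder password>'
    WITH SUBJECT = 'Signs search procedures';
CREATE USER SearchProcCertUser FOR CERTIFICATE SearchProcCert;
-- 2. Grant the table rights to the certificate user, not to real users.
GRANT SELECT ON dbo.Customers TO SearchProcCertUser;
-- 3. Sign the procedure. Its dynamic SQL now runs with the caller's
--    permissions PLUS those of the certificate user, and the caller's
--    identity is preserved for auditing.
ADD SIGNATURE TO dbo.SearchCustomers
    BY CERTIFICATE SearchProcCert
    WITH PASSWORD = '<placeholder password>';
One caveat: altering a signed procedure drops the signature, so it has to be re-applied after every deployment.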
May 20, 2010 at 7:38 am
Tim Cullen (5/20/2010)
One of the things I do during development of a stored procedure that includes dynamic SQL is, instead of executing the statement, to PRINT (http://msdn.microsoft.com/en-us/library/ms176047.aspx) it. From there you can review the statement to make sure it's what you actually intended, as well as perform other tasks like setting STATISTICS IO and TIME on and reading the estimated execution plan.
In those cases, you can use application roles to handle the security: grant the application role SELECT rights, then call sp_setapprole to switch the session to that role.
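A rough sketch of that application-role setup; the role name and password are placeholders:
-- Create the application role and grant it the table rights directly.
CREATE APPLICATION ROLE SearchAppRole WITH PASSWORD = '<placeholder password>';
GRANT SELECT ON dbo.Customers TO SearchAppRole;
-- After connecting, the application activates the role; the session
-- then carries the role's permissions instead of the login's.
EXEC sp_setapprole @rolename = 'SearchAppRole', @password = '<placeholder password>';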
May 20, 2010 at 7:40 am
246747 (5/20/2010)
Jeff Mason-256568 (5/20/2010)
What about security? ... EDIT: Never mind, Gail posted a possible method of dealing with this in the referenced blog.
The way to not grant the user access to the tables is sign the procedures with a certificate. That way the user does not have access to the tables but the "certificate user" does.
Yah... that's even more cool. We're finally getting 2000 off of our production machines shortly and I look forward to nifty things like certificates.
May 20, 2010 at 7:55 am
Grant Fritchey (5/20/2010)
I wouldn't overly sweat it. There are systems and circumstances where it makes sense. I'd still contend that it does not make sense most of the time.
I used to, but I don't anymore. When it was pointed out that a stored procedure with 8 distinct optional parameters would require me to write 255 stored procs to cover every combination (one per non-empty subset: 2^8 - 1 = 255), I went searching for something better and ran into Gail's article, which led me down this road.
Folks still wonder why the stored procs I write run so fast... and I keep telling them that I learned it all here 🙂
May 20, 2010 at 8:40 am
Jeff Mason-256568 (5/20/2010)
I edited the post above when I read Gail's blog, but even that isn't a great solution in toto once I started testing it. You CAN use EXECUTE AS to avoid granting rights on the base tables, but the problem with EXECUTE AS is that you lose the ability to audit.
Or, you can sign the procedures. It's easy and very cool. The author of this article ends by saying he will explore that next time. For those who cannot wait, see Erland Sommarskog's site (the link has been posted several times).
Paul
May 20, 2010 at 9:01 am
If you rearrange each clause so that the test is
(@parameter is null or column = @parameter)
it won't ever look at the column values when the parameter is null. SQL Server uses "short circuit boolean evaluation" so it won't actually look at the "column = @parameter" when "@parameter is null" evaluates to true, because the result of "A or B" is known when A is true.
Whether SQL Server is smart enough not to evaluate "@parameter is null" on every row, I can't say. I would hope it would know that the result isn't going to change.
May 20, 2010 at 9:08 am
jmerrill 28016 (5/20/2010)
If you rearrange each clause so that the test is (@parameter is null or column = @parameter)
it won't ever look at the column values when the parameter is null. SQL Server uses "short circuit boolean evaluation" so it won't actually look at the "column = @parameter" when "@parameter is null" evaluates to true, because the result of "A or B" is known when A is true.
Whether SQL Server is smart enough not to evaluate "@parameter is null" on every row, I can't say. I would hope it would know that the result isn't going to change.
I'm sorry, this is not true. Short circuit evaluation applies strictly only to control of flow language, not to DML statements.
SQL Server is free to evaluate expressions in any order inside DML statements.
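A classic illustration of why relying on evaluation order inside a WHERE clause is unsafe (the table here is hypothetical):
-- Suppose dbo.Settings(SettingValue varchar(20), IsNumericValue bit)
-- holds both numeric and non-numeric strings. Logically the
-- IsNumericValue = 1 test "protects" the CAST, but the optimizer may
-- evaluate CAST(SettingValue AS int) first, so this query can still
-- fail with a conversion error on a non-numeric row.
SELECT SettingValue
FROM dbo.Settings
WHERE IsNumericValue = 1
  AND CAST(SettingValue AS int) > 100;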
I wrote an article on the subject. It's still in draft, but I can send you a copy if you want it.
-- Gianluca Sartori
May 20, 2010 at 9:36 am
I certainly am surprised by that. I'd love to have a copy of the article. How best can I get my email address to you without it going to the world?
May 20, 2010 at 9:41 am
jmerrill 28016 (5/20/2010)
I certainly am surprised by that.
A few posts ago:
May 20, 2010 at 9:47 am
It's true I'm just past novice with SQL Server, and this article interests me a great deal, because we have many queries that are used to filter results back to the user.
My question is a simple one, and I noticed it because our test for a NULL parameter comes first, whereas yours comes second.
i.e., we write:
select * from tableX X where
(@demo is NULL or (X.demo = @demo))
AND (@enabled is NULL or (X.enabled = @enabled))
whereas you have the reverse:
select * from tableX X where
(X.demo = @demo or @demo is NULL)
AND (X.enabled = @enabled or @enabled is NULL)
My question is this: does the order within the query matter? I.e., does SQL Server work from left to right, and if so, would putting the parameter test first avoid evaluating the second half of the condition entirely?
Thanks!