May 20, 2003 at 9:27 am
I am creating a SP that passes a variable into a where clause:
Select * from news_items where division = @Var1
Var can be
IT
Accounts
Sales
The question is: I can set a default, but the default needs to return everything, i.e. as if there were no where clause.
Is there a value I can pass that includes everything?
I cannot use a nested statement.
May 20, 2003 at 10:12 am
create procedure MySP
@var varchar(20) = null
as
if @var is null
Select * from news_items
else
Select * from news_items
where division = @var
Steve Jones
May 20, 2003 at 10:13 am
Could try either:
declare @Var1 varchar(50)
set @Var1 = null
select * from tblNews_Items where division = isnull(@Var1,division)
select * from tblNews_Items where (@Var1 is null or division = @Var1)
May 20, 2003 at 2:33 pm
Steve Jones's solution is better.
There is an 'issue' with ORs and optimization.
I didn't actually check my facts; I remember it from some article.
May 21, 2003 at 2:26 am
select * from tblNews_Items where division = coalesce(@Var1,division)
May 21, 2003 at 3:54 am
In this situation, where there is minimal difference between running with and without the variable, I personally prefer Steve's code with a twist.
--------- Call this as MySP, with or without the @var variable.
create procedure MySP;1
@var varchar(20) = null
as
SET NOCOUNT ON
if @var is null
EXEC MySP;2
else
EXEC MySP;3 @var
GO
----------This is the item to run for all records.
CREATE procedure MySP;2
AS
SET NOCOUNT ON
Select * from news_items
GO
----------This is the item to run for select records based on division.
CREATE procedure MySP;3
@var varchar(20)
AS
SET NOCOUNT ON
Select * from news_items
where division = @var
GO
The above is a procedure group. When you call MySP it runs MySP;1 by default. The advantage of this over what Steve presented is that MySP;2 and MySP;3 are called individually based on whether @var is null or not, and SQL Server stores a separate execution plan for each, so you get better query performance.
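To make the calling pattern concrete, here is a quick usage sketch (the division values are just examples):
EXEC MySP                  -- runs MySP;1, which dispatches to MySP;2 (all rows)
EXEC MySP @var = 'IT'      -- MySP;1 dispatches to MySP;3 (filtered)
EXEC MySP;3 @var = 'Sales' -- a group member can also be called directly by its number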
I do like the Coalesce option; however, the execution plan will be based on a lookup of the division index if one exists (which, with only 3 values, is usually a bad index due to statistics anyway). If division is not the clustered index (and it would really not be a good choice for one), then you have to make two index jumps to get to the data. Basically, when a query hits a non-clustered index, it reads the related clustered index key values from there and then finds those values in the clustered index. In the case of
Select * from news_items
it would only read the clustered index (index scan) to return the resultset.
In the case of
select * from tblNews_Items where division = coalesce(@Var1,division)
where @Var1 IS NULL, you could basically write it this way:
select * from tblNews_Items where division = division
Here the lookup, if there is a nonclustered index on division, will hit the division index first and then the clustered index.
The same sort of problem exists in Steve's code.
If @var was NULL the first time, the stored execution plan will be based on the clustered index; if not, then on the division index (if it exists). If you later run the other branch, the stored execution plan will cause performance hits on the runs where @var is the opposite.
Most people will tell you in that case to add the WITH RECOMPILE option to the SP to regenerate a good execution plan each time for best performance. However, each run then costs time for the query engine to decide on an execution plan, so you take a hit there.
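For reference, the WITH RECOMPILE version people suggest would look something like this sketch (MySP_Recompile is just an illustrative name, using the IsNull form discussed below):
CREATE procedure MySP_Recompile
@var varchar(20) = null
WITH RECOMPILE -- plan is rebuilt on every execution: a fresh plan, but a compile cost each run
as
SET NOCOUNT ON
Select * from news_items
where division = IsNull(@var, division)
GO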
With the above, MySP;1 has no real execution plan, as it is just a decision about which of the other SPs in the group to execute. The others, however, will store specific execution plans centered around their queries. And since you have 2 options, it is simple to keep up with.
Finally, test each solution and see what works best for you. But I will say: if there is an index on division, try the above. If not, then consider Patick Wellnick's (well0549) version, but with one minor change.
select * from tblNews_Items where division = IsNull(@var,division)
The reason is the way Coalesce handles checking for NULL: even though the output will be the same, it will check all submitted options for NULL, whereas IsNull will only check the first and otherwise assume the next is the output. No major impact, but it does save some overhead in memory and CPU cycles.
May 21, 2003 at 5:12 am
Do you think this is a good idea?
create procedure MySP
@var varchar(20) = '%_%'
as
Select * from news_items
where division = @var
What do you think?
May 21, 2003 at 5:15 am
It won't work unless you actually have a division named '%_%'; otherwise you have to use LIKE, which will give you very poor performance for your situation.
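If you did want to try the LIKE route anyway, a minimal sketch would be the following (MySP_Like is just an illustrative name; note that '%' matches everything except NULL divisions, and the wildcard generally forces a scan):
create procedure MySP_Like
@var varchar(20) = '%' -- '%' as the default matches all non-NULL divisions
as
SET NOCOUNT ON
Select * from news_items
where division LIKE @var
GO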
May 22, 2003 at 4:54 am
Have you tried:
If @Var1 <> ''
    Exec ('Select * from news_items where division = ''' + @Var1 + ''' ')
else
    Exec ('Select * from news_items')
You can create your filter, append it to the select statement, and run it as dynamic SQL.
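The same idea can also be written with sp_executesql so the value is passed as a parameter instead of being concatenated into the string (a sketch, assuming @Var1 is varchar(20)):
IF @Var1 <> ''
    -- the statement is parameterized, so no embedded quotes are needed
    EXEC sp_executesql
        N'Select * from news_items where division = @d',
        N'@d varchar(20)',
        @d = @Var1
ELSE
    EXEC ('Select * from news_items')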
tung
Luani
May 22, 2003 at 7:34 am
Antares686:
quote:
In this situation, where there is minimal difference between running with and without the variable, I personally prefer Steve's code with a twist.
Since there are only three possible divisions, wouldn't it be a good idea to create one SP per division? I.e. to optimize/compile/save a query plan for each possible resulting query? I.e. using the 'switchyard' (Robert Marda(TM)) approach, where the first SP, based on IFs, chooses the proper final SP without a parameter?
Regards, Hanslindgren
May 26, 2003 at 3:33 pm
quote:
Antares686:
quote:
In this situation, where there is minimal difference between running with and without the variable, I personally prefer Steve's code with a twist.
Since there are only three possible divisions, wouldn't it be a good idea to create one SP per division? I.e. to optimize/compile/save a query plan for each possible resulting query? I.e. using the 'switchyard' (Robert Marda(TM)) approach, where the first SP, based on IFs, chooses the proper final SP without a parameter?
Regards, Hanslindgren
My brain just isn't seeing what you are asking right now. But are you asking whether it would be better to write three SPs, with a central one called, but each having a unique name? If so, then let me point out that even though each SP has the same name (technically), they are not actually the same procedure as far as SQL is concerned. Procedure groups allow you to use the same name, but the ;n is important to SQL to know they are not the same, and thus each one does get optimized, compiled, and its execution plan saved. It does, however, make it easier when opening them in EM (and some other apps), as it will display them all within the same edit window.
May 27, 2003 at 3:32 am
Antares686:
quote:
But are you asking whether it would be better to write three SPs, with a central one called, but each having a unique name?
quote:
---------- This is the item to run for all records.
CREATE procedure MySP;2
AS
SET NOCOUNT ON
Select * from news_items
GO
----------This is the item to run for select records based on division.
CREATE procedure MySP;3
@var varchar(20)
AS
SET NOCOUNT ON
Select * from news_items
where division = @var
GO
I think it is a good idea to keep them with the same name, using the index number to reference them; my question was whether it wouldn't be better to also have one SP for every possible WHERE clause, i.e. one for 'IT',
one for 'Accounts', and one for 'Sales'?
Or can the Query Optimizer optimize/parse/compile/save the final SP you wrote with the generic WHERE clause?
Regards, Hanslindgren!
May 27, 2003 at 3:44 am
Ok, I understand now.
No, the compiler will be fine with a single execution plan for a generic WHERE clause, as you are hitting the same column each time, and the execution plan determines index utilization for reaching the data, not the actual location of the data. Doing a separate SP for each should make no difference, since the base execution plan will be the same each time.
May 27, 2003 at 5:30 am
Thank you! That was exactly what I wanted to know 🙂
But then I have another issue for discussion. I thought they could be related, but maybe not.
Our company upgraded both our software, from SS 7.0 to SS 2000, and our hardware, to a new cluster configuration. We have a lot of similar aggregation SPs that run every night, each aggregating between 0.5 and 5 million records.
Now to the issue, which I thought I had resolved in a less-than-obvious way.
Before the upgrade, the SPs each took a couple of minutes (1-10) to run. After the upgrade they never finished running, or at least they all took more than 12 hours (and we had to kill them). The problem was that running the same SPs in QA still took 1-10 minutes. All were GROUPing BY a 'timestamp' column (an integer holding the number of seconds since '1970-01-01'). The table/index structure remained the same after the upgrade. The funny thing was that when we replaced the IN parameters in the SPs with their static equivalents, the SPs became fast again. After some time I realised that the query plan for the SP with parameters always ended up using nested loop joins and not running in parallel. Without parameters, it changed join type and started using parallelism. The ugly part of the story is that when doing a 'parameter swap'
(i.e. for a parameter declared as an INT, say @a, writing: DECLARE @b AS INT
SET @b = @a -- the value of the IN parameter to the SP
and then using @b instead of @a in the rest of the SP), they all have the same runtime (and query plan) as with static values (and as when the SP is run as a query in QA).
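For anyone who wants to reproduce the swap, here is a minimal sketch of the pattern (the procedure, table, and column names are hypothetical):
CREATE procedure AggregateExample
@a int -- the IN parameter whose value the optimizer would otherwise use at compile time
AS
SET NOCOUNT ON
DECLARE @b int
SET @b = @a -- copy into a local variable; the plan is now built for an unknown value
Select [timestamp], count(*) as cnt
from fact_table -- hypothetical table
where [timestamp] >= @b
group by [timestamp]
GO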
I do not know why this 'avoided' the problem, or how to protect oneself against it in the future; all I know is that this was a good enough workaround for this particular problem.
If anyone has any comments or an explanation of why this happened and/or why my approach managed to avoid the problem, I would be very happy to hear it!
Regards, Hans!