September 15, 2008 at 2:42 am
Starting a new role, I've found all development has been done in views and ad hoc T-SQL.
I've used stored procedures for years, just accepting that it was 'best practice' to do so. Therefore I just continue to do so, rather than questioning why!
I know about security, ease of administration, keeping db code all in one place etc, but I'd like to know about performance.
Should I be convincing my team to change to Stored Procedures and why?
I've read the answer concerns query plan reuse, but I've also read this occurs with T-SQL anyway.
thank you
r
September 15, 2008 at 3:09 am
The performance of ad hoc SQL and stored procedures is pretty much the same. What procedures give you is better plan cache reuse (and, if your server has lots of memory and the DB has lots of users, less chance of token cache 'pollution').
For a piece of ad hoc SQL to reuse a plan, it must match a previously run query exactly (down to the white space). The data types of the parameters must also match exactly.
To give you an idea, I cleared out the proc cache on my test machine, then ran the following 4 queries. One would think that there would be only one plan in the cache afterwards, as they're all essentially the same query. In reality, there were 4 plans, each with an execution count of 1.
use AdventureWorks
go
dbcc freeproccache
go
select Name, ProductID, Color, ListPrice from Production.Product where ListPrice=0
go
select Name, ProductID, Color, ListPrice from Production.Product where ListPrice = 0
go
select Name, ProductID, Color, ListPrice from Production.Product where ListPrice = 0.0
go
select Name, ProductID, Color, ListPrice from Production.Product where ListPrice = 0.00
go
select plan_handle, execution_count, text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.plan_handle)
With lots of ad hoc SQL, it's not unusual to find the plan cache filled with many different variations of queries, each with a very low execution count and very little chance of being reused.
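For contrast, here's a sketch of the stored-procedure side of the same experiment (the procedure name usp_ProductsByPrice is hypothetical, invented for illustration): wrap the query in a procedure, call it with differently formatted values, and you should see a single cached plan with its execution count climbing.

```sql
-- Sketch only: usp_ProductsByPrice is a hypothetical name, same AdventureWorks table as above.
use AdventureWorks
go
create procedure usp_ProductsByPrice
    @ListPrice money
as
    select Name, ProductID, Color, ListPrice
    from Production.Product
    where ListPrice = @ListPrice
go
dbcc freeproccache
go
-- The same three "shapes" that produced three separate ad hoc plans earlier:
exec usp_ProductsByPrice @ListPrice = 0
exec usp_ProductsByPrice @ListPrice = 0.0
exec usp_ProductsByPrice @ListPrice = 0.00
go
-- Expect one plan for the procedure's statement, with execution_count = 3
select plan_handle, execution_count, text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.plan_handle)
```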
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 15, 2008 at 3:57 am
thank you for your speedy reply Gail 🙂
So if I change
Select columns from tables where id = 3
into
exec usp_simpleselect @id = 3
would there be multiple query plans, one each for usp_simpleselect @id=3, usp_simpleselect @id=7, usp_simpleselect @id=92 etc.,
or just one, for 'usp_simpleselect'?
forgive the oversimplified example 🙂
r
September 15, 2008 at 4:17 am
Just one.
There would probably be only one for the following three queries
Select columns from tables where id = 3
Select columns from tables where id = 7
Select columns from tables where id = 92
because the data type of the auto-parameterised literal is tinyint in all cases. Add this, and you'll probably get a second plan, since 258 doesn't fit in a tinyint and gets typed as smallint instead:
Select columns from tables where id = 258
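One way to see this for yourself (a sketch, assuming simple parameterization kicks in for the query in question): the cached text of an auto-parameterised query includes the type SQL Server chose for the literal.

```sql
-- Sketch: inspect the parameterised form SQL Server cached.
-- For literals 3, 7 and 92 you'd expect one entry looking something like
--   (@1 tinyint)select columns from tables where id = @1
-- and for 258 a second entry with (@1 smallint).
select usecounts, cacheobjtype, objtype, text
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle)
where objtype = 'Prepared'
```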
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 16, 2008 at 3:08 am
I second Gail's comment... and just to add: part of writing SPs is determining the expected limits of variables and choosing the most suitable data type. A suitable data type also acts as a safeguard when an unexpected value reaches your db, so you get not only a performance gain but also added security. I've worked with both the worst- and best-implemented dbs and I've seen this happen. Good luck with the new job!
September 16, 2008 at 6:36 am
thank you both very much for your input
rgds
r
September 16, 2008 at 11:56 am
Coming from the developer's perspective, I have not found a significant performance difference between views and SPs. And when the performance is close enough, I start emphasizing other characteristics such as maintainability and robustness.
The main difference I see between views and SPs is that we can't pass parameters to a view. It might be nice:
create view SomeView( @StartDate datetime, @EndDate datetime )
as
select a, b, c, d
from dbo.someTable
where CreateDate between @StartDate and @EndDate
go
select * from dbo.SomeView( '2008-01-01', '2008-07-01' );
(On second thought...maybe not.)
On the other hand, SPs are an absolute nightmare for inserts and updates. So go with whichever best suits your needs. It's not an either/or decision -- I think you will find that a well-designed db will have its share of both.
Tomm Carr
--
Version Normal Form -- http://groups.google.com/group/vrdbms
September 16, 2008 at 12:10 pm
You can always create a function that returns a table to mimic view functionality but with parameters. Then you get into the arena of #temp tables vs table functions for a diversion, but that's a different matter. The main advantage of a table function is that you can use it in JOINs just like a view, whereas you can't use a #temp table inline like that.
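A minimal sketch of that idea (names are hypothetical; this is an inline table-valued function standing in for the parameterised view imagined above):

```sql
-- Sketch: inline table-valued function as a "parameterised view".
-- dbo.someTable and its columns are the hypothetical ones from the example above.
create function dbo.SomeViewFn( @StartDate datetime, @EndDate datetime )
returns table
as
return
    select a, b, c, d
    from dbo.someTable
    where CreateDate between @StartDate and @EndDate
go
-- Usable in a join, just like a view (dbo.OtherTable is also hypothetical):
select t.a, t.b, x.SomeCol
from dbo.SomeViewFn('2008-01-01', '2008-07-01') t
join dbo.OtherTable x on x.a = t.a
```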
September 16, 2008 at 1:02 pm
Be curious to know why stored procs are a nightmare for insert/updates.
Sprocs encapsulate functionality and make things easier to maintain, IMHO, for everyone. Cut and paste on the client means the same plan gets called. That's not necessarily the case for cut-and-pasted inline code: subtle differences that might not be apparent can cause new plans.
That might or might not be a big deal, but I'm not sure it's worth the chance.
September 16, 2008 at 4:03 pm
Steve Jones - Editor (9/16/2008)
Be curious to know why stored procs are a nightmare for insert/updates.
create proc dbo.UpdateTable( @Col1 int, @Col2 varchar(10))
as begin
update dbo.UnderlyingTable
set Col1 = ???
If @Col1 is null, do we update Col1 of the table to NULL, or do we interpret it as "leave the current value as is"? If the former, then before calling the proc the user has to set every parameter to the current value of the record being updated, except the one actually being changed, because we have to list every column in the "set". If the latter, what do we do when we actually want to set the value of a column to NULL?
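The common workaround for the "leave as is" interpretation is a COALESCE per column, sketched below (table and key column names are hypothetical), with the obvious caveat that it makes NULL itself impossible to store through this proc:

```sql
-- Sketch: null parameter means "keep the current value".
-- Caveat: with this pattern you can never set a column back to NULL.
create proc dbo.UpdateTable( @Key int, @Col1 int = null, @Col2 varchar(10) = null )
as begin
    update dbo.UnderlyingTable
    set Col1 = coalesce(@Col1, Col1),   -- null @Col1 => keep existing Col1
        Col2 = coalesce(@Col2, Col2)    -- null @Col2 => keep existing Col2
    where KeyCol = @Key
end
```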
You might think an Insert sproc wouldn't have that problem, as all column values should be specified anyway. However, any NULL values will override any default values defined on the table. Say Col1 is defined with a default value of 100. An Insert statement that omits Col1 from the column list will cause the default value to be stored in the column. However, if you explicitly list Col1 and specify NULL as the value (which the sproc will have to do), the system will store the NULL in the column -- unless it's also defined NOT NULL, in which case an error will be generated. So you have to make sure the sproc knows about all the NOT NULL and default definitions associated with the table so it knows to use the default value for each NULL. But then again, what if the NULL is the actual value you want stored in the column and you mean to override the default?
Nightmare. Absolute nightmare.
Now, it's true that some views, particularly ones that join several tables, could have similar problems in their trigger code. But "could" is a whole lot better than "definitely."
Tomm Carr
--
Version Normal Form -- http://groups.google.com/group/vrdbms