January 12, 2009 at 3:14 pm
A couple of best-practices questions for optimizing stored procedures:
1. Which is better: to declare a variable and assign it a value, or just use a subquery? For example,
DECLARE @Name VARCHAR(20)
SET @Name = (SELECT Name FROM Users WHERE UserID = @UserID)
IF @Name IS NULL --then do stuff
VS.
IF (SELECT Name FROM Users WHERE UserID = @UserID) IS NULL --then do stuff
I'm guessing the subquery is the more efficient way, but I wanted to verify that.
2. When joining multiple tables in a stored procedure, is there an advantage to using a view rather than joining all the tables within the stored procedure itself? For example:
SELECT A.Col1, B.Col2, B.Col3, C.Col1
FROM A JOIN B ON A.Col1=B.Col1
--....etc.
VS.
SELECT * FROM vwJoinedTables
I prefer the first method since it is easy to see where the data is coming from, but the app I inherited almost always uses views within the stored procedures, so maybe there is some method to the madness?
3. When is it best to call another stored procedure from a stored procedure? Again, my inherited app does this all the time. For example, this will be within a stored procedure:
EXEC usp_DeleteUser @userid = @userid
EXEC usp_InsertAudit @userid = @userid, @function = 'delete'
VS.
DELETE FROM Users WHERE UserID = @UserID
INSERT INTO Audits ([function], [Data]) VALUES ('delete', 'Deleted UserID: ' + CAST(@UserID AS VARCHAR(10)))
Thanks for any guidance you can give me
January 13, 2009 at 1:43 am
My opinions.
1. Yeah, the second method would be better, but it depends what your query does afterwards. I try to stay away from large queries wrapped in IF statements; rather, create stored procedures for the code between the IF statements (see the sketch at the end of this post). The reason for this is that SQL Server stores and caches the execution plan from the first successful execution of a stored procedure, so many IF statements could result in a cached plan that doesn't suit every branch. Whereas if you have one SP with a few sub-procedures underneath it, separate plans are stored and cached for each. This is my understanding; I could be wrong though.
2. I go with option 1, depending. If it is code you're using everywhere, then the view is better; performance-wise there is not much in it, since the queries use the same indexes etc. But always try to give good justification for creating objects in the database.
3. Option 1 is better, for maintainability and performance.
My opinions. I would be interested to see if someone else has justification for any of the other options.
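A minimal sketch of the branch-splitting idea from point 1, assuming a hypothetical Users table with an IsActive column; all the procedure names here are made up:

-- Each IF branch becomes its own small proc, so each keeps its own cached plan
CREATE PROCEDURE usp_User_Activate @UserID INT
AS
    UPDATE Users SET IsActive = 1 WHERE UserID = @UserID;
GO

CREATE PROCEDURE usp_User_Deactivate @UserID INT
AS
    UPDATE Users SET IsActive = 0 WHERE UserID = @UserID;
GO

-- The wrapper only branches and delegates
CREATE PROCEDURE usp_User_Toggle @UserID INT
AS
BEGIN
    IF EXISTS (SELECT * FROM Users WHERE UserID = @UserID AND IsActive = 1)
        EXEC usp_User_Deactivate @UserID = @UserID;
    ELSE
        EXEC usp_User_Activate @UserID = @UserID;
END
GO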
January 13, 2009 at 2:15 am
1. It depends on how the value is used. If you use the same value many times in the SP, then it's better to use a variable instead of a subquery each time. If it is used only once, then it's better to write the subquery.
2. Again, it depends on usage. If you do the same JOIN every time, then it's better to write a view. You can even create an index on the view (an indexed view) to improve query performance; there's a sketch after this list.
3. For code maintainability, it is always better to write SPs. But sometimes it may cause performance problems, and sometimes deadlocks. So it is always better to check the possibility of deadlocks before dividing the code into many procedures.
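For reference, a minimal sketch of the indexed view mentioned in point 2, using made-up dbo.Users and dbo.Orders tables (an indexed view requires WITH SCHEMABINDING and two-part object names):

CREATE VIEW dbo.vwUserOrders
WITH SCHEMABINDING
AS
SELECT u.UserID, u.Name, o.OrderID, o.Total
FROM dbo.Users u
JOIN dbo.Orders o ON o.UserID = u.UserID;
GO

-- The unique clustered index is what materializes the view on disk
CREATE UNIQUE CLUSTERED INDEX IX_vwUserOrders ON dbo.vwUserOrders (OrderID);
GO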
Regards,
Nitin
January 13, 2009 at 6:50 am
1) Actually for performance, I'd do this:
IF NOT EXISTS (SELECT * FROM Users WHERE UserID = @userid)
BEGIN
    --then do stuff
END
If you need to use the value in @Name later, then your first choice is better.
2) No, views are just a mechanism for packaging a query for reasons of security, code reuse, data masking, etc. They don't help performance in any way.
3) Usually, if you have multiple commands, meaning
SELECT ... FROM...
SELECT ... FROM...
INSERT ...
UPDATE...
then I would separate those out into different procedures as much as you can. This allows each of these separate statements to get its own execution plan.
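For example, a hypothetical split along those lines (table and procedure names invented):

-- Each statement lives in its own small procedure
CREATE PROCEDURE usp_Order_GetHeader @OrderID INT
AS
    SELECT OrderID, UserID, Total FROM Orders WHERE OrderID = @OrderID;
GO

CREATE PROCEDURE usp_Order_MarkShipped @OrderID INT
AS
    UPDATE Orders SET Shipped = 1 WHERE OrderID = @OrderID;
GO

-- The parent procedure just orchestrates the calls
CREATE PROCEDURE usp_Order_Ship @OrderID INT
AS
BEGIN
    EXEC usp_Order_GetHeader @OrderID = @OrderID;
    EXEC usp_Order_MarkShipped @OrderID = @OrderID;
END
GO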
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 13, 2009 at 6:52 am
nitinpatel31 (1/13/2009)
3. For code maintainability, it is always better to write SPs. But sometimes it may cause performance problems, and sometimes deadlocks. So it is always better to check the possibility of deadlocks before dividing the code into many procedures.
Under what circumstances have you seen separating a statement from one procedure into a separate procedure lead to deadlocks? That's a new one for me.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 13, 2009 at 8:16 am
Grant Fritchey (1/13/2009)
Under what circumstances have you seen separating a statement from one procedure into a separate procedure lead to deadlocks? That's a new one for me.
If proper transactions are not maintained, then it may cause deadlocks. For example, if some table has an S-lock in the parent SP and an X-lock in the child SP, and multiple instances of the parent SP are running (many users/connections executing it), then it is possible to get a deadlock. Here the transaction spans the whole code of the parent SP.
We had such a scenario in our last project. We then divided the work into many smaller transactions to reduce the transaction span.
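To make the scenario concrete, here is a sketch of that lock-conversion pattern (hypothetical table and column names; the REPEATABLEREAD hint just makes the parent hold its S-lock for the whole transaction):

-- Parent: takes an S-lock, holds it, then calls the child in the same transaction
CREATE PROCEDURE usp_Parent @UserID INT
AS
BEGIN
    BEGIN TRAN;
    SELECT Name FROM Users WITH (REPEATABLEREAD) WHERE UserID = @UserID;
    EXEC usp_Child @UserID = @UserID;
    COMMIT;
END
GO

-- Child: needs an X-lock on the same row
CREATE PROCEDURE usp_Child @UserID INT
AS
    UPDATE Users SET Name = UPPER(Name) WHERE UserID = @UserID;
GO

-- If two sessions run usp_Parent for the same row, each holds an S-lock and
-- waits for the other to release it before its UPDATE can take the X-lock:
-- a conversion deadlock. Note that the same code inline, without the child
-- proc, deadlocks in exactly the same way.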
Regards,
Nitin
January 13, 2009 at 8:45 am
nitinpatel31 (1/13/2009)
Grant Fritchey (1/13/2009)
Under what circumstances have you seen separating a statement from one procedure into a separate procedure lead to deadlocks? That's a new one for me.
If proper transactions are not maintained, then it may cause deadlocks. For example, if some table has an S-lock in the parent SP and an X-lock in the child SP, and multiple instances of the parent SP are running (many users/connections executing it), then it is possible to get a deadlock. Here the transaction spans the whole code of the parent SP.
We had such a scenario in our last project. We then divided the work into many smaller transactions to reduce the transaction span.
But, a parent/child procedure is still part of a single transaction, so I don't see how that would lead to more deadlocks than simply having both statements in the original proc.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 13, 2009 at 11:34 am
nitinpatel31 (1/13/2009)
If proper transactions are not maintained, then it may cause deadlocks. For example, if some table has an S-lock in the parent SP and an X-lock in the child SP, and multiple instances of the parent SP are running (many users/connections executing it), then it is possible to get a deadlock. Here the transaction spans the whole code of the parent SP.
A single transaction is a single transaction, regardless of whether it's in one huge procedure or split up among several smaller procedures all called from a parent. Locks are not affected by procedural boundaries.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 13, 2009 at 11:50 am
GilaMonster (1/13/2009)
A single transaction is a single transaction, regardless of whether it's in one huge procedure or split up among several smaller procedures all called from a parent. Locks are not affected by procedural boundaries.
Thanks. I was trying to figure out if I was missing something.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 13, 2009 at 12:02 pm
Grant Fritchey (1/13/2009)
Thanks. I was trying to figure out if I was missing something.
If you are, I'm missing it too.
Although, on SQL 2000, if those sub-procedures had no stable plan (because they created or accessed a temp table inside, for example), it was possible to end up with long-lasting compile locks on the stored procedures. It won't deadlock, but it may slow things down enough for something else to deadlock. That was because, in SQL 2000, only one connection at a time could compile a procedure and put the plan in cache.
It's a pathological case (in the mathematical sense of the word). I've only seen it once and that was with a really large procedure that had 5 or 6 largish subprocedures, all with plans that had to recompile on every run (because of the temp tables), that was running something like 50 times a second, on a server that had very high compiles/sec and recompiles/sec, lots of dynamic SQL and lots of memory.
That's pretty much gone away in 2005, because of the statement-level recompile.
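For illustration, a sketch of the kind of sub-procedure that pattern involved (hypothetical name; the behaviour described is SQL 2000's):

CREATE PROCEDURE usp_UnstablePlan @UserID INT
AS
BEGIN
    -- On SQL 2000, the statements referencing #Work forced a recompile on
    -- every execution, and only one connection at a time could hold the
    -- COMPILE lock while doing so
    CREATE TABLE #Work (UserID INT, Name VARCHAR(20));

    INSERT #Work (UserID, Name)
    SELECT UserID, Name FROM Users WHERE UserID = @UserID;

    SELECT UserID, Name FROM #Work;
END
GO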
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 13, 2009 at 12:12 pm
On the original questions:
1. Use a variable if you'll need the value multiple times. Otherwise, don't bother.
2. Use a view if you'll use the exact same (or similar enough) select statement over and over again. Otherwise, don't bother.
3. Either works just fine. Just make sure that both procs are part of a single transaction, so you don't have the deletion succeed and commit while the audit log fails and rolls back (see the sketch at the end of this post).
On the transaction splitting between procs, that makes no sense at all, unless you took ACID transactions and dirtied them up by placing them in a single proc and adding commits that really didn't belong there. Or if the original multi-proc model was holding transactions that it really shouldn't have because of a lack of commits.
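A minimal sketch of point 3 above, reusing the procedure names from the original post; the wrapper name and the error handling are invented (SQL 2005+ TRY/CATCH):

CREATE PROCEDURE usp_DeleteUserWithAudit @UserID INT
AS
BEGIN
    SET XACT_ABORT ON; -- any error dooms the transaction
    BEGIN TRY
        BEGIN TRAN;
        EXEC usp_DeleteUser @userid = @UserID;
        EXEC usp_InsertAudit @userid = @UserID, @function = 'delete';
        COMMIT;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        -- re-raise so the caller knows both the delete and the audit rolled back
        DECLARE @msg NVARCHAR(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END
GO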
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon