February 5, 2011 at 3:21 am
We have a stored procedure which does inserts, updates and deletes into and from many big tables. There are 20-30 parallel executions of the same sproc. What should I take into consideration to get rid of performance bottlenecks as well as deadlocks?
February 5, 2011 at 6:33 am
That's a pretty big question. Entire books have been written on the subject.
Start with these
Then try Grant's book
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 13, 2011 at 9:22 pm
Hello Gail, I read both the links and they are quite useful. However, my main problem is a little different. We are running one procedure in parallel multiple times (around 30 concurrent executions) with different sets of parameters. In the last section there is an update statement which causes blocking. Can you please suggest something?
February 13, 2011 at 10:33 pm
What are you looking for?
The principle is the same: find the offending code and fix it. The two links should help you with the first, Grant's book with the second.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 13, 2011 at 11:17 pm
I checked almost all of the things. Everything looks fine to me. However, there is one update which affects around one thousand rows (on average) in a table. When many sessions execute the same procedure at the same time, it results in deadlocks. I am thinking of using snapshot isolation.
February 13, 2011 at 11:58 pm
sql_butterfly (2/13/2011)
I checked almost all of the things. Everything looks fine to me. However, there is one update which affects around one thousand rows (on average) in a table. When many sessions execute the same procedure at the same time, it results in deadlocks. I am thinking of using snapshot isolation.
We can speculate an awful lot without ever getting to the problem. Any chance of you posting the code and maybe an "Actual Execution Plan"?
--Jeff Moden
Change is inevitable... Change for the better is not.
February 14, 2011 at 12:18 am
Try to access the tables in the same order in each procedure and use the NOLOCK hint to avoid the shared lock (while accessing the table). It may help to some extent.
February 14, 2011 at 1:08 am
srikant maurya (2/14/2011)
Try to access the tables in the same order in each procedure and use the NOLOCK hint to avoid the shared lock (while accessing the table). It may help to some extent.
It may also give incorrect and inconsistent data. Will make no difference to an update.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
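To illustrate the point above: a minimal sketch with hypothetical table and column names. The NOLOCK hint only affects how a SELECT reads data; an UPDATE still takes update/exclusive locks regardless, so the hint does nothing to prevent write-write blocking or deadlocks.

-- Hypothetical table dbo.Orders, used purely for illustration.
-- NOLOCK skips shared locks on the read, at the cost of possible
-- dirty/inconsistent reads of uncommitted data.
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK);

-- The UPDATE still acquires update and exclusive locks no matter
-- what hints the reads use, so update-vs-update deadlocks remain possible.
UPDATE dbo.Orders
SET Status = 'Processed'
WHERE OrderID = 42;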
February 14, 2011 at 1:32 am
I'm planning to use SET TRANSACTION ISOLATION LEVEL SNAPSHOT before the update. Will it help me avoid deadlocks?
February 14, 2011 at 1:43 am
srikant, there are no issues with tables being accessed in a different order, since we are using the same procedure in different sessions (only with different parameters).
February 14, 2011 at 2:00 am
sql_butterfly (2/14/2011)
I'm planning to use SET TRANSACTION ISOLATION LEVEL SNAPSHOT before the update. Will it help me avoid deadlocks?
Yes. Just make sure that your code can handle and retry in the case of update conflicts.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
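A minimal sketch of what that involves (the database, table and parameter names here are placeholders): snapshot isolation has to be enabled at the database level before SET TRANSACTION ISOLATION LEVEL SNAPSHOT will work inside the procedure.

-- One-time setup: allow snapshot isolation for the database
-- (placeholder database name; this turns on row versioning in tempdb).
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Inside the procedure, before the problematic update:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

    -- Placeholder table/column/parameter names.
    UPDATE dbo.BigTable
    SET SomeColumn = @NewValue
    WHERE BatchID = @BatchID;

COMMIT TRANSACTION;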
February 14, 2011 at 2:38 am
That is something which can be achieved using jobs. However, I have to work within the code to avoid deadlocks. I am stuck and not getting any solution.
February 14, 2011 at 4:08 am
Under snapshot isolation you won't get deadlocks. However, you can get update conflicts which will, similar to a deadlock, result in the transaction getting rolled back.
If you want tuning help, post the deadlock graphs, table definitions, index definitions and execution plan.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
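To illustrate the update-conflict handling Gail mentions, here is a rough retry sketch (object names are placeholders; error 3960 is the snapshot update-conflict error, which dooms the transaction and requires a rollback before retrying):

DECLARE @Retry INT;
DECLARE @Msg NVARCHAR(2048);
SET @Retry = 3;

WHILE @Retry > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRANSACTION;

        -- Placeholder update representing the contended statement.
        UPDATE dbo.BigTable
        SET SomeColumn = 'NewValue'
        WHERE BatchID = 42;

        COMMIT TRANSACTION;
        BREAK;   -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 3960     -- snapshot update conflict: retry
            SET @Retry = @Retry - 1;
        ELSE
        BEGIN                        -- anything else: re-raise and stop
            SET @Msg = ERROR_MESSAGE();
            RAISERROR(@Msg, 16, 1);
            BREAK;
        END
    END CATCH
END;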
February 14, 2011 at 4:15 am
You would need to take note of Gail's inputs. Also, I just wanted to point out that snapshot isolation makes use of tempdb (for the row version store). You need to plan accordingly.
M&M
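As a rough way of keeping an eye on that tempdb overhead (my own suggestion, not something stated in this thread), the size of the row version store can be checked from the performance-counter DMV:

-- Approximate size of the row version store that snapshot isolation keeps in tempdb.
SELECT counter_name, cntr_value AS size_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Transactions%'
  AND counter_name LIKE 'Version Store Size%';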
February 14, 2011 at 4:38 am
Gail, isn't conflict detection present in snapshot isolation? Apart from that, is there any way by which we can avoid deadlocks? Using some locking hints or something? (Sorry for pressing on this, but I think the indexes on the tables are good enough, as per my investigation.)