September 25, 2007 at 10:05 pm
Hi Friends,
In my database there is one big stored procedure, and deadlocks occur in it.
How can I handle this issue? Can anybody please explain?
What steps do we have to follow? Please tell me.
Thanks & Regards
siva
September 26, 2007 at 7:16 am
You could try breaking the proc up into smaller procs. Check when transactions start and end and reduce them to the smallest necessary size.
Run Profiler with the deadlock chain event and examine the deadlock graph to see which statements and tables are involved.
Check the indexing on the tables involved in the deadlock, and check whether the query can be rewritten more optimally.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 26, 2007 at 11:13 am
Do you have BEGIN TRAN ... COMMIT TRAN statements in the sp?
If so, inside the transaction look for SELECT statements that are writing to variables.
i.e.,
BEGIN TRAN
SELECT @id = name FROM peopletable WHERE blah = blah
UPDATE myothertable SET something = somethingelse WHERE id = @id
COMMIT TRAN
If you see something such as this, try adding a NOLOCK hint to the SELECT statement:
SELECT @id = name FROM peopletable WITH (NOLOCK) WHERE blah = blah
-- this stops the SELECT from taking shared locks on peopletable while the transaction is open.
September 26, 2007 at 12:24 pm
Rather than using NOLOCK, move any SELECT statements that you can out of the transaction completely. This shortens the duration of the transaction as well as reducing locking.
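For example, a minimal sketch of that refactoring, reusing the table and column names from the earlier example (all of them placeholders):

```sql
-- Do the read-only lookup before opening the transaction,
-- so the transaction only covers the write.
DECLARE @id INT;

SELECT @id = id
FROM peopletable
WHERE name = 'some name';

BEGIN TRANSACTION;
    UPDATE myothertable
    SET something = 'somethingelse'
    WHERE id = @id;
COMMIT TRANSACTION;
```

The SELECT now takes and releases its shared locks entirely outside the transaction, so nothing it reads stays locked while the update runs.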
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 27, 2007 at 6:49 am
I'd also look at the order of processing between the two procedures that are deadlocking. Be sure that if Proc1 is accessing the tables in the order A, B, C, then Proc2 is not accessing them C, B, A or anything else along those lines. That's the classic deadlock problem.
Just to back up Gail Shaw a bit, not that she needs it at all, but NOLOCK hints don't simply allow dirty reads, they can actually lead to missing data or even duplicate data returned within your query. Itzik Ben-Gan gave a session on this last week at PASS. I wouldn't use them at all unless you're running some sort of statistical query where a few duplicates or a few missing rows don't matter.
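A minimal sketch of that consistent-ordering fix (procedure, table, and column names all hypothetical): make every procedure touch the shared tables in the same order.

```sql
-- Both procedures touch TableA before TableB, so neither can ever
-- hold a lock on B while waiting for a lock on A.
CREATE PROCEDURE dbo.Proc1
AS
BEGIN
    BEGIN TRANSACTION;
        UPDATE dbo.TableA SET Col1 = 1 WHERE Id = 1;
        UPDATE dbo.TableB SET Col1 = 1 WHERE Id = 1;
    COMMIT TRANSACTION;
END;
GO

CREATE PROCEDURE dbo.Proc2
AS
BEGIN
    BEGIN TRANSACTION;
        -- Same order as Proc1: A first, then B
        UPDATE dbo.TableA SET Col2 = 2 WHERE Id = 2;
        UPDATE dbo.TableB SET Col2 = 2 WHERE Id = 2;
    COMMIT TRANSACTION;
END;
GO
```

If the two procedures touched the tables in opposite orders, each could grab one lock and wait forever for the other's.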
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 27, 2007 at 7:37 am
One of the recommendations in Books Online under "deadlocks, avoiding" is...
Keep Transactions Short and in One Batch
A deadlock typically occurs when several long-running transactions execute concurrently in the same database. The longer the transaction, the longer the exclusive or update locks are held, blocking other activity and leading to possible deadlock situations.
Keeping transactions in one batch minimizes network roundtrips during a transaction, reducing possible delays in completing the transaction and releasing locks.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 27, 2007 at 11:51 am
Jeff Moden (9/27/2007)
One of the recommendations in Books Online under "deadlocks, avoiding" is...Keep Transactions Short and in One Batch
A deadlock typically occurs when several long-running transactions execute concurrently in the same database. The longer the transaction, the longer the exclusive or update locks are held, blocking other activity and leading to possible deadlock situations.
Keeping transactions in one batch minimizes network roundtrips during a transaction, reducing possible delays in completing the transaction and releasing locks.
Hi Jeff,
Could you direct us to examples of or best practices for batch processing? I'm thinking of a case where I would want to periodically delete records from a table but won't want to delete, say, 1 million all at once.
How would one break the 1 million records into batches of about 1,000 or 5,000 - and is that what is intended by the above recommendation?
Thanks!
webrunner
-------------------
A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html
September 27, 2007 at 2:22 pm
Iterative delete is easy, and often necessary on large deletes. This is how I do it. Others may have their own preferences.
DECLARE @Done BIT
SET @Done = 0
SET ROWCOUNT 1000 -- or however many rows you want to delete in one go
WHILE @Done = 0
BEGIN
    DELETE FROM Table WHERE Condition
    IF @@ROWCOUNT = 0
        SET @Done = 1
END
SET ROWCOUNT 0 -- reset so later statements are unaffected
That said, that's not really what that recommendation was about. Transactions in one batch is something like this (Note I have no error handling or rollback potential in my example. They should be there in real code)
BEGIN TRANSACTION
Insert something
update something
insert something else
COMMIT TRANSACTION
In two batches it would be something like this. Imagine that the two procs are called from the client app separately, with anything from a few seconds to a couple of hours between them.
Create Procedure Proc1
AS
Begin Transaction
Insert something
Update something
GO
Create Procedure Proc2
AS
Insert something else
Commit Transaction
GO
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 28, 2007 at 8:38 am
Webrunner...
Gail has it right.
I'm always curious about deletes though... why are you deleting instead of just "end dating" or deactivating? Seems like you're losing some history...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 28, 2007 at 11:53 am
In 2005 you can use the TOP clause to avoid the use of SET ROWCOUNT:
DECLARE @BatchSize INT
SET @BatchSize = 1000 -- or however many rows per batch
WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) FROM tablename
    WHERE yourcondition;
    IF @@ROWCOUNT < @BatchSize
        BREAK;
END
* Noel
October 1, 2007 at 2:39 am
If you are using SQL 2005, you can use SNAPSHOT isolation to reduce the probability of lock/deadlock problems. It is much better than the desperate (NOLOCK) hints I have found scattered through many databases. You can also determine precisely why it is happening using SQL Deadlock Detector - http://www.sqlsolutions.com/products/sql-deadlock-detector/index.html - I have it running 24/7, because once the problem has already happened it is too late to check sysprocesses, syslocks, etcetera.
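For reference, a minimal sketch of turning on row versioning in SQL 2005 (database name hypothetical; note that ALTER DATABASE needs the database free of other connections to complete):

```sql
-- Allow sessions to opt in to SNAPSHOT isolation
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make plain READ COMMITTED use row versioning for everyone
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Then, in a session that wants versioned reads:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT * FROM dbo.Orders WHERE CustomerId = 42;
COMMIT TRANSACTION;
```

Under snapshot isolation, readers see a consistent version of the data instead of blocking on writers' locks, which removes many reader/writer deadlock scenarios, at the cost of version-store usage in tempdb.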
October 2, 2007 at 1:01 pm
I would add trace flags -T1204 -T3605 to the SQL Server startup parameters. This way, when deadlocks occur, the relevant information is written out to the SQL Server error log. Granted, a great deal of good information has been added in this thread to show how to fix the issue - but one must identify the offending components first!
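If you don't want to wait for a restart, the same trace flags can be enabled on a running instance (globally, via the -1 argument):

```sql
-- Turn on deadlock reporting (1204) and route the output
-- to the error log (3605) for all connections
DBCC TRACEON (1204, 3605, -1);

-- Confirm the flags are active
DBCC TRACESTATUS (1204, 3605);
```

Flags set this way do not survive a restart, so the startup parameters are still the right place for a permanent setting.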
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
October 2, 2007 at 2:02 pm
If the OP wants, I can post our blocking script.
We have a table that we write data to from sysprocesses every 2 minutes, and if there is blocking it sends an email alert to us.
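The core of such a check is a query along these lines (a minimal sketch; the logging table and the email alerting around it are left out):

```sql
-- Find sessions that are currently blocked, and who is blocking them
-- (sysprocesses is the SQL 2000/2005-era view the post refers to)
SELECT spid,
       blocked        AS blocking_spid,
       waittime,
       lastwaittype,
       DB_NAME(dbid)  AS database_name
FROM master..sysprocesses
WHERE blocked <> 0;
```

Scheduled from a job every couple of minutes, the rows this returns can be inserted into a history table and used to trigger the alert.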