June 7, 2007 at 7:51 pm
If you are experiencing a deadlock and determine which SPID to kill to resolve it, is there a safe way of making sure that the process didn't already complete, with another process being assigned the same process ID, before you get to execute KILL? If you just quickly do another check, what's to say it doesn't switch out at the last moment? I am just here to maintain the system and have no say in rewriting the existing code to prevent future deadlocks (which will definitely keep recurring). Is there a way to mark a SPID so it won't be reused, verify it, kill it, and then allow it to be used again?
June 8, 2007 at 2:52 am
If you have a deadlock, SQL's own internal deadlock detector will detect the deadlock, pick a victim and kill that connection. You don't have to do anything.
Or are you talking about something else?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 11, 2007 at 7:34 am
I don't believe you can GUARANTEE against killing a spid that originally shows up as deadlocked but then gets whacked by SQL Server and reassigned to another process before you issue the KILL command. You can check sysprocesses twice and also get DBCC INPUTBUFFER twice, but I don't think you can lock sysprocesses to ensure nothing changes even if the two checks are identical. Best is probably to let SQL Server handle it.
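If you do want to script the double-check anyway, here is a rough sketch (column names come from master..sysprocesses in SQL Server 2000/2005, spid 57 is just a placeholder, and the window between the final check and the KILL never completely closes):

-- Capture the spid's login_time, re-verify it, then kill via dynamic SQL
DECLARE @spid INT, @login_time DATETIME
SET @spid = 57   -- the blocking/deadlocked spid identified earlier

SELECT @login_time = login_time
FROM master..sysprocesses
WHERE spid = @spid

-- review the DBCC INPUTBUFFER output for the spid here before deciding to kill

IF EXISTS (SELECT 1 FROM master..sysprocesses
           WHERE spid = @spid AND login_time = @login_time)
BEGIN
    -- KILL won't take a variable, so build the statement dynamically
    DECLARE @sql NVARCHAR(20)
    SET @sql = N'KILL ' + CAST(@spid AS NVARCHAR(10))
    EXEC (@sql)
END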
If you get a bunch of these, consider turning on the trace flag that has SQL Server log deadlock information, so you can isolate the code that is causing the problem and fix it. You can also set the deadlock priority to LOW for the spids you are less concerned about, so that SQL Server preferentially picks them as the victims.
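The deadlock priority part is just a one-line session setting; something like this (the batch below is purely illustrative):

-- Run in the session whose work you'd rather sacrifice; the deadlock
-- detector will then prefer it as the victim
SET DEADLOCK_PRIORITY LOW

-- ... the less critical work (e.g. a nightly reporting batch) goes here ...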
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 11, 2007 at 8:53 am
If you want to track your deadlock information add the following to your SQL Server startup parameters:
-T1204
-T3605
This way basic deadlock information is logged to the SQL Server errorlog.
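If a restart isn't convenient, the same flags can also be turned on from a query window (sysadmin required; the -1 makes them global):

-- 1204 = deadlock participant detail, 3605 = send the output to the errorlog
DBCC TRACEON (1204, 3605, -1)

-- verify which flags are active globally
DBCC TRACESTATUS (-1)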
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
June 11, 2007 at 10:27 am
Sorry. As GilaMonster pointed out, I may have been a little carefree with the terminology. It's not a deadlock situation but rather a case where the whole system is effectively locked up due to blocking processes. I know this is not an ideal situation by any means, but as I said before, I have no direct say in effecting changes to the system.
When this happens, I run a quick query to find out which process is blocking the most other processes (both directly and indirectly), where it's coming from, the underlying SQL (and if it's a job it digs a little deeper), and how long it has been running. If it is not essential (not a system SPID <= 50, not a backup job, etc.) and is relatively harmless, I kill it and keep the query results for documentation purposes.
The way things are now, someone always has to babysit the system, and I would like a "guaranteed" method of killing SPIDs. If I arrive at one, I may be able to automate things a little more (with a lot of built-in checks, for safety, of course).
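For what it's worth, here is a rough sketch of that kind of head-blocker query against the old sysprocesses view (it only counts direct blocks, not the full indirect chain, and assumes the SQL Server 2000/2005 column set):

-- Sessions that are blocking others but are not blocked themselves
SELECT
    p.spid,
    p.loginame,
    p.hostname,
    p.program_name,
    p.cmd,
    p.last_batch,
    directly_blocked = (SELECT COUNT(*)
                        FROM master..sysprocesses b
                        WHERE b.blocked = p.spid)
FROM master..sysprocesses p
WHERE p.blocked = 0
  AND EXISTS (SELECT 1 FROM master..sysprocesses b WHERE b.blocked = p.spid)
ORDER BY directly_blocked DESC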
June 11, 2007 at 1:02 pm
To my knowledge you cannot guarantee from one instant to the next that a spid that currently shows as blocking other stuff will not be committed/rolled back and the spid reassigned to another connection (via connection pooling or whatever) before you can get the KILL issued.
Your efforts would probably be better spent analyzing the calls (and doing a Profiler run in general) so you can pinpoint root causes in the code/structures and start fixing things. 🙂 Unfortunately this isn't always possible!
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 11, 2007 at 11:59 pm
I would advise against automating a kill script. I've been in that situation. Best thing is to resolve the cause of the problem, not to have an automated process to treat the symptoms.
When you see something blocking lots of processes, see if you can find out what it's running [DBCC INPUTBUFFER(<spid>)] and what resources the blocks are on (table, page, row, etc.), and then see if you can fix/change the code in question so it doesn't lock so much.
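For example (spid 57 is just a placeholder):

-- what the blocking connection last submitted
DBCC INPUTBUFFER (57)

-- what it currently holds locks on (object, page, key, etc.)
EXEC sp_lock 57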
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 12, 2007 at 6:42 pm
Yeah.
Therein lies the source of my problem.
I can't really change anything.
June 13, 2007 at 12:07 pm
Why can't you change it? Because it's 3rd party???? or ????
--Jeff Moden
Change is inevitable... Change for the better is not.
June 13, 2007 at 5:36 pm
Yes.
It's third party and the vendor is slow to make any changes.
However, it is not wholly their fault, as everything is highly regulated (and, beyond that, there is also a lot of change control at all levels of implementation).