January 12, 2008 at 5:58 am
Unfortunately I inherited this system, and it is in production, so I can't really make many schema changes to accommodate this issue. I am trying to figure out a way to solve it programmatically (passing a query through ASP).
Background:
Topology:
Multiple application servers post XML results randomly and in parallel to 3 load-balanced IIS 6 servers, which in turn query a SQL Server 2000 cluster (SP4 installed, Windows Server 2003 SP1 installed).
Problem:
In order for the app to report results correctly, it updates a report table to add a null row with a few generic settings, then queries a second time to update that row (it just looks for NULLs in one of the columns and adds GUIDs and other info to that null row). Unfortunately, without major changes, I can't change the logic (as I said, I inherited this mess), so I need to find a workaround.
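The pattern is roughly this (names made up, not our real schema):
-- Query 1, sent from ASP: add the generic placeholder row.
INSERT INTO dbo.ReportTable (ResultGuid, SomeSetting)
VALUES (NULL, 'generic')

-- Query 2, sent later as a separate batch: find the placeholder and fill it in.
UPDATE dbo.ReportTable
SET ResultGuid = NEWID() -- in reality the app supplies the GUID and other info
WHERE ResultGuid IS NULL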
Since, in theory, this can happen simultaneously across the application servers, deadlocks occur when the second query fires, because each one is just looking for the generic row with NULL in it to update. Besides using NOLOCK (which can return dirty/inconsistent reads, I believe, right?), can anyone suggest a way around this? I was thinking of some way to check for an exclusive lock before making the query, or adding a reference table that sets a flag while a row is being edited and clears it when editing is done. Not sure these are viable solutions, but it's all I can think of. Maybe some way to force this query to run serially only? Or putting it into a stored procedure that allows only one instance at a time?
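Something like this is what I had in mind for the "one at a time" stored procedure, if it even makes sense (sp_getapplock does exist in SQL 2000 as far as I know; all the object names here are made up):
-- Hypothetical sketch: serialize the whole insert-then-update sequence
-- behind a named application lock. Table/column/proc names are placeholders.
CREATE PROCEDURE dbo.usp_PostResult
    @NewGuid UNIQUEIDENTIFIER
AS
BEGIN TRANSACTION

DECLARE @rc INT
-- Only one caller at a time gets this lock; the rest wait up to 10 seconds.
EXEC @rc = sp_getapplock
    @Resource = 'ReportTablePost',
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 10000

IF @rc < 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN @rc
END

-- Step 1: add the generic placeholder row.
INSERT INTO dbo.ReportTable (ResultGuid, SomeSetting)
VALUES (NULL, 'generic')

-- Step 2: fill in the placeholder row.
UPDATE dbo.ReportTable
SET ResultGuid = @NewGuid
WHERE ResultGuid IS NULL

COMMIT TRANSACTION -- releases the application lock
GO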
While I consider myself pretty good with SQL 2000, this one has eluded me, and it really does reinforce my awe at the ability of programmers to NOT see the big picture with regard to database design and topology (this, to me, seems like something any rookie SQL DBA would see and flag immediately!). Sorry, frustration leaked out there for a sec.
For informational purposes: this has been happening for a while, but we recently migrated to new SQL hardware, which dramatically increased performance and in turn really ramped up the SQL deadlocks, making them more of an issue than an annoyance.
Any help would be great, even if it's "there is no way without changing the transaction logic itself."
January 13, 2008 at 1:28 pm
When the sp inserted a row in the report table, did it use 'BEGIN TRANSACTION ... COMMIT TRANSACTION'?
That would hold the locks taken by the INSERT until the transaction completes, and the UPDATE would wait until those locks are released.
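Something like this (table and column names made up, just to show the shape):
-- Hypothetical sketch: both statements in one transaction, so the locks taken
-- by the INSERT are held until the UPDATE has also finished.
BEGIN TRANSACTION

INSERT INTO dbo.ReportTable (ResultGuid, SomeSetting)
VALUES (NULL, 'generic')

UPDATE dbo.ReportTable
SET ResultGuid = NEWID() -- the app would supply the real values
WHERE ResultGuid IS NULL

COMMIT TRANSACTION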
January 13, 2008 at 1:51 pm
You need to pinpoint the row(s) for UPDATE.
I guess the conditions in your WHERE clause are such that they cause a table/index scan and lock all scanned records.
1st thing you need to try is to convert scans to seeks. Look at the indexes and at the WHERE clause itself, and try to make it more selective.
2nd thing you can do is to separate the SELECT from the UPDATE. But for this you need to have a Primary Key on the table being updated.
If you mean to update a single row you may do something like this:
DECLARE @pk INT -- use the data type of your PK column

SELECT @pk = [PK Column]
FROM dbo.YourTable
WHERE ...

UPDATE dbo.YourTable
SET ... = ...
WHERE [PK Column] = @pk
This will lock only the single row you mean to update. It's going to be a shared lock applied by the SELECT statement.
If you need to update more than 1 row, use a table variable instead of @pk.
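Rough sketch with a table variable (same placeholders as above; SomeColumn and the SET list are made-up examples of your actual conditions):
-- Collect the target PKs first, then update only those rows by joining on them.
DECLARE @keys TABLE (pk INT PRIMARY KEY) -- use the data type of your PK column

INSERT INTO @keys (pk)
SELECT [PK Column]
FROM dbo.YourTable
WHERE SomeColumn IS NULL -- your original search condition

UPDATE t
SET SomeColumn = 'new value' -- your original SET list
FROM dbo.YourTable t
INNER JOIN @keys k ON k.pk = t.[PK Column]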
_____________
Code for TallyGenerator
January 14, 2008 at 9:17 am
As Sergiy said, indexing could be key here. But you should read and UNDERSTAND all of the material contained in parts 1-3 of this series by Bart Duncan to really grasp how to fix deadlocking: http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx
It is pretty amazing how a single appropriate index can prevent almost exactly the situation you describe. 🙂
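For example, something along these lines might be all it takes, assuming the UPDATE hunts for the NULL placeholder in a column I'll call ResultGuid (a guess at your schema, not a specific recommendation):
-- Hypothetical: index the column the UPDATE searches for NULLs, so it can
-- seek straight to the placeholder row instead of scanning (and locking)
-- the whole table. Names are placeholders, not your actual schema.
CREATE INDEX IX_ReportTable_ResultGuid
ON dbo.ReportTable (ResultGuid)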
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 14, 2008 at 9:21 am
I will read that. Thanks for the info.