February 9, 2006 at 5:29 am
I have a trigger on Table1 that, when an insert happens, updates a
value on Table2.
Connection1 has an open transaction and is doing updates on Table2, and
therefore has it locked.
Connection2 comes along and tries to insert a value into Table1. It does
the insert, locking Table1, and then the trigger waits for the lock on
Table2 to be released so it can complete.
Connection1 then tries to do an insert into Table1. This causes a deadlock,
as Table1 is still locked by Connection2, which is waiting for Connection1
to commit.
Does anyone know a way around this?
My thoughts are that I need to force a lock on Table2 before the insert
happens on Table1. That way Table1 won't be locked until it knows that it
can update Table2.
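To make the scenario concrete, here is roughly the shape of it (table, column and trigger names invented for illustration - the real ones don't matter):

-- Hypothetical schema
CREATE TABLE Table2 (GroupID int, Total int)
CREATE TABLE Table1 (ID int, GroupID int)
GO
-- Trigger on Table1 that touches Table2 on every insert
CREATE TRIGGER trg_Table1_Insert ON Table1
AFTER INSERT
AS
    UPDATE t2
    SET    Total = t2.Total + 1
    FROM   Table2 AS t2
    JOIN   inserted AS i ON i.GroupID = t2.GroupID
GO

-- Connection1:
BEGIN TRAN
UPDATE Table2 SET Total = 0 WHERE GroupID = 1    -- takes locks on Table2
-- ...other work...
INSERT INTO Table1 (ID, GroupID) VALUES (1, 1)   -- blocked: Connection2 holds locks on Table1

-- Connection2, in between Connection1's two statements:
INSERT INTO Table1 (ID, GroupID) VALUES (2, 1)   -- locks Table1; its trigger then waits on Table2 = deadlock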
February 9, 2006 at 5:47 am
The way around a deadlock lies in the design of the flow. Deadlocks happen as a result of a conflicting design (i.e. events do not occur in a consistent pattern or direction).
The way to fix it is to make sure all events have the same 'direction', or order of access - it's not something you should (or even could) code around programmatically.
/Kenneth
February 9, 2006 at 6:05 am
I cannot change the flow. Everything does what it must do in the order it does it.
The trigger ONLY updates a single value in Table2. The insert that happens to cause this update can be run at any time.
Connection1 is a longer-running connection that eventually results in creating a new record in Table2. It is only when this new record is created that it can perform its insert on Table1.
February 9, 2006 at 8:00 am
Still, imo the design of things is flawed, albeit there's perhaps not much you can do about it.
So, what do these two connections actually do? Is there absolutely no way to make them behave consistently?
When a deadlock occurs, you get error 1205 returned and a recommendation to rerun the transaction. Would trapping error 1205 and rerunning be an option for you?
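Something along these lines, perhaps (this uses 2005's TRY...CATCH for brevity - on 2000 error 1205 kills the batch, so the retry loop would have to live in the client application instead):

DECLARE @retries int
SET @retries = 3
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN
        INSERT INTO Table1 (ID, GroupID) VALUES (1, 1)   -- the work that may get picked as deadlock victim
        COMMIT TRAN
        BREAK                                            -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1                  -- deadlock victim: try again
        ELSE
            BREAK                                        -- some other error: give up (and report it)
    END CATCH
END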
/Kenneth
February 9, 2006 at 8:11 am
Not that I know much about deadlocks, but I have two suggestions for workarounds.
1. After Connection1 has updated Table2, commit and then update Table1 in a separate transaction.
2. Let Connection1 insert the values to be written to Table2 into a temp table, instead of updating Table2 directly. Then commit the transaction, start a second transaction, update Table1, update Table2 with the values from the temp table, and commit the second transaction.
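A rough sketch of suggestion 2 (names invented, not tested):

CREATE TABLE #Table2Changes (GroupID int, NewTotal int)

-- First transaction: stage what you WOULD have written to Table2
BEGIN TRAN
INSERT INTO #Table2Changes (GroupID, NewTotal) VALUES (1, 42)
COMMIT TRAN

-- Second transaction: touch Table1 first, then apply the staged Table2 changes
BEGIN TRAN
INSERT INTO Table1 (ID, GroupID) VALUES (1, 1)
UPDATE t2
SET    Total = c.NewTotal
FROM   Table2 AS t2
JOIN   #Table2Changes AS c ON c.GroupID = t2.GroupID
COMMIT TRAN

A nice side effect is that the second transaction now takes its locks in the same order as Connection2 (Table1 first, then Table2), which is the consistent ordering Kenneth described.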
February 9, 2006 at 8:26 am
Kenneth is right: if both connections locked Table1 before Table2, the problem would go away. In fact, there is no other way to truly fix deadlocks.
If you really can't change the order of events then you may have to enforce the above rule by deliberately locking tables.
For example, Connection1 could lock Table1 at the last possible moment BEFORE updating Table2 - e.g. SELECT COUNT(*) FROM Table1 WITH (TABLOCK, HOLDLOCK) - which takes a shared table lock that will keep Connection2 from inserting a row until Connection1 commits.
The price you pay is reduced concurrency, of course, since you are taking out broader locks than strictly necessary.
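In context, Connection1's batch might look something like this (table and column names assumed from the earlier posts):

BEGIN TRAN
-- Take and hold a shared table lock on Table1 BEFORE touching Table2,
-- so the lock order matches Connection2's (Table1 first, then Table2)
SELECT COUNT(*) FROM Table1 WITH (TABLOCK, HOLDLOCK)
UPDATE Table2 SET Total = 0 WHERE GroupID = 1
-- ...whatever else the transaction does...
INSERT INTO Table1 (ID, GroupID) VALUES (1, 1)
COMMIT TRAN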
I might add that preventing deadlocks altogether is virtually impossible in SQL Server 2000 - in 2005, new features such as snapshot isolation should help somewhat.
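The 2005 option I have in mind is, if memory serves, along these lines (check Books Online before relying on this):

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON   -- readers use row versions instead of shared locks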
February 9, 2006 at 8:27 am
Though then the whole point of the transaction would be lost if it's split up...
The same net effect could be had by just omitting the explicit transactions altogether, though I'm guessing they are there for a reason? A deadlock is sort of a catch-22 situation - once you're in it, it can be tricky to get out of it, or handle it gracefully, without some rethinking of how things are done.
/Kenneth
February 9, 2006 at 8:51 am
Ok, I have fixed the problem.
Table2 was a temporary table that was 100% populated and managed by triggers in order to provide fast, data-warehouse-style access to the data.
This meant that even though the flow of the statements themselves might all have been correct, the "unknown" changes they made to the temp table caused the deadlock.
However, the actual problem was a flaw in this temporary table. As it was just temporary, it had no indexes or keys. This meant SQL Server was unable to lock individual rows and would instead lock the entire table.
Simply adding a primary key to the temporary table let SQL Server take row locks and eliminated the deadlock.
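For reference, the fix amounted to a single statement in the code that creates the table (the real column name differs):

ALTER TABLE Table2 ADD CONSTRAINT PK_Table2 PRIMARY KEY (RowID)
-- With a key to seek on, the trigger's UPDATE can lock just the rows it
-- touches instead of scanning (and locking) the whole table.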
February 9, 2006 at 8:59 am
Glad you solved it.
Next step is to figure out how to get rid of the temp table, huh?
I'm guessing it's 'temporary' in a global way, since apparently different connections can access it simultaneously? If it works, it works, though it seems that there may be some risks with it as well...
Are you sure those who put it there knew what they were doing, so that it's 'safe'?
/Kenneth
February 9, 2006 at 9:36 am
I put it there... hehe.
It's not really a temp table; it's a real user table created in the main database, and once created it will stay there for the life of the system. I just kind of call it a temp table as it's dynamically created by the system.
The problem is that the recordset that needs to be returned from the system has a variable number of ntext columns - anywhere from 1 to 100+.
This data is stored in one table, each ntext value in a different row, so returning it in one SELECT requires a join to this table for every column. Hence the speed issues.
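To give an idea of the shape of the query (invented names), it's one join per returned column:

SELECT d1.TextValue AS Col1,
       d2.TextValue AS Col2,
       d3.TextValue AS Col3
       -- ...and so on, one join per column, potentially 100+
FROM   Records AS r
JOIN   TextData AS d1 ON d1.RecordID = r.RecordID AND d1.ColumnID = 1
JOIN   TextData AS d2 ON d2.RecordID = r.RecordID AND d2.ColumnID = 2
JOIN   TextData AS d3 ON d3.RecordID = r.RecordID AND d3.ColumnID = 3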
I have been told by a number of SQL 2005 experts that this issue has been resolved in 2005 with the new xml data type.
Can't wait to migrate it all!
February 10, 2006 at 2:46 am
Maybe get SQL Express and get a head start on finding out how things could be improved?
/Kenneth