January 29, 2016 at 9:14 am
Hi guys,
I've been studying as much as I can about troubleshooting deadlocks, and I need your help figuring this one out, as the only workaround I can think of is activating row versioning, which I was trying to avoid.
Basically I have an SP that joins 2 tables (let's call them A and B), and another process that updates table A and inserts a row into table B.
The issue is that session 1 (read) gets a shared lock on table A while session 2 (write) gets an exclusive lock on table B. After this, session 1 tries to get a shared lock on table B and blocks, and then session 2 tries to get an IX lock on table A and blocks.
How would you guys resolve this situation? Something important here is that table A, used in the select statement, has a text field that cannot be part of an index, so the query always touches the clustered index on the table.
Do I have any alternative without activating row versioning?
Many thanks,
January 29, 2016 at 12:27 pm
Not sure why you would want to avoid row versioning.
Anyway, the other standard techniques for avoiding deadlocks are:
1. If you know or expect that a row you read might get updated later in the same transaction, use WITH (UPDLOCK) to get a "semi-shared" lock - other readers are not blocked, but all writers are, as are all other UPDLOCK requests. This avoids deadlocks on lock conversion.
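A sketch of technique 1, with hypothetical table and column names - the hint goes on the read that precedes the write in the same transaction:

```sql
BEGIN TRANSACTION;

-- UPDLOCK takes an update lock on the rows read, so a concurrent writer
-- (or another UPDLOCK reader) queues here instead of deadlocking later.
SELECT OrderTotal
FROM dbo.Orders WITH (UPDLOCK)
WHERE OrderID = @OrderID;

-- ... business logic ...

UPDATE dbo.Orders
SET OrderTotal = @NewTotal
WHERE OrderID = @OrderID;

COMMIT TRANSACTION;
```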
2. Code your logic such that locks are always taken in the same sequence. So if procedure a first accesses table1 and then table2, ensure that procedure b also starts at table1.
For step 2 it can sometimes be necessary to do a dummy select of the affected rows before running the actual query. If you have no indexes, this can be an issue.
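A sketch of the dummy-select idea, assuming hypothetical names: the writing transaction touches table A first, mirroring the order the reading procedure uses, so both sessions always acquire locks A-then-B:

```sql
BEGIN TRANSACTION;

-- Dummy select: lock the table A row before touching table B, so this
-- transaction queues behind any reader of A instead of deadlocking on B.
SELECT 1
FROM dbo.TableA WITH (UPDLOCK)
WHERE KeyCol = @Key;

INSERT INTO dbo.TableB (KeyCol, Payload)
VALUES (@Key, @Payload);

UPDATE dbo.TableA
SET Status = @NewStatus
WHERE KeyCol = @Key;

COMMIT TRANSACTION;
```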
Also, why are you still using text? This data type was deprecated back in, oh, I think 1995. You should use varchar(max) instead.
January 29, 2016 at 12:40 pm
kudz (1/29/2016)
How would you guys resolve this situation?
Probably by using one of the row-versioning isolation levels, especially if I don't have code to work through.
Do I have any alternative without activating row versioning ?
Several, but what's wrong with the easy option?
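For reference, the "easy option" is a one-time database setting (the database name here is a placeholder). READ_COMMITTED_SNAPSHOT makes readers use row versions stored in tempdb instead of taking shared locks:

```sql
-- Readers no longer take shared locks, so the reader/writer deadlock
-- pattern disappears. Needs exclusive access to the DB to switch on.
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON;

-- Alternatively, allow SNAPSHOT isolation and opt in per session:
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in the reading session:
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```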
If the TEXT column can be converted to VARCHAR(MAX), it can be part of the include columns of an index, which may be another solution.
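A sketch of that conversion, with hypothetical table and column names:

```sql
-- Convert the deprecated text column to varchar(max).
ALTER TABLE dbo.TableA
ALTER COLUMN Notes varchar(max);

-- varchar(max) can't be an index key, but it can ride along as an
-- included column, so the select no longer has to hit the clustered index.
CREATE NONCLUSTERED INDEX IX_TableA_KeyCol
ON dbo.TableA (KeyCol)
INCLUDE (Notes);
```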
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 29, 2016 at 1:02 pm
Thank you both for your quick replies.
The main idea behind avoiding row versioning was more the challenge of trying to fix it without the easy way out of using tempdb and having all my problems fixed magically! That's probably what I'm going to end up doing, but I thought it was worth a try.
Both suggestions given by Hugo do not work, as the transaction is just a select statement joining the 2 tables that are being updated/inserted by the other transaction.
Session 1
Select from table B (wait)
Join table A (granted)
Session 2
Update table B (granted)
Insert into table A (wait)
There is no way to swap the order, as the first session is just a select statement and SQL chooses the order in which it accesses the tables, right?
I thought of swapping the text field for varchar(max), but it won't work, as I have rows with more than 4000 chars, and that would be the same problem since I can't index rows with more than 4K chars.
Thanks for the help
January 30, 2016 at 1:51 am
kudz (1/29/2016)
I thought of swapping the text field for varchar(max), but it won't work, as I have rows with more than 4000 chars, and that would be the same problem since I can't index rows with more than 4K chars.
Um....
Firstly, varchar(max) is the replacement for TEXT (which has been deprecated, please remove, do not use any longer, for 10 years now). It doesn't have a 4000 character limit, that's nvarchar(4000). It has a 2 billion character limit same as text. And it absolutely can be included in an index.
It can't be in an index key (the limit for a key is 900 bytes), but it can be in the include columns.
Also, there's a time and a place for challenges (dev servers, blogs, articles, studying), and a time for taking the most effective solution. When there's a problem in production, it's the latter.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 30, 2016 at 1:57 am
GilaMonster (1/30/2016)
kudz (1/29/2016)
I thought of swapping the text field for varchar(max), but it won't work, as I have rows with more than 4000 chars, and that would be the same problem since I can't index rows with more than 4K chars.
Um....
Firstly, varchar(max) is the replacement for TEXT (which has been deprecated, please remove, do not use any longer, for 10 years now). It doesn't have a 4000 character limit, that's nvarchar(4000). It has a 2 billion character limit same as text. And it absolutely can be included in an index.
It can't be in an index key (the limit for a key is 900 bytes), but it can be in the include columns.
Also, there's a time and a place for challenges (dev servers, blogs, articles, studying), and a time for taking the most effective solution. When there's a problem in production, it's the latter.
Thanks for the clarification. I really thought it was impossible to have columns bigger than 4000 chars in an index (index key or include columns), and therefore I think that's the best approach.
Thanks again 😉
January 30, 2016 at 4:00 am
kudz (1/29/2016)
There is no way to swap the order, as the first session is just a select statement and SQL chooses the order in which it accesses the tables, right?
I think you missed one of the things I wrote in my earlier reply:
"For step 2 it can sometimes be needed to do a dummy select of affected rows before running the actual query. If you have no indexes, then this can be an issue."