January 14, 2009 at 10:51 pm
My database is getting deadlocked very frequently. Please give me some information; it's a very urgent issue.
January 15, 2009 at 12:02 am
Switch trace flag 1222 on. That will result in a deadlock graph being written to the error log every time a deadlock occurs. Post the result of that graph here.
DBCC TRACEON(1222,-1)
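To confirm the flag is active and then retrieve the graph once a deadlock has occurred, a sketch (sp_readerrorlog is undocumented; the Management Studio Log File Viewer works too):

```sql
-- Enable trace flag 1222 globally (-1) so deadlock graphs go to the error log
DBCC TRACEON(1222, -1);

-- Confirm the flag is now active
DBCC TRACESTATUS(1222);

-- After the next deadlock, search the current error log for the graph
EXEC sp_readerrorlog 0, 1, 'deadlock';
```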
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 15, 2009 at 1:00 am
Hi Gail,
I switched trace flag 1222 on, but I'm not able to generate any graphs. In fact, I'm not able to reproduce the deadlock issue; it happens while the testing team is testing with 100 concurrent users. I am able to identify the SQL statements that are causing the deadlock.
But now I'm stuck. What should I do?
January 15, 2009 at 3:16 am
To offer any real help, I need to see the deadlock graph.
Without that, I can give you generic advice - make sure all your queries are written as optimally as possible. Make sure your indexes support the workload. Make sure that you access objects in the same order everywhere. (Don't update tbl1 first and tbl2 second in one proc, and the other way around in another proc.)
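As a sketch of the "same order everywhere" rule (table and procedure names here are made up for illustration):

```sql
-- Deadlock-prone pattern: one proc updates tbl1 then tbl2, another updates
-- tbl2 then tbl1. Run concurrently, each can hold a lock the other needs.
-- Safer pattern: every procedure touches tbl1 first, then tbl2.
CREATE PROCEDURE dbo.ProcA AS
BEGIN
    BEGIN TRAN;
        UPDATE tbl1 SET col1 = col1 + 1 WHERE id = 1;
        UPDATE tbl2 SET col1 = col1 + 1 WHERE id = 1;
    COMMIT;
END;
GO
CREATE PROCEDURE dbo.ProcB AS
BEGIN
    BEGIN TRAN;
        -- same order as ProcA: tbl1 before tbl2
        UPDATE tbl1 SET col2 = GETDATE() WHERE id = 2;
        UPDATE tbl2 SET col2 = GETDATE() WHERE id = 2;
    COMMIT;
END;
GO
```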
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 17, 2009 at 1:56 pm
Hi,
Sorry for the late reply. I don't know if this helps you, but look at the logs produced by your application, and also the SQL Server logs. If you post the logs showing what happened in the chain, you should be able to find the object that caused the issue.
Do you use NOLOCK when you retrieve data in your queries?
Look at issuing NOLOCK on the tables when you create the stored procs; this should help in getting the locks down.
Go with your intuition on what the 100 users might be using in common, and think about where the potential issue could be.
Regards
Vinay
January 17, 2009 at 2:12 pm
bhushanvinay (1/17/2009)
Look at issuing NOLOCK on the tables when you create the stored procs; this should help in getting the locks down.
Provided you don't mind the chance of incorrect data from time to time. NOLOCK essentially says to SQL, 'I don't mind if my data is occasionally incorrect, just get it for me.'
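For clarity, the hint being cautioned about looks like this (table and column names hypothetical); it reads at the equivalent of READ UNCOMMITTED:

```sql
-- Reads uncommitted ("dirty") data: rows may later be rolled back,
-- and scans can miss or double-count rows during page splits.
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK);

-- Equivalent session-level setting
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```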
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 19, 2009 at 3:49 am
Hi,
We have identified the objects where the deadlock is happening. It occurs while deleting from one table and simultaneously updating and inserting. We have set the SERIALIZABLE isolation level, but with no effect.
Please tell me, should we use SNAPSHOT isolation? Will it take much more memory for 100 or even 400 concurrent users?
January 19, 2009 at 4:21 am
arup_kc (1/19/2009)
We have identified the objects where the deadlock is happening. It occurs while deleting from one table and simultaneously updating and inserting.
Please post the code involved, the table definitions and, if possible, the deadlock graph. If you want any useful recommendations, that is.
We have set the SERIALIZABLE isolation level, but with no effect.
Serialisable makes deadlocks worse, much worse. It certainly won't fix a deadlock. The fix for deadlocks is to take locks at the smallest granularity possible and hold them for the shortest time. Serialisable takes lots of locks (to prevent inserts within the range) and holds them for the entire duration of the transaction.
Please tell me, should we use SNAPSHOT isolation? Will it take much more memory for 100 or even 400 concurrent users?
Snapshot doesn't use more memory. It uses more tempdb space, as that's where the row version store is. Test it out with your users and your load, as it's not possible to guess the impact. It depends on what you're doing in your app.
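For reference, snapshot isolation is enabled per database and then requested per session; a sketch (the database and table names are placeholders):

```sql
-- Enable the database option (row versions are kept in tempdb)
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session must then opt in explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT OrderID, Status
    FROM dbo.Orders;   -- reads a consistent snapshot, no shared locks
COMMIT;

-- Alternative: make READ COMMITTED use row versioning for all sessions
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
```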
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 19, 2009 at 6:13 am
Not sure if this is of any use, but in addition to the NOLOCKs, also check whether you are using BEGIN and COMMIT TRAN around several of these delete/insert/update sequences.
Since you know where this deadlock is occurring, are the tables performing these common deletes/inserts/updates optimised in terms of indexing and code? If you have 100 users potentially doing this same sequence at any point in time, you need to ensure the code and the indexes/tables are optimised to process the transactions as quickly as possible.
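A sketch of keeping the transaction window small (object names hypothetical): do lookups before BEGIN TRAN, and commit as soon as the writes are done:

```sql
-- Prepare values outside the transaction so locks are held only briefly
DECLARE @TargetId int = (SELECT MAX(id) FROM dbo.Staging);

BEGIN TRAN;
    DELETE FROM dbo.Detail WHERE HeaderId = @TargetId;
    UPDATE dbo.Header SET Processed = 1 WHERE id = @TargetId;
COMMIT;  -- commit immediately; no lookups or user interaction inside the transaction
```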
January 19, 2009 at 6:25 am
Try to fine-tune the code and choose proper indexes.
Think of enabling row versioning (please monitor tempdb growth if you choose to do so).
January 19, 2009 at 6:39 am
GilaMonster (1/15/2009)
To offer any real help, I need to see the deadlock graph. Without that, I can give you generic advice - make sure all your queries are written as optimally as possible. Make sure your indexes support the workload. Make sure that you access objects in the same order everywhere. (Don't update tbl1 first and tbl2 second in one proc, and the other way around in another proc.)
That is the best advice to look at.
You can look at BOL for sp_getapplock. This will help if the deadlock is happening between two instances of the same stored proc.
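A minimal sketch of that suggestion (the resource name is arbitrary): sp_getapplock serialises instances of the same procedure so they queue instead of deadlocking each other:

```sql
BEGIN TRAN;
    -- Only one session at a time may hold this named application lock;
    -- others wait (up to the timeout) rather than deadlocking.
    DECLARE @rc int;
    EXEC @rc = sp_getapplock
         @Resource    = 'MyProc_CriticalSection',
         @LockMode    = 'Exclusive',
         @LockOwner   = 'Transaction',
         @LockTimeout = 10000;  -- milliseconds
    IF @rc < 0
    BEGIN
        ROLLBACK;  -- could not acquire the lock in time
        RETURN;
    END;

    -- ... the delete/update/insert work goes here ...

    -- The lock is released automatically when the transaction ends
COMMIT;
```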
-Roy
January 19, 2009 at 1:15 pm
What is the code you are looking at?
I.e., what kind of updates or inserts are happening, and what is the sequence in which they happen?
Please provide the information as descriptively as possible, including how your architecture fits into it; possibly somebody will provide a good pattern for you.
I know for a fact, having worked with poorly designed reconciliation data models, how dangerous deadlocks can be.
If you are in the early stages, this might help a lot.
Regards
Vinay
January 20, 2009 at 12:12 am
Thanks to everybody. Currently we are testing with SNAPSHOT isolation.
I'll post the details, i.e. the code and table structure, after this testing.
January 20, 2009 at 6:15 am
Please be careful when you are using snapshot isolation. It uses quite a bit of tempdb, IO will be higher, and performance will go down a bit.
I wrote an article about that a long time back. Check it out if you want.
http://www.sqlservercentral.com/articles/SQL+Server+2005/62464/
-Roy
January 20, 2009 at 11:28 pm
Gail is right. NOLOCK will read uncommitted transactions, and the deadlock graph will show the complete details.
kshitij kumar
kshitij@krayknot.com
www.krayknot.com