June 19, 2006 at 1:10 pm
Hello,
Dreading the deadlock - DreadLocks
You may have seen my post from June 1:
http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=8&messageid=284340
I've been investigating the problem in as much depth as I can, and have come to these conclusions:
I can re-create the deadlock here, by:
You may not think this is applicable, and I certainly can't prove that it is related, but I'll mention it here: both the service and the GUI connect to the SQL 2000 database through DB-Library (DBLIB), not through ADO. Both applications are developed in Delphi, using the SQL Direct components. We all know that DBLIB is deprecated; here is a link to show that: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dblibc/dbc_pdc00b_589v.asp
I have created a test. I have created a simple table:
Create Table IndexTest (
C1 Int Not Null
, C2 Varchar(100) Not Null
)
I have populated this table with 10 million rows.
I have purposefully not indexed either column.
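In case anyone wants to reproduce the test, a doubling loop along these lines is enough to populate it (just a sketch - the values themselves don't matter, since nothing is indexed or unique):
-- Seed one row, then keep doubling the table until it holds at least 10 million rows.
-- The C1/C2 values repeat, which is fine here because nothing is unique.
SET NOCOUNT ON
INSERT INTO IndexTest (C1, C2) VALUES (1, 'seed row')
WHILE (SELECT COUNT(*) FROM IndexTest) < 10000000
BEGIN
    INSERT INTO IndexTest (C1, C2)
    SELECT C1, C2 FROM IndexTest
END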
I thought I would be able to create a deadlock in this table by:
So far, I haven't been able to create a deadlock using the above two steps!
My understanding of a deadlock is that it can only occur when both threads are issuing updates. This is obviously incorrect, as the service/GUI scenario proves.
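About the closest I can construct on paper is a deliberate lock-order reversal in which the reader keeps hold of its shared locks - something like the untested sketch below, one half per Query Analyzer window, against the IndexTest table above (and assuming an index on C1 so each statement touches a single row). I still can't see where the GUI would be holding shared locks like this, hence the questions below.
-- Connection 1 (the "writer"): updates row 1, then tries to update row 2
BEGIN TRAN
    UPDATE IndexTest SET C2 = 'writer' WHERE C1 = 1
    WAITFOR DELAY '00:00:10'    -- give the reader time to take its shared lock on row 2
    UPDATE IndexTest SET C2 = 'writer' WHERE C1 = 2
COMMIT
-- Connection 2 (pure reader): reads row 2, then row 1. Under REPEATABLE READ the
-- shared locks are held until the transaction ends, so the second SELECT blocks on
-- the writer's exclusive lock while the writer blocks on this connection's shared
-- lock: a deadlock, even though this session only ever issues selects.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
    SELECT C2 FROM IndexTest WHERE C1 = 2
    WAITFOR DELAY '00:00:05'
    SELECT C2 FROM IndexTest WHERE C1 = 1
COMMIT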
My three questions are:
- What scenarios can lead to a deadlock?
- Why can a deadlock occur if one thread is issuing 'select/insert/updates', and another thread is issuing only selects?
- Can DBLib be a factor in concurrency issues? Can it be shown that if the code were re-written to use ADO, locking/contention would be handled at a finer (better) level of granularity?
Thank you for any pointers,
Richard
June 19, 2006 at 1:25 pm
Here's an article on deadlocks for you:
http://msdn2.microsoft.com/en-us/ms191242.aspx
You should also check out sql-server-performance.com
No direct information on converting from DBLIB to ADO, but to me that would be a no-brainer (I've been coding in Delphi since version 3...)
You say "A GUI application is issuing select statements against the table. The conditions in the where clause are dependant on what search criterion the user enters. Most of the time the where clause uses all three indexed columns." - so is that using dynaimc SQL? If so, that's a sure warning flag.
Definitely try to get rid of the UPPER and the LIKE if you can. Wrapping a column in a function like that can prevent the index from being used at all.
All SQL statements access data. If two (or more) statements (of whatever type) are trying to access the same data at the same time, you get locking contention. There are several ways to avoid this:
1) Proper indexing and SQL statements (see above comments).
2) Do updates in smaller batches.
3) If you don't mind a dirty read for the select, use the WITH (NOLOCK) hint. (There's a quick sketch of 2) and 3) below.)
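To make 2) and 3) concrete, here's a rough sketch against the IndexTest table from your post (the batch size and values are arbitrary):
-- 2) Update in small batches so each batch holds its locks only briefly
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    UPDATE IndexTest SET C2 = 'processed' WHERE C2 <> 'processed'
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0
-- 3) Dirty read for a select that can tolerate in-flight data
SELECT C1, C2 FROM IndexTest WITH (NOLOCK) WHERE C1 = 42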
June 19, 2006 at 1:45 pm
>>Each index is on one column only, and none of them are unique or clustered.
Without a clustered index, you likely have a large, fragmented heap: more disk I/O to fetch the data, and therefore a longer window in which a deadlock can arise.
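A quick way to see how bad it is (SQL 2000 syntax - put your own table name in):
-- Low scan density and high extent fragmentation point to a badly fragmented heap
DBCC SHOWCONTIG ('PersonInterface') WITH ALL_INDEXES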
Are there any triggers on the table ?
June 19, 2006 at 2:01 pm
Hi PW,
I am leaning toward creating the unique clustered index on PatientId. The only thing that makes me hesitate is that I understand hotspots can be created once you have a clustered index. I've never seen this in practice, though - primary key indexes are clustered by default, and I've never seen a "hotspot" like this.
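If I do go that way, the change itself is a one-liner (assuming PatientId really is unique, otherwise the build will fail):
-- Rebuilds the heap as a clustered index ordered on PatientId; the existing
-- nonclustered indexes get rebuilt automatically to reference the new clustering key
CREATE UNIQUE CLUSTERED INDEX IX_PersonInterface_PatientId
    ON PersonInterface (PatientId)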
I checked the table for triggers (EXEC sp_helptrigger PersonInterface); there aren't any.
Thanks,
Richard
June 19, 2006 at 2:08 pm
Hi Pam,
Thank you for your reply.
Yes, it is dynamic SQL, but only the where clause is affected.
UPPER and LIKE are eliminating the use of the indexes - the select runs no slower when I remove the indexes altogether.
Thank you for the link and your help.
Richard
June 20, 2006 at 5:50 am
- check the datatypes of the parameters provided by your application.
You'll be able to avoid index scans if you use the correct datatype (as defined for the column of your table).
- avoid functions (UPPER/LOWER, ...) on columns in where clauses, because they will prevent index usage for that predicate and make it non-sargable.
- a LIKE predicate is sargable if it does not start with a wildcard ( _ or % ) - see the sketch below.
- you may be better off using a stored proc to build your search conditions and run the queries.
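A quick illustration against the IndexTest table from the first post (pretend there is a nonclustered index on C2):
-- Non-sargable: the function on the column plus the leading wildcard force a full scan
SELECT C1 FROM IndexTest WHERE UPPER(C2) LIKE '%SMITH%'
-- Sargable: the column is left bare and the pattern has no leading wildcard,
-- so an index on C2 can be used for a seek
-- (with a case-insensitive collation the UPPER() was redundant anyway)
SELECT C1 FROM IndexTest WHERE C2 LIKE 'Smith%'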
Johan
June 20, 2006 at 8:30 am
I came across this, which describes the pain of being a victim of a deadlock: The DBA's Demise, by Phil Factor, a tribute to 'The Streets of Laredo'. On a more serious note, I'd be interested to know whether adding NOLOCK hints where possible makes a difference. The other thought is that it might be better to SET DEADLOCK_PRIORITY LOW in the less time-critical routines and put a handler into the routine that calls the stored procedures, so that it handles deadlocks by waiting and re-trying. Ron Soukup's book discusses this in some detail and the advice has been repeated elsewhere. Here is a pretty good summary
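The priority part is a one-line declaration at the top of the less critical routine; the retry itself has to live in the caller, because error 1205 rolls the transaction back and aborts the batch. A hypothetical sketch (the proc name, parameter and query are invented):
-- This reporting proc volunteers to be the deadlock victim rather than the
-- interface service. The Delphi caller should trap error 1205, wait briefly,
-- and re-issue the call.
CREATE PROCEDURE usp_PatientSearchReport
    @PatientId INT
AS
    SET DEADLOCK_PRIORITY LOW
    SELECT * FROM PersonInterface WHERE PatientId = @PatientId
GO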
June 20, 2006 at 9:14 am
I am shocked that so many people talk casually of using NOLOCK in a production system that isn't read only! As if locks were an unnecessary annoyance that can simply be dispensed with. Perhaps locks are a bit like legal 'technicalities'?
Tim Wilkinson
"If it doesn't work in practice, you're using the wrong theory"
- Immanuel Kant
June 20, 2006 at 9:34 am
Hi,
I manage a Borland Delphi application running on SQL Server 2000, using dbExpress to connect to the database. We had really bad contention between the reporting and the updating sides of the application, although not usually deadlocks: updates would not process until the reports had been closed. I managed to resolve the problem with a combination of the (NOLOCK) locking hint and loading report data into a temporary table (specific to the session) before displaying the report. I also make sure I use IF @@TRANCOUNT > 0 COMMIT liberally.
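Roughly what the report load looks like now (the table and column names here are invented - ours are application-specific):
-- Copy the rows the report needs into a session-private temp table in one quick
-- pass, then run the report off the copy so the base table isn't held open
SELECT OrderId, CustomerName, OrderTotal
INTO   #ReportData
FROM   Orders WITH (NOLOCK)
WHERE  OrderDate >= '20060101'
SELECT * FROM #ReportData ORDER BY CustomerName
DROP TABLE #ReportData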
The other problem I had is the way transactions are nested in SQL Server. We never had much of a problem on the Oracle-based versions of the product, because there, when you have multiple levels of transaction nesting, issuing a COMMIT commits all changes at that nesting level and releases the locks held at that level.
SQL Server 2000 only releases the locks when the nesting level returns to zero, i.e. when every BEGIN TRAN has received its matching COMMIT:
ORACLE
BEGIN TRANS 1
  BEGIN TRANS 2
    ...SQL INSERTS/UPDATES
  COMMIT -- commits the above inserts/updates and releases their locks
  ...SQL INSERTS/UPDATES
COMMIT -- commits the above inserts/updates and releases their locks
SQL SERVER 2000
BEGIN TRAN -- nesting level 1
  BEGIN TRAN -- nesting level 2
    ...SQL INSERTS/UPDATES
  COMMIT -- nesting level back to 1: no locks released
  ...SQL INSERTS/UPDATES
COMMIT -- nesting level back to 0: all locks released
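You can watch it happen from Query Analyzer (this uses the IndexTest table from earlier in the thread, but any table will do):
BEGIN TRAN
SELECT @@TRANCOUNT AS NestingLevel          -- 1
    BEGIN TRAN
    SELECT @@TRANCOUNT AS NestingLevel      -- 2
    UPDATE IndexTest SET C2 = C2 WHERE C1 = 1
    COMMIT                                  -- back to 1, but the locks are still held
SELECT @@TRANCOUNT AS NestingLevel          -- 1
COMMIT                                      -- 0, and only now are the locks released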
David
If it ain't broke, don't fix it...
June 20, 2006 at 9:37 am
Hi stax68,
I agree with you completely - that approach seems to me like sweeping dirt under the carpet, and I would only use it if no other (more thorough) solution were feasible, perhaps if time were not on my side.
The other trouble with that approach is that it only works for MSSQL, and our company's products support other databases as well.
Andrew - thanks a lot for those excellent links! I've added them to my report and recommendations!
Thanks to everyone!
Richard
June 20, 2006 at 9:46 am
Hi David,
Thank you for your reply - it certainly is informative.
I'm recommending some changes be made to the service:
Thanks David,
Richard
June 20, 2006 at 11:55 pm
Also enable trace flags 1204 and 3605 for all sessions (and add -T1204 and -T3605 to the startup parameters of your SQL Server so they survive a restart).
They write deadlock information into your SQL Server error log.
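To switch them on for the running instance without a restart:
-- Enable the deadlock trace flags for all sessions, then confirm they are active
DBCC TRACEON (1204, 3605, -1)
DBCC TRACESTATUS (-1)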
- Setting DEADLOCK_PRIORITY to LOW only gives you some control over which session will be the preferred deadlock victim! It will not avoid the deadlock itself.
Johan
June 21, 2006 at 10:17 am
I agree completely with alzdba. My suggestion regarding DEADLOCK_PRIORITY was meant to be that you choose which processes are good candidates to be a deadlock victim and then design them to react appropriately to the error by waiting and re-trying. In other words, you learn to live with the possibility of deadlocks in a complex system if it is difficult to avoid them (I find Ron Soukup's book very helpful for explaining this). I tried to do a couple of posts yesterday to clarify this, but the forum software decided to eat what I'd written instead of adding it to the thread.
Someone else misunderstood what I was implying by suggesting the use of NOLOCK hints. There are certainly processes that do not need to apply locks and it can be perfectly legitimate to override the default locking strategy, even in a production system, where it is not appropriate to your needs. In fact, in this case, I was wondering what effect they would have merely for diagnostic reasons. I wasn't suggesting that you go around applying them randomly like some sort of condiment. Thankfully, I've never hit a deadlocking problem that couldn't be resolved by studying the locks that are being used by the various processes when the deadlock happens and checking that they are all appropriate, and hinting where necessary.
June 21, 2006 at 2:10 pm
I still say that unless the relevant part of your system - logical, physical and temporal - is read-only, reads using NOLOCK will sometimes be plain wrong, except by lucky accident. This is true even under a very weak (and therefore uncontroversial) definition of 'wrong' results: they do not reflect any state the database has, logically speaking, ever been in. I don't like wrong results even when they 'won't be significant' (oh yeah? Says who? How wrong can they get and still not be significant?)
I've just reread this and would like to apologise for the rather belligerent tone. Sorry! but I stand by the underlying point...
I also agree with all the other points in the above post and didn't mean to accuse anyone of being an autocondimenter!
Tim Wilkinson
"If it doesn't work in practice, you're using the wrong theory"
- Immanuel Kant
June 21, 2006 at 2:40 pm
Hi Stax68,
I relish your condiments with zeal!
I quite like mustard, the hotter the better.
Ketchup - good for lycopene, but also high in glucose/fructose.
Mayonnaise - only for older bread.
If anyone is belligerent, it's me - pass the hot mustard please!
Richard