Catching the source of a server hang/deadlock

  • Just thought I'd throw this out on the forum in case anyone has bumped into this before.

    I'm on contract with a small company whose web app handles commercial sales, and I've been trying to trap the issues that cause the site to hang; the hangs seem to be resolved by bouncing the SQL Server service. I mostly do T-SQL development work, so serving as the server DBA and sysadmin is giving me some new experiences.

    After attending to some critical deficiencies last week, I was hoping this problem would 'go away', but it recurred this morning. I didn't have a Profiler trace running at the time, but the web app runs with 'trace' on and writes to a table in the database. The error attributed to the database related to a thread being aborted. My guess is that the application was blocking itself, because there were two errors, recorded less than a second apart, both trying to write to the same table. The most pertinent part of the error seems to be the following lines:

    A database error occurred while executing a transaction.

    System.Threading.ThreadAbortException: Thread was being aborted.

    at DataSet.OrderItemDataTable.UpdateAndCreateOrderItemCategories() in E:\TFS\Merge\Release.2.9\Data\Extensions\OrderItem.cs:line 15

    at Transaction.ExecuteTransaction(AcgDataSet dataSet, TransactionBlock logic) in E:\TFS\Merge\Release.2.9\Business\Transaction.cs:line 56

    It seems likely that the OrderItem table is a prime spot for deadlocks--which is what I'm guessing this is. So I've started a trace for deadlocks, but since this is the first time this has happened in almost a week, it seems I might be fishing for a while--and I'm not sure I'm fishing with the right bait--if you follow my drift.
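
    In case it helps anyone reading later: since I can't count on having a Profiler trace running at the right moment, I've also been checking whether the default system_health Extended Events session already caught a deadlock graph. This assumes SQL Server 2008 or later; something like:

    -- Pull any deadlock graphs already captured by the default
    -- system_health Extended Events session (SQL Server 2008+).
    SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
    FROM ( SELECT CAST(st.target_data AS xml) AS TargetData
           FROM sys.dm_xe_session_targets st
           JOIN sys.dm_xe_sessions s
             ON s.address = st.event_session_address
           WHERE s.name = N'system_health'
             AND st.target_name = N'ring_buffer'
         ) AS src
    CROSS APPLY TargetData.nodes
        ('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);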

    There also seems to be a lot of info for .NET developers about how to properly roll back aborted threads in order to prevent the application from leaving a connection open with a transaction pending--which is also a likely source of blocking. So I'm also looking through the trace log to see if there is any consistent entry prior to this type of crash. So far it seems there are also moments with very high request timeout counts, on the order of one per second, and whatever happened this morning caused a slew of request timeouts as well.
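
    When it hangs again, before bouncing the service I'm planning to check for an orphaned connection holding an open transaction, with something along these lines (the DMVs assume SQL Server 2005 or later):

    -- Sessions that are blocked right now, and who is blocking them
    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
    FROM sys.dm_exec_requests r
    WHERE r.blocking_session_id <> 0;

    -- Oldest active transaction in the current database (an aborted thread
    -- that never rolled back would show up here)
    DBCC OPENTRAN;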

    Other advice? What would you look for? I'm feeling like I'm groping in the dark here so any suggestions would be appreciated.

    Thanks!

  • You can refer to http://www.sqlservercentral.com/Forums/Topic845879-338-1.aspx for deadlocks; the approach depends on which version of SQL Server you are using.

  • I agree... it sounds like it very well could be deadlocks. I'd make sure to enable the correct trace flags so that the SQL Server error logs will capture detailed information about each and every deadlock.
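
    For example, on SQL Server 2005 and later, trace flag 1222 writes a detailed deadlock report to the error log (1204 is the older, terser format):

    -- Capture detailed deadlock information in the SQL Server error log.
    -- The -1 makes the trace flag global rather than session-scoped.
    DBCC TRACEON (1222, -1);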

    "Hangs" could be a different story. It could be "parameter sniffing" problems which seem, IMHO, to abound when ORMs have been used to build an interface.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
