Deadlocks during concurrent delete operation

  • In our environment, all of our business logic is written in stored procedures. For larger transactions that integrate multiple modules, we populate the required data into a permanent table that acts as a temporary work table, with a GUID column as the clustered index. That table is then used by multiple stored procedures within the transaction scope, and finally we delete the rows from it using the GUID as the reference (still inside the transaction scope). Under concurrent usage, when the volume of data in this table is high, we frequently hit deadlocks, mostly during the delete operation.
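
    In outline, the pattern looks like this (table and column names below are made up for illustration):

        -- Hypothetical work table; the real tables carry more business columns.
        CREATE TABLE dbo.WorkTable
        (
            BatchGuid uniqueidentifier NOT NULL,  -- one GUID per transaction
            Payload   nvarchar(200)    NULL       -- stand-in for the real columns
        );
        CREATE CLUSTERED INDEX CIX_WorkTable ON dbo.WorkTable (BatchGuid);

        -- Several procs in the transaction read and update rows by the GUID:
        DECLARE @BatchGuid uniqueidentifier = NEWID();
        -- ... INSERT / SELECT / UPDATE ... WHERE BatchGuid = @BatchGuid ...

        -- Finally, still inside the transaction scope:
        DELETE FROM dbo.WorkTable
        WHERE BatchGuid = @BatchGuid;  -- this is where the deadlocks occur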

    How can we overcome this situation? Any suggestions would be appreciated.

  • A man goes to the doctor and says, "Doc, it hurts when I do this."  The doctor's response: "Don't do that."  What about not deleting the data within the transaction?
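
    For example (a sketch, reusing the hypothetical dbo.WorkTable from the first post): commit the business work first, then clean up outside the transaction, ideally in small batches so the locks stay short-lived:

        DECLARE @BatchGuid uniqueidentifier = NEWID();

        BEGIN TRANSACTION;
            -- ... the business work that populates and uses dbo.WorkTable ...
        COMMIT TRANSACTION;  -- finish and release the transaction's locks first

        -- Hypothetical cleanup after the commit: deleting in small batches keeps
        -- each statement's lock footprint short and reduces deadlock exposure.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (5000) FROM dbo.WorkTable
            WHERE BatchGuid = @BatchGuid;

            IF @@ROWCOUNT = 0 BREAK;
        END;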


    No one shall expel us from the paradise that Cantor has created for us.

  • But my question is: is there a better approach to avoid the deadlocks? Is there a better choice of clustered index than the GUID? Is it a good idea to have the clustered index on a GUID column at all?

    If we drop the delete statement, the volume of data in the processing table will grow throughout the day. Will that cause performance issues in the transactions?

  • It's too broad a question because everything depends on the totality of your situation. In general, IMO, clustering on a GUID (which is intentionally random) is likely to be counterproductive. However, it's not immediately apparent that there will be a meaningfully better alternative. Maybe post some code.

    No one shall expel us from the paradise that Cantor has created for us.

  • llearnsql wrote:

    But my question is: is there a better approach to avoid the deadlocks?

    You bet there is. But we can't help, because we don't know what criteria you're using to find the rows to delete. Please post the CREATE TABLE statement, including any and all indexes/defaults, AND the criteria that controls what is selected to be deleted.


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • And if all this is within a transaction scope, why not use a temp table? If it is created in the outermost stored procedure, it is available to all stored procs executed within that scope, and since the table is isolated from all other sessions and dropped automatically at the end, you don't even have to worry about deleting the records.
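
    A sketch of the idea (procedure and column names are hypothetical):

        -- Hypothetical outer procedure: creates the temp table, calls inner procs.
        CREATE PROCEDURE dbo.OuterProc
        AS
        BEGIN
            CREATE TABLE #Work
            (
                RowId   int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
                Payload nvarchar(200) NULL
            );

            INSERT INTO #Work (Payload) VALUES (N'example row');

            EXEC dbo.InnerProc;  -- #Work is visible inside InnerProc

            -- #Work is dropped automatically when this procedure ends; no DELETE
            -- is needed, and other sessions never see this table's rows.
        END;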

  • In general, assume I'm running a search transaction where the system fetches data based on multiple filter criteria. Our core design is to insert all records matching the key filter criteria into the processing table against a GUID; those rows are then processed for additional, non-critical filters. Finally, once the transaction is complete, all the data for that GUID is deleted.

    Note that the GUID is the clustered index on the processing table, and in general we don't have any other indexes on it.

    On the suggestion to create the temporary tables in tempdb (as #tables), I don't think that would work out. This problem is not limited to one screen: we have 300+ active screens, and most modules/screens follow the approach described above (the only difference being that each screen has its own processing table with a GUID clustered index, and the tables are not shared between screens).

    If we move to tempdb, its usage would become very high. Would that be advisable?

  • There is nothing specific: the table is created with the required columns plus a GUID column, and that GUID column is the clustered index. At the end of the transaction, all the data for the GUID is deleted from the table.

    Inside the transaction scope, multiple stored procedures might be called that use the data (selects and updates happen via the GUID column).

  • If I've understood correctly, you write the results of a query to a work table and those results are tagged with a GUID. Other screens then work on this data (reading, amending, etc.), it is then persisted to the main table(s) in your database, and the rows for that GUID are deleted from the work table. If that's the case, you could try creating a unique table for each block of work (maybe use the GUID as the table name). This would stop concurrent users from hitting the same table, and you could drop the table at the end instead of deleting rows.
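
    A rough sketch of that with dynamic SQL (all names are invented for illustration):

        -- Hypothetical: one work table per block of work, named after the GUID.
        DECLARE @BatchGuid uniqueidentifier = NEWID();
        DECLARE @TableName sysname =
            N'Work_' + REPLACE(CONVERT(char(36), @BatchGuid), '-', '');

        DECLARE @sql nvarchar(max) =
            N'CREATE TABLE dbo.' + QUOTENAME(@TableName) + N'
              (
                  RowId   int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
                  Payload nvarchar(200) NULL
              );';
        EXEC sys.sp_executesql @sql;

        -- ... the block of work runs against its private table ...

        SET @sql = N'DROP TABLE dbo.' + QUOTENAME(@TableName) + N';';
        EXEC sys.sp_executesql @sql;  -- drop the table instead of deleting rows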

  • You stated you're using stored procs.  If so, try using a bigint SEQUENCE rather than a guid.

    Also, it sounds like you insert multiple rows with the same GUID/SEQUENCE value. If so, maybe test adding another column to make the clustered key unique. When SQL Server has to uniquify rows itself, it tends to go through a lot of gyrations around that hidden uniquifier.
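
    Something along these lines (a sketch; all names are invented, reworking the earlier hypothetical work table):

        -- Hypothetical: an ever-increasing batch id instead of a random GUID.
        CREATE SEQUENCE dbo.WorkBatchSeq AS bigint START WITH 1 INCREMENT BY 1 CACHE 100;

        CREATE TABLE dbo.WorkTable2
        (
            BatchId bigint NOT NULL,               -- from NEXT VALUE FOR dbo.WorkBatchSeq
            RowNum  int    NOT NULL IDENTITY(1,1), -- makes the clustered key unique
            Payload nvarchar(200) NULL
        );

        -- A unique clustered key avoids the hidden uniquifier, and new batches
        -- always land at the logical end of the index instead of on random pages.
        CREATE UNIQUE CLUSTERED INDEX CIX_WorkTable2 ON dbo.WorkTable2 (BatchId, RowNum);

        -- In the calling proc:
        DECLARE @BatchId bigint = NEXT VALUE FOR dbo.WorkBatchSeq;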

    SQL DBA, SQL Server MVP ('07, '08, '09)
    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." -- "Norm", on "Cheers".
    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."
