UPDATE / DELETE blocking

  • Hi Experts,

    I have a situation where deleting old records blocks updates to the latest records on a highly transactional table, and the application is getting timeout errors.

    In details,

    I have a table called Tran_table1 in an OLTP database. Tran_table1 is a highly transactional table; it receives inserts/updates continuously.

    While archiving two-year-old records from Tran_table1 into Tran_table1_archive in batches (using DELETE with the OUTPUT INTO clause),

    any UPDATEs against Tran_table1 get blocked, resulting in timeout errors in the application.
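
    The batch I am running looks roughly like this (table names are real; the column names tran_id, tran_date, and payload are placeholders for illustration):

    ```sql
    -- Archive one batch of 2-year-old rows in a single statement.
    DELETE TOP (1000) t
    OUTPUT deleted.tran_id, deleted.tran_date, deleted.payload
        INTO dbo.Tran_table1_archive (tran_id, tran_date, payload)
    FROM dbo.Tran_table1 AS t
    WHERE t.tran_date < DATEADD(YEAR, -2, GETDATE());
    ```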

    Is there a SQL Server hint to avoid the blocking?

    Please also suggest any alternative approach.

    Thanks!

  • Update and Delete both require exclusive locks, so this is expected. To avoid it, you can BCP the data out and archive/delete during off-peak hours.

    HTH

    ---------------------------------------------------
    "There are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • free_mascot (3/17/2014)


    Update and Delete both require exclusive locks, so this is expected. To avoid it, you can BCP the data out and archive/delete during off-peak hours.

    HTH

    There are no off-peak hours on that database ... around 10+ updates run continuously every second,

    and I need to run the archive/delete once daily.

    I can use BCP, but that still requires a DELETE on the main table once the data is bcp'd out.

  • Deleting it in small chunks might help?

    ---------------------------------------------------
    "There are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • free_mascot (3/18/2014)


    Deleting it in small chunks might help?

    Tried with TOP(100), but still no luck.

  • Jampandu (3/17/2014)


    free_mascot (3/17/2014)


    Update and Delete both require exclusive locks, so this is expected. To avoid it, you can BCP the data out and archive/delete during off-peak hours.

    HTH

    There are no off-peak hours on that database ... around 10+ updates run continuously every second,

    and I need to run the archive/delete once daily.

    I can use BCP, but that still requires a DELETE on the main table once the data is bcp'd out.

    What free_mascot is asking is whether there is any time slot in your production schedule when you can perform this task.

  • Hi,

    Something to think about as a wider resolution to your issue is setting up SNAPSHOT ISOLATION, which allows for optimistic locking. Kendra Little wrote a great post on it here:

    http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/

    When implemented, it should resolve your issue.
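
    Enabling it is a database-level setting. A sketch, with the database name as a placeholder (note that ROLLBACK IMMEDIATE kills in-flight transactions, so schedule this carefully):

    ```sql
    -- Optimistic-concurrency options at the database level; pick the one
    -- that fits your workload.
    ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
    ```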

    Ta

    David

    ============================================================
    David

    http://www.sqlservercentral.com/articles/Best+Practices/61537/

  • Snapshot isolation is great when you have a mix of exclusive locks and shared locks, because it isolates them from each other. But here it's exclusive locks competing with exclusive locks. Snapshot isolation may help a little, but if a delete and an insert/update need to get to the same page, you'll still see contention.

    First, tune the queries. For example, you said you put a TOP(100) on the query, but with what ORDER BY clause? Contention is all about which locks are held and how long they're held. You can reduce the time those locks are held by ensuring the queries perform optimally. Next, look at the clustered index. Ensure that the clustered index distributes the data so that you're not attempting to insert into the same spots you're attempting to delete from. In theory at least, even with just a clustered index on an identity column, the inserts should occur at the end and the deletes at the beginning.
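
    For example, a deterministic batch delete that works from the old end of the clustered index might look like this (id is an assumed identity/clustered key column, tran_date an assumed date column; only the table names come from the thread):

    ```sql
    -- Give TOP a deterministic ORDER BY over the clustered key so each
    -- batch deletes a contiguous range at the old end of the index.
    ;WITH victims AS (
        SELECT TOP (100) id
        FROM dbo.Tran_table1
        WHERE tran_date < DATEADD(YEAR, -2, GETDATE())
        ORDER BY id                 -- oldest rows first
    )
    DELETE t
    OUTPUT deleted.* INTO dbo.Tran_table1_archive
    FROM dbo.Tran_table1 AS t
    JOIN victims AS v
        ON v.id = t.id;
    ```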

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (3/18/2014)


    Snapshot isolation is great when you have a mix of exclusive locks and shared locks, because it isolates them from each other. But here it's exclusive locks competing with exclusive locks. Snapshot isolation may help a little, but if a delete and an insert/update need to get to the same page, you'll still see contention.

    First, tune the queries. For example, you said you put a TOP(100) on the query, but with what ORDER BY clause? Contention is all about which locks are held and how long they're held. You can reduce the time those locks are held by ensuring the queries perform optimally. Next, look at the clustered index. Ensure that the clustered index distributes the data so that you're not attempting to insert into the same spots you're attempting to delete from. In theory at least, even with just a clustered index on an identity column, the inserts should occur at the end and the deletes at the beginning.

    It seems snapshot isolation has to be enabled at the database level, not at session scope (unlike other isolation levels).

    Going with snapshot isolation would require a lot of changes in other procedures, as I have used the READPAST table hint.

    The queries I am using are fully optimized. There is no ORDER BY clause; I am deleting TOP(100) records based on an is_archive bit column.

  • If you don't care about the performance of the archiving routine, then do it one row at a time.

    Get a list of the rows to be archived into a temp table, then cursor down the temp table, deleting one row at a time. If that still causes a bit of blocking, then you can put a WAITFOR DELAY in the cursor between each row.

    Getting the list into a temp table might still be a blocking point, so consider using NOLOCK here. Before anybody says anything, I know there are potential problems with ghost rows etc., but if you don't care about 100% accuracy in any given archive run, it will cause minimum blocking, and any missed rows will probably be swept up in the next archive run.

    I know using a cursor to do the deletes isn't efficient, but I've used this technique before where I want the archive to have minimum effect on the transaction table, and I don't care how long the archive routine takes.
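
    A minimal sketch of that pattern (id and is_archive are assumed column names; accept NOLOCK's caveats as noted above):

    ```sql
    -- Snapshot the candidate keys with minimal blocking, then delete
    -- one row at a time with a pause between rows.
    SELECT id
    INTO #to_archive
    FROM dbo.Tran_table1 WITH (NOLOCK)
    WHERE is_archive = 1;

    DECLARE @id int;
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR SELECT id FROM #to_archive;
    OPEN c;
    FETCH NEXT FROM c INTO @id;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DELETE dbo.Tran_table1
        OUTPUT deleted.* INTO dbo.Tran_table1_archive
        WHERE id = @id;

        WAITFOR DELAY '00:00:00.100';   -- breathing room for other sessions
        FETCH NEXT FROM c INTO @id;
    END
    CLOSE c;
    DEALLOCATE c;
    DROP TABLE #to_archive;
    ```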

  • Jampandu (3/18/2014)


    It seems snapshot isolation has to be enabled at the database level, not at session scope (unlike other isolation levels).

    Going with snapshot isolation would require a lot of changes in other procedures, as I have used the READPAST table hint.

    The queries I am using are fully optimized. There is no ORDER BY clause; I am deleting TOP(100) records based on an is_archive bit column.

    READ_COMMITTED_SNAPSHOT is absolutely at the database level. There is a snapshot isolation level that is at the session level, but it requires changes to code. I seldom recommend that.

    As to your statement that your queries are fully optimized, you have a TOP without an ORDER BY. I guarantee, that's not an optimized query. Further, TOP doesn't limit the scans to only the TOP 100 rows, so if the query is scanning a table or an index, it's scanning the entire thing and then only returning to you the TOP 100 values. Again, not optimized.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Updates and deletes do not necessarily acquire an exclusive lock from the start. Sometimes it is shared locks (or update locks) competing with exclusive locks that cause the problem, not necessarily exclusive/exclusive. Have a look at sys.dm_tran_locks to see which locks are competing.
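
    For example, something along these lines shows which sessions are waiting, and on what:

    ```sql
    -- Sessions currently blocked, with the lock mode and resource they want.
    SELECT request_session_id,
           resource_type,
           request_mode,
           request_status
    FROM sys.dm_tran_locks
    WHERE request_status = 'WAIT';
    ```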

    A simple index might solve the problem. SQL Server can use the index (e.g. on a date column) to locate the records without scanning the clustered index. As mentioned, READ COMMITTED SNAPSHOT might also do the trick when it is a shared/exclusive lock issue.

    Also consider implementing a partitioning strategy on date. I assume updates will not touch the older records that need to be deleted? This effectively puts the older records in their own 'table', and the locks won't compete.

    Look at this blog if you also want to implement an archiving solution by "switching out" older records: http://blogs.msdn.com/b/felixmar/archive/2011/08.aspx
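
    The switch itself is a metadata-only operation, roughly like this (the partition number is hypothetical, and it assumes Tran_table1 is partitioned by date with an archive table of identical layout on the same filegroup):

    ```sql
    -- Move the oldest partition wholesale into the archive table;
    -- both tables must share the same schema, indexes, and filegroup.
    ALTER TABLE dbo.Tran_table1
        SWITCH PARTITION 1 TO dbo.Tran_table1_archive PARTITION 1;
    ```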
