Commit transaction inside loops

  • Hello, our users are complaining about blocking during a delete loop, so I'm thinking that adding BEGIN TRAN; and COMMIT TRAN; inside the loop would allow other queries to squeeze through between delete batches. Is this how people usually reduce the impact of deleting data, and is the query below pretty standard? Also, what happens if there is a communication problem and the connection drops? Would it leave the table locked up?

    SET NOCOUNT ON;

    DECLARE @r INT;
    SET @r = 1;

    WHILE @r > 0
    BEGIN
        BEGIN TRAN;

        DELETE TOP (1000) abc
        FROM [database1].[dbo].[table1] abc
        JOIN [database2].[dbo].[STG_table] stg
            ON abc.SEQUENCE_NUMBER = stg.SEQUENCE_NUMBER;

        SET @r = @@ROWCOUNT;

        COMMIT TRAN;
    END;
  • PHXHoward wrote:

    Hello, our users are complaining about blocking during a delete loop, so I'm thinking that adding BEGIN TRAN; and COMMIT TRAN; inside the loop would allow other queries to squeeze through between delete batches. Is this how people usually reduce the impact of deleting data, and is the query below pretty standard? Also, what happens if there is a communication problem and the connection drops? Would it leave the table locked up?


    Have you looked at the execution plan to see what's going on?  Also, I don't believe that wrapping this in an explicit transaction will help anything, because a single DELETE is already executed in its own implicit transaction.

    The other question to ask is how many rows you are trying to delete compared to how many are in the table.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Would you be able to use TRUNCATE TABLE instead to remove all the rows?  That will be vastly faster.

    If the table has an identity column, and you want to maintain the high value, you will have to save it and reset it yourself after the TRUNCATE; the TRUNCATE will reset the identity value to the original seed value.
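
    A minimal sketch of that save-and-reseed step, assuming a hypothetical dbo.table1 with an identity column:

        DECLARE @maxid BIGINT = IDENT_CURRENT('dbo.table1');  -- save the current identity high-water mark

        TRUNCATE TABLE dbo.table1;  -- this resets the identity back to its original seed

        DBCC CHECKIDENT ('dbo.table1', RESEED, @maxid);  -- restore the saved value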

     

    SQL DBA, SQL Server MVP (07, 08, 09)
    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." ("Norm", on "Cheers")
    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."

  • I can't use TRUNCATE because not all rows are deleted.  The delete matches rows against a stage table that holds the rows that qualify to be deleted.

    The question I have in looking at the example is: will BEGIN TRAN; and COMMIT TRAN; inside the loop allow other queries that are blocked to get through between iterations?

  • Hi, yes, that is my concern: BEGIN TRAN; and COMMIT TRAN; might not be improving things, and could leave the query holding a lock should something go wrong with the SSIS package in which the step is executed.

  • Since you are not deleting anything out of database2, you could get a performance boost by pulling that table into a table variable or temp table and removing the cross-database lookups. I have found that in some cases cross-database queries can be slow.
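
    A minimal sketch of that staging idea, reusing the table names from the original post (assumption: SEQUENCE_NUMBER identifies the rows to delete):

        -- copy the matching keys into tempdb so the delete stays single-database
        SELECT stg.SEQUENCE_NUMBER
        INTO #stg_keys
        FROM [database2].[dbo].[STG_table] stg;

        -- index the keys so the join in the delete loop can seek
        CREATE CLUSTERED INDEX IX_stg_keys ON #stg_keys (SEQUENCE_NUMBER);

        DECLARE @r INT = 1;

        WHILE @r > 0
        BEGIN
            DELETE TOP (1000) abc
            FROM [database1].[dbo].[table1] abc
            JOIN #stg_keys k ON abc.SEQUENCE_NUMBER = k.SEQUENCE_NUMBER;

            SET @r = @@ROWCOUNT;
        END;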

    Now, with that being said, you said that there is blocking. My GUESS is that it is on table1, but if it is on STG_table, then pulling it into a temp table should also fix the blocking, since a SELECT takes only a shared lock; as long as nobody is trying to update STG_table, there should be no blocking there.

    If the blocking is on table1, then the explicit transaction shouldn't be required and likely won't help anything. What I have seen done before is to put a wait at the end of the WHILE loop to give other transactions a chance to run. If, for example, deleting 1000 rows completes in 10 seconds (I imagine each iteration is much faster than that), then a 5-second wait at the end of the loop means a delete now takes 15 seconds per iteration, BUT those 5 seconds give other queries a chance to run against the data.

    Now, what I think Jeff was alluding to is that if the table has, say, 1 million rows and you are keeping only 100 of them, it MAY be faster to copy those 100 rows out, truncate the table, and migrate the data back. Same idea that Scott was hinting at; a sketch follows.
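
    A hedged sketch of that pattern, assuming the rows to keep are the ones with no match in the stage table (caveats: if table1 has an identity column you would also need SET IDENTITY_INSERT and an explicit column list, and TRUNCATE requires that no foreign keys reference the table):

        -- stash the survivors in tempdb
        SELECT abc.*
        INTO #keep
        FROM [database1].[dbo].[table1] abc
        WHERE NOT EXISTS (SELECT 1
                          FROM [database2].[dbo].[STG_table] stg
                          WHERE stg.SEQUENCE_NUMBER = abc.SEQUENCE_NUMBER);

        TRUNCATE TABLE [database1].[dbo].[table1];

        -- reload the kept rows
        INSERT INTO [database1].[dbo].[table1]
        SELECT * FROM #keep;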

    Now, as for your question about leaving the table locked up: I cannot remember offhand what happens, but I am pretty sure SQL Server will roll your transaction back if the connection is dropped in the middle of it. It is easy to test, though:

    1. Restore the data onto a test system, as you don't want to "test" on production.
    2. Run your query, then disconnect your network connection.
    3. Wait for SSMS to give you an error, then reconnect your network.
    4. Reconnect to the database and run DBCC OPENTRAN to see if there are any open transactions, or try to select from the table and see if the select is blocked.

    If there are open transactions OR your select is blocked, then yep - your transaction is stuck in an open state and will need to be force-rolled back. Otherwise, SQL terminated the connection and rolled back for you. My opinion - I would test this yourself rather than following advice online, as you MAY have a database setting or trace flag turned on that changes the default behavior, so the advice you get on the forum may not be true in your specific use case.
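
    The final check might look like this (database name taken from the original post):

        -- run after reconnecting; reports the oldest active transaction, if any
        DBCC OPENTRAN ('database1');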

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • To add to the question about losing a connection that may be in the middle of a transaction:

    We have had site-down maintenance where the network is simply brought down. The connections on SQL Server were rolled back; I used DBCC OPENTRAN to check and found none open.


  • You would want an explicit transaction if you were doing an operation on two tables where both changes must succeed together to maintain integrity, for example deleting from two tables that have foreign keys to the same parent table (see the sketch at the end of this post):

    1. Delete from table abc as you are doing
    2. Delete also from table Xyz

    After you commit in each loop, the wait at the end of each iteration helps other operations that need the same resources you used, and gives the log a chance to record the deletes you just did.
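
    A minimal sketch of that two-table case (table and column names are hypothetical):

        DECLARE @parent_id INT = 42;  -- hypothetical parent key

        SET XACT_ABORT ON;  -- ensure any error rolls back the whole transaction

        BEGIN TRAN;
            -- both deletes commit together or roll back together
            DELETE FROM dbo.abc WHERE PARENT_ID = @parent_id;
            DELETE FROM dbo.Xyz WHERE PARENT_ID = @parent_id;
        COMMIT TRAN;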


  • MMartin1 wrote:

    To add to the question about losing a connection that may be in the middle of a transaction:

    We have had site-down maintenance where the network is simply brought down. The connections on SQL Server were rolled back; I used DBCC OPENTRAN to check and found none open.

    My opinion - this feels like it should be a new post on the forum instead of hijacking this thread.

     

    EDIT - disregard my comment. I thought you were asking a question, but you were just adding two posts related to the original thread.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • PHXHoward wrote:

    Hi, yes, that is my concern: BEGIN TRAN; and COMMIT TRAN; might not be improving things, and could leave the query holding a lock should something go wrong with the SSIS package in which the step is executed.

    The explicit transaction is not going to help anything here - the DELETE statement by itself is already in a transaction and will roll back if the delete fails.

    The issue you have here could be a couple of things:

    1. The delete is taking too long for each iteration - blocking access to the table while performing the delete.
    2. The loop and delete are processing too fast to allow other processes time to perform any work.
    3. You are in an availability group (AG) with a synchronous secondary, and other transactions cannot commit until they have committed on the secondary.

    One method that can help is to add a delay in the loop, using WAITFOR DELAY after the delete - even as little as a 0.25-second wait could alleviate the blocking. But that all depends on why there is blocking in the first place: if the DELETE itself is taking a long time, then the only thing you can do is improve the performance of the delete. A sketch follows.
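
    A sketch of that pacing pattern, applied to the loop from the original post (the delay value is illustrative):

        DECLARE @r INT = 1;

        WHILE @r > 0
        BEGIN
            DELETE TOP (1000) abc
            FROM [database1].[dbo].[table1] abc
            JOIN [database2].[dbo].[STG_table] stg
                ON abc.SEQUENCE_NUMBER = stg.SEQUENCE_NUMBER;

            SET @r = @@ROWCOUNT;

            WAITFOR DELAY '00:00:00.250';  -- brief pause so blocked queries can acquire locks
        END;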

    How you address the issue really depends on what is causing the problem.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams wrote:

    Using WAITFOR DELAY after the delete - even as little as a .25 second wait could alleviate the blocking. 

    Yes, well stated.

    PHXHoward: Do you happen to have an update on whether this situation was resolved?


  • Gosh, I just don't remember at this point what we ended up doing. I want to say we either recommended that they rewrite it to use a single database instead of a cross-database join, or an index was added. Maybe they just got through it and the need for the archival went away. We receive complaints, but very rarely do we get looped in when things go right.

    I read all of the advice here and learned quite a bit!  Much appreciated.
