November 27, 2023 at 5:11 pm
Hello, our users are complaining about blocking during a delete loop, so I'm thinking that adding BEGIN TRAN; and COMMIT TRAN; inside the loop would allow other queries to squeeze through between delete batches. Is this how people typically reduce the impact of deleting data, and is the query below pretty standard? What happens if there is a communication problem and the connection drops? Would it leave the table locked up?
SET NOCOUNT ON;

DECLARE @r INT;
SET @r = 1;

WHILE @r > 0
BEGIN
    BEGIN TRAN;

    DELETE TOP (1000) abc
    FROM [database1].[dbo].[table1] abc
    JOIN [database2].[dbo].[STG_table] stg
        ON abc.SEQUENCE_NUMBER = stg.SEQUENCE_NUMBER;

    SET @r = @@ROWCOUNT;

    COMMIT TRAN;
END;
November 27, 2023 at 6:55 pm
Have you looked at the execution plan to see what's going on? Also, I don't believe that doing this in an explicit transaction will help anything, because a single DELETE is inherently run in its own system transaction.
The other question to ask is: how many rows are you trying to delete compared to how many are in the table?
--Jeff Moden
Change is inevitable... Change for the better is not.
November 27, 2023 at 8:04 pm
Would you be able to use TRUNCATE TABLE instead to remove all the rows? That will be vastly faster.
If the table has an identity column, and you want to maintain the high value, you will have to save it and reset it yourself after the TRUNCATE; the TRUNCATE will reset the identity value to the original seed value.
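A minimal sketch of saving and restoring that value, assuming a hypothetical identity column on dbo.table1:

-- Preserve the identity high-water mark across a TRUNCATE.
DECLARE @maxid BIGINT;

SELECT @maxid = IDENT_CURRENT('dbo.table1');      -- save the current identity value

TRUNCATE TABLE dbo.table1;                        -- this resets identity to the seed

DBCC CHECKIDENT ('dbo.table1', RESEED, @maxid);   -- restore the saved value
-- Note: after a TRUNCATE, the first insert uses the reseed value itself,
-- so reseed to @maxid + 1 if the old maximum must not be reused.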
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
November 27, 2023 at 8:10 pm
I can't use TRUNCATE because not all rows are deleted. The delete matches rows against a stage table to find the ones that qualify to be deleted.
The question I have in looking at the example is: will BEGIN TRAN; and COMMIT TRAN; inside the loop allow other queries that are blocked to get through between each iteration?
November 27, 2023 at 8:15 pm
Hi, yes, that is my concern: BEGIN TRAN; and COMMIT TRAN; might not be improving things and could leave the query holding a lock should something go wrong with the SSIS package in which the step is executed.
November 27, 2023 at 8:33 pm
Since you are not deleting anything out of database2, you could get a performance boost by pulling that table into a table variable or temp table and removing the cross-database lookups. I have found that in some cases cross-database queries can be slow.
Now, with that being said, you said that there is blocking. My GUESS is that it is on table1, but if it is on STG_table, then pulling that into a temp table should also fix the blocking issue: a SELECT takes only a shared lock, so as long as nobody is trying to update STG_table, there should be no blocking there.
If the blocking is on table1, then the explicit transaction shouldn't be required and likely won't help anything. What I have seen done before is to put a wait at the end of the WHILE loop to give other transactions a chance to run. If the delete of 1,000 rows completes in 10 seconds (for example; I imagine each iteration is much faster than that), then a 5-second wait at the end of the loop means a delete now takes 15 seconds per iteration, BUT it gives other queries 5 seconds to run against the data. See the sketch below.
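Putting the temp table and the wait together, a rough sketch of the pattern; the #stg_keys name, the clustered index, and the 5-second delay are just placeholder choices:

SET NOCOUNT ON;

-- Copy just the join key out of database2 once, so each delete batch
-- stays inside database1 and never touches STG_table again.
SELECT stg.SEQUENCE_NUMBER
INTO   #stg_keys
FROM   [database2].[dbo].[STG_table] stg;

CREATE CLUSTERED INDEX IX_stg_keys ON #stg_keys (SEQUENCE_NUMBER);

DECLARE @r INT = 1;

WHILE @r > 0
BEGIN
    DELETE TOP (1000) abc
    FROM [database1].[dbo].[table1] abc
    JOIN #stg_keys k ON abc.SEQUENCE_NUMBER = k.SEQUENCE_NUMBER;

    SET @r = @@ROWCOUNT;

    WAITFOR DELAY '00:00:05';   -- give blocked queries a window to run
END;

DROP TABLE #stg_keys;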
Now, what I think Jeff was alluding to is that if the table has 1 million rows and you are keeping only 100 of them while deleting everything else, it MAY be faster to copy those 100 rows out, truncate the table, and migrate the data back. Same idea that Scott was hinting at; something like the sketch below.
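A minimal sketch of that keep-and-truncate pattern, assuming a hypothetical #keepers temp table; note that if table1 has an identity column, the reload needs SET IDENTITY_INSERT ON and an explicit column list:

-- When almost every row qualifies for deletion, keep the survivors
-- aside, truncate, and reload them.
SELECT abc.*
INTO   #keepers
FROM   [database1].[dbo].[table1] abc
WHERE  NOT EXISTS (SELECT 1
                   FROM [database2].[dbo].[STG_table] stg
                   WHERE stg.SEQUENCE_NUMBER = abc.SEQUENCE_NUMBER);

TRUNCATE TABLE [database1].[dbo].[table1];

INSERT INTO [database1].[dbo].[table1]
SELECT * FROM #keepers;

DROP TABLE #keepers;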
Now, as for your question about leaving the table locked up: I cannot remember offhand what happens, but I am pretty sure SQL Server will roll your transaction back if the connection is dropped in the middle of it. It is easy to test, though. Restore the data onto a test system (you don't want to "test" on production), run your query, disconnect your network connection, and wait for SSMS to give you an error. Then reconnect your network, reconnect to the database, and run DBCC OPENTRAN to see if there are any open transactions, or try to select from the table and see if it is blocked. If there is an open transaction OR your select is blocked, then yep, your transaction is stuck in an open state and will need to be force-rolled back; otherwise, SQL Server terminated the connection for you. My opinion: test that yourself rather than following advice online, as you MAY have some database setting or trace flag turned on that changes the default behavior, in which case the advice you get on the forum is not true in your specific use case.
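The check from a second session would look something like this; the 5-second lock timeout is an arbitrary choice:

-- Reports the oldest active transaction in the database, if any.
DBCC OPENTRAN ('database1');

-- Probe the table without hanging forever on a leftover lock.
SET LOCK_TIMEOUT 5000;   -- give up after 5 seconds instead of blocking
SELECT TOP (1) * FROM [database1].[dbo].[table1];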
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there is negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
April 26, 2024 at 10:05 pm
To add to the question about losing a connection that may be in a transaction:
We have had site-down maintenance where the network is simply brought down. The connections on SQL Server are rolled back; I have used DBCC OPENTRAN to look for open transactions and found none.
----------------------------------------------------
April 26, 2024 at 10:11 pm
You would want an explicit transaction if you were doing an operation on two tables in a way that both need to maintain integrity, for example deleting from two tables that have foreign keys to the same parent table.
After you commit in each loop, the wait at the end of each iteration helps other operations that need the same resources you used, and gives the log a chance to record the deletes you just did.
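As a hypothetical illustration of that first case, with made-up table names and key:

SET XACT_ABORT ON;   -- make sure any error rolls back the whole unit

DECLARE @OrderID INT = 42;   -- hypothetical key

BEGIN TRAN;

    -- Both child tables reference the same parent; delete from them as
    -- one atomic unit so neither is left orphaned if the other fails.
    DELETE FROM dbo.OrderLines    WHERE OrderID = @OrderID;
    DELETE FROM dbo.OrderPayments WHERE OrderID = @OrderID;

COMMIT TRAN;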
----------------------------------------------------
April 26, 2024 at 10:13 pm
To add to the question about losing a connection that may be in a transaction:
We have had site-down maintenance where the network is simply brought down. The connections on SQL Server are rolled back; I have used DBCC OPENTRAN to look for open transactions and found none.
My opinion - this feels like it should be a new post on the forum instead of hijacking this thread.
EDIT - disregard my comment. I thought you were asking a question, but you were just doing 2 posts related to the original thread.
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there is negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
April 27, 2024 at 4:53 pm
Hi, yes, that is my concern: BEGIN TRAN; and COMMIT TRAN; might not be improving things and could leave the query holding a lock should something go wrong with the SSIS package in which the step is executed.
The explicit transaction is not going to help anything here; the DELETE statement by itself is already in a transaction and will roll back if it fails.
The issue you have here could be one of a couple of things.
One method that can help is to add a delay in the loop. Using WAITFOR DELAY after the delete, even as little as a 0.25-second wait, could alleviate the blocking. But that all depends on why there is blocking in the first place; if it is because the DELETE itself is taking a long time, then the only thing you can do is improve the performance of the delete.
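For reference, sub-second waits use the hh:mm:ss.mmm time format:

WAITFOR DELAY '00:00:00.250';   -- pause for 250 ms between delete batches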
How you address the issue really depends on what is causing the problem.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 6, 2024 at 2:22 am
Using WAITFOR DELAY after the delete, even as little as a 0.25-second wait, could alleviate the blocking.
Yes , well stated.
PHXHoward: Do you happen to have an update on whether this situation was resolved?
----------------------------------------------------
May 6, 2024 at 7:06 pm
Gosh, I just don't remember at this point what we ended up doing. I want to say we either recommended that they rewrite it to use a single database instead of cross-database queries, or that an index was added. Maybe they just got through it and the need for the archival went away. We receive complaints, but very rarely do we get looped in when things go right.
Read all of the advice here and learned quite a bit! Much appreciated.