October 19, 2017 at 11:54 am
Hi all,
I have code in a stored procedure that is like this:
IF EXISTS(SELECT * FROM table_a WHERE ID=@ID AND StatusID = 1) -- ID is a unique column in the table
BEGIN
BEGIN TRANSACTION
INSERT INTO table_b SELECT * FROM table_a WHERE ID = @ID
-- another update
-- another update
DELETE table_a WHERE ID=@ID AND StatusID = 1 -- same exact clause with one above on EXISTS ....
SET @DeletedRow = @@ROWCOUNT
COMMIT TRANSACTION
END
Basically, the code checks whether the StatusID in the data is still 1, inserts a copy of the data into table B, and then deletes it. The way it is deleted, the status is checked again.
All was running OK, until I found one case where:
- there is an entry in table B,
- the data in table A did not get deleted,
- the timestamp of the process is very close to another process where StatusID can change from 1 to something else. The other process is a different app using C#/LINQ.
Now, my question is: is it possible that StatusID was changed DURING the execution of the stored procedure? I assumed that could never happen, since I thought the SQL engine would execute one stored procedure from beginning to end before executing another process (query). My case tells me that the data can indeed be changed mid-process. For my case above, my solution will be adding a ROLLBACK if @DeletedRow is 0.
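In other words, the fix would look something like this (just a sketch, assuming the column lists of table_a and table_b match; not tested):

```sql
IF EXISTS (SELECT * FROM table_a WHERE ID = @ID AND StatusID = 1)
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO table_b
    SELECT * FROM table_a WHERE ID = @ID;

    -- another update
    -- another update

    DELETE table_a WHERE ID = @ID AND StatusID = 1;
    SET @DeletedRow = @@ROWCOUNT;

    IF @DeletedRow = 0
        ROLLBACK TRANSACTION;  -- StatusID changed mid-process: undo the copy and the other updates
    ELSE
        COMMIT TRANSACTION;
END
```

Note this only cleans up after the fact; it does not stop the other app from changing StatusID between the initial check and the DELETE.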
October 19, 2017 at 12:09 pm
Going out on a limb here, but based on your pseudo-code, yes. It is possible that no lock is held on table_a between the data being read and inserted into table_b, so during the processing another process can come along and update the data. Quite possible. I don't have time to test it at the moment, which is why I said I'm going out on a limb.
There is a way around this, something like this:
IF EXISTS(SELECT * FROM table_a WHERE ID=@ID AND StatusID = 1) -- ID is a unique column in the table
BEGIN
BEGIN TRANSACTION
DELETE FROM table_a
OUTPUT [Deleted].* -- actually list each column
INTO table_b( <columnlist> )
WHERE ID=@ID AND StatusID = 1;
SET @DeletedRow = @@ROWCOUNT
--INSERT INTO table_b SELECT FROM table_a where ID = @ID
-- another update
-- another update
--DELETE table_a WHERE ID=@ID AND StatusID = 1 -- same exact clause with one above on EXISTS ....
COMMIT TRANSACTION
END
This may need some tweaking but should prevent the data from being updated while processing.
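Another option (a sketch, untested, using the same names as above) is to take an update lock at the initial read with the UPDLOCK and HOLDLOCK table hints, so no other session can change StatusID between the check and the delete:

```sql
BEGIN TRANSACTION;

-- UPDLOCK + HOLDLOCK hold an update lock on the matching row until COMMIT,
-- blocking other writers from changing StatusID mid-process
IF EXISTS (SELECT * FROM table_a WITH (UPDLOCK, HOLDLOCK)
           WHERE ID = @ID AND StatusID = 1)
BEGIN
    INSERT INTO table_b
    SELECT * FROM table_a WHERE ID = @ID;

    -- another update
    -- another update

    DELETE table_a WHERE ID = @ID AND StatusID = 1;
    SET @DeletedRow = @@ROWCOUNT;
END

COMMIT TRANSACTION;
```

The trade-off is that other writers now queue behind this transaction for as long as it runs.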
October 19, 2017 at 12:25 pm
Lynn Pettis - Thursday, October 19, 2017 12:09 PM
Going out on a limb here, but based on your pseudo code, yes. It is possible that there is no lock taken on table_a when the data is read and inserted into table_b. [...] This may need some tweaking but should prevent the data from being updated while processing.
Thanks for your reply. The reason I did not delete first is that there is a constraint on table_a, and I have to delete the data only after all the updates/deletes in the process have been done.
Thanks for clarifying that during the process the data can be updated by a different query from a different app. I am not comfortable with locking the table, because it will create a queue where the other process has to wait until this stored procedure completes. The SP can run for more than 10 minutes, and during those 10 minutes I do not want the other app waiting. Instead, I want the system to just return an answer to the user, saying "do not update this data" or "the data is not available for update for the next x minutes".
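One way to get that "fail fast instead of waiting" behavior is on the other app's side: SET LOCK_TIMEOUT 0 makes a blocked statement raise error 1222 immediately instead of queuing. A sketch (assuming SQL Server 2012+ for THROW, and a hypothetical status-change UPDATE standing in for whatever the C#/LINQ app does):

```sql
-- In the other session: fail immediately instead of queuing
-- behind the long-running procedure's locks
SET LOCK_TIMEOUT 0;  -- error 1222 is raised at once if a lock cannot be granted

BEGIN TRY
    UPDATE table_a SET StatusID = 2 WHERE ID = @ID;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222
        -- translate the lock timeout into a user-facing message
        RAISERROR('This data is not available for update right now; please retry in a few minutes.', 16, 1);
    ELSE
        THROW;
END CATCH;
```

This only works if the long-running procedure actually holds locks on the row for its duration (e.g. inside one transaction); otherwise the UPDATE will simply succeed.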
October 19, 2017 at 1:05 pm
What kind of constraint? Also, with it all wrapped in a transaction it may still work. Worth a test in a development environment; if you don't test it, you never know. The DELETE shouldn't lock the entire table, just the row of data and maybe the page it resides on.
October 19, 2017 at 1:08 pm
You may also want to look at the other processing to determine whether it needs to be rewritten to improve performance. Taking ten minutes for what appears to be a small set of data seems a bit much.
October 19, 2017 at 1:13 pm
EDIT: This is a worse version of Lynn's code. I didn't see it before posting.
How about preventing any updates done to table_a by other connections?
CREATE TABLE table_a( ID int, StatusID int)
CREATE TABLE table_b( ID int, StatusID int)
INSERT INTO table_a VALUES(1,1),(2,1);
DECLARE @DeletedRow int, @ID int = 1;
DECLARE @table TABLE( ID int, StatusID int);
IF EXISTS(SELECT * FROM table_a WHERE ID=@ID AND StatusID = 1) -- ID is a unique column in the table
BEGIN
BEGIN TRANSACTION;
DELETE table_a
OUTPUT deleted.* INTO @table
WHERE ID=@ID AND StatusID = 1;
SET @DeletedRow = @@ROWCOUNT;
INSERT INTO table_b
SELECT * FROM @table;
-- another update
-- another update
COMMIT TRANSACTION;
END;
SELECT * FROM table_a;
SELECT * FROM table_b;
SELECT @DeletedRow;
GO
DROP TABLE table_a, table_b;
October 20, 2017 at 5:48 am
You say ID is unique. So after inserting the row into table B and doing the other updates, do you need to check the Status again before deleting? It's only going to be that one row, correct?
Now the question becomes: what do you want to happen if the Status changed while the SP was running? Do a ROLLBACK to remove it from B and undo the other updates? Or just go forward and delete it from A?