January 10, 2008 at 12:54 pm
Hi experts,
I want to do a large delete on table 'A' followed by a large insert of all of table 'B'. What is the safest way to block any action on table 'A' before the delete and release it after the insert of table 'B'?
This is what I came up with:
SET TRANSACTION ISOLATION LEVEL TABLOCKX
GO
BEGIN TRANSACTION
GO
DELETE FROM table_A WHERE somecondition
INSERT INTO table_A SELECT * FROM table_B
GO
COMMIT TRANSACTION
TRUNCATE TABLE table_B
Thank you for your help.
Warm regards,
January 10, 2008 at 1:00 pm
TABLOCKX isn't an isolation level. It's a lock hint.
You should be able to just do the delete then the insert within the transaction. In the default isolation level, exclusive locks are held until the end of the transaction.
Won't stop a query that's running with (NOLOCK), but very little will.
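Something like this, with the invalid SET line dropped, should do it (untested, using your table names):
BEGIN TRANSACTION
DELETE FROM table_A WHERE somecondition
INSERT INTO table_A SELECT * FROM table_B
COMMIT TRANSACTION
TRUNCATE TABLE table_B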
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 10, 2008 at 1:39 pm
Thank you Gail.
So just by putting BEGIN TRANSACTION and COMMIT TRANSACTION around the two operations, no one else will be able to access table_A and get incomplete information. I am okay with their report not appearing immediately; I just want it to have accurate information when they get it ten seconds later.
Warm regards,
January 10, 2008 at 1:54 pm
Should do. It's easy to test. In one query window, run the begin tran and the delete but nothing else. In another window, try to select from the table. The select should block until you commit the tran in the first query window.
You may want to put a WITH (TABLOCK) hint on the delete, so SQL directly takes a full table lock and doesn't mess with lower granularity locks.
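Roughly (an untested sketch):
-- Window 1: take the lock and hold it
BEGIN TRANSACTION
DELETE FROM table_A WITH (TABLOCK) WHERE somecondition
-- don't commit yet

-- Window 2: this select should block until window 1 commits
SELECT * FROM table_A

-- Window 1: release the lock
COMMIT TRANSACTION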
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 15, 2008 at 8:36 am
Hi Gail,
I put the following around the statements I wanted to lock, and at run time I get no complaints from two of the four similar sections in the same procedure. The procedure does the same thing four times, once for each of four databases accessed through an OPENQUERY to a linked server.
The code for the retrieval is basically:
--debug confirmation message logged (starting)
INSERT INTO cl_mast_interim
SELECT * FROM OPENQUERY(linkedserver_01, 'SELECT * FROM cl_mast')
--debug confirmation message logged (retrieve good, next moving)
BEGIN TRANSACTION
DELETE FROM cl_mast WITH (TABLOCKX) WHERE OpCo=01 --changes for other servers
INSERT INTO cl_mast WITH (TABLOCKX)
SELECT * FROM cl_mast_interim
COMMIT TRANSACTION
--debug confirmation message logged (moving good)
TRUNCATE TABLE cl_mast_interim
This is repeated for linkedserver_02, linkedserver_03, and linkedserver_04. The linkedserver_01 full cycle processed okay. The linkedserver_02 cycle retrieved the information and put it in cl_mast_interim, but I got no confirmation that the data was actually moved (I have debug messages being logged on either side of the transaction). Can you tell me why the procedure stops with the error:
------------------------------------------------------------------------
Server: Msg 7391, Level 16, State 1, Procedure z_pes_sync_cl_mast_2008_interim, Line 49
The operation could not be performed because the OLE DB provider 'MSDASQL' was unable to begin a distributed transaction.
[OLE/DB provider returned message: [Pervasive][ODBC Client Interface][LNA][Pervasive][ODBC Engine Interface]Driver not capable.]
OLE DB error trace [OLE/DB Provider 'MSDASQL' ITransactionJoin::JoinTransaction returned 0x8004d00a].
------------------------------------------------------------------------
When I took the transaction out for linkedserver_02 only, then linkedserver_01, linkedserver_02, and linkedserver_03 processed okay, and only linkedserver_04 did not confirm that the data was actually moved. I then took the transaction out for linkedserver_04 as well, and all four linked servers processed; note that linkedserver_01 and linkedserver_03 still had their transactions in place.
I have confirmed this every-other-one behavior.
Can you offer any more advice on what is going on here?
Warm regards,
January 18, 2008 at 12:23 pm
Continuing..., I don't have this working yet.
The code below does not work entirely. As described in my reply to Gail, some sort of promotion to a distributed transaction is being attempted, and it fails. Peculiarly enough, it fails on every other repetition (the SQL below is repeated in the same procedure four times, with only the name of the linked server changing). If I take the BEGIN/COMMIT out of the first and third repetitions, then all four repetitions succeed; note that the second and fourth repetitions still have the BEGIN/COMMIT active. The point of failure is within the transaction: the delete is performed and the insert fails.
Please comment with any suggestions. My need is to be able to run this sync several times throughout the day, and I am trying to guard against reports being run against the cl_mast table during the delete/insert (which could take up to one minute). Any comments that could be helpful are welcome. Thank you.
Warm regards,
------------------------------------------------------------
BEGIN
TRUNCATE TABLE cl_mast_interim
IF (@Debug = 1)
BEGIN
SET @MyErrorMessage = 'Retrieving OpCo 3'
EXEC pwProgressive.dbo.z_pes_ErrorLog_Insert @DBName, @StoredProcedureName, @MyErrorMessage
END
SELECT @StartTime = GETDATE()
INSERT INTO cl_mast_interim
SELECT
null,
3,
cast(Loc_No as integer) as Loc_No,
cast("Name" as char(40)) as Name,
cast(Dba_Name as char(40)) as Dba_Name,
cast(Short_Name as char(15)) as Short_Name
-- ...remaining columns trimmed for this post...
FROM openquery(summit032008, 'SELECT * from cl_mast')
IF (@Debug = 1)
BEGIN
SELECT @StopTime = GETDATE()
SET @MyErrorMessage = 'Retrieve Time OpCo 3: ' + cast((SELECT DATEDIFF(ss, @StartTime, @StopTime)) as varchar(20)) + '(secs). Now Moving OpCo 3'
EXEC pwProgressive.dbo.z_pes_ErrorLog_Insert @DBName, @StoredProcedureName, @MyErrorMessage
END
SELECT @StartTime = GETDATE()
BEGIN TRANSACTION
DELETE FROM cl_mast WITH (TABLOCKX) WHERE OpCo=3
INSERT INTO cl_mast WITH (TABLOCKX)
SELECT * FROM cl_mast_interim
COMMIT TRANSACTION
IF (@Debug = 1)
BEGIN
SELECT @StopTime = GETDATE()
SET @MyErrorMessage = 'Move Time OpCo 3: ' + cast((SELECT DATEDIFF(ss, @StartTime, @StopTime)) as varchar(20)) + '(secs).'
EXEC pwProgressive.dbo.z_pes_ErrorLog_Insert @DBName, @StoredProcedureName, @MyErrorMessage
END
TRUNCATE TABLE cl_mast_interim
END
---------------------------------------------------------
January 19, 2008 at 9:56 am
If you have an explicit transaction and within that transaction you reference a remote server, the transaction will be promoted to a distributed transaction.
From the error, it looks like the Pervasive OLE DB driver doesn't support distributed transactions. Perhaps there's a property of the driver that you can set.
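For example, this pattern forces promotion (a sketch, with made-up names):
BEGIN TRANSACTION
-- the remote reference inside the explicit transaction makes SQL
-- promote it to a distributed transaction via MSDTC, which the
-- OLE DB provider has to support
INSERT INTO local_table
SELECT * FROM OPENQUERY(some_linked_server, 'SELECT * FROM remote_table')
COMMIT TRANSACTION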
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 19, 2008 at 7:40 pm
You have no error detection code and no rollback code... and SET XACT_ABORT has not been turned on to make up for it...
... so why do you need a transaction to begin with?
--Jeff Moden
Change is inevitable... Change for the better is not.
January 21, 2008 at 12:23 am
I've had similar distributed transaction problems with Oracle and FoxPro linked servers. The trick is to get the data to a scratch/temporary table outside of your begin/end transaction. That way the remote server is not involved in the transaction at all.
Note that I haven't read through your code in detail so this may not fully apply to your situation.
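In outline (a rough sketch with made-up names):
-- remote read happens outside any explicit transaction,
-- so the linked server never has to enlist in one
INSERT INTO scratch_table
SELECT * FROM OPENQUERY(linked_server, 'SELECT * FROM remote_table')

-- only local tables are touched inside the transaction
BEGIN TRANSACTION
DELETE FROM live_table WHERE somecondition
INSERT INTO live_table SELECT * FROM scratch_table
COMMIT TRANSACTION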
January 21, 2008 at 6:40 am
Gail,
I checked on Pervasive and could not find any documentation saying the ODBC driver supports distributed transactions. I could not find any kind of configuration setting to change, either.
Any other thoughts on how to make the DELETE and INSERT safe? Note they are both operating on SQL Server tables.
The only purpose of the table with the '_interim' suffix is to temporarily hold the result set coming from the Pervasive linked server (summit032008). Once that INSERT INTO is done and the result set resides in my SQL Server table, I use the DELETE to clean out the old out-of-sync data and the INSERT INTO to move all the just-retrieved data from the '_interim' table to the final destination table.
This must be done every hour during the day if possible. The developers of the Pervasive-based application have no time stamps on their tables so entire tables have to be re-filled on our SQL Server to keep it accurate throughout the day.
Warm regards,
January 21, 2008 at 6:54 am
If you really have to copy the whole table, then you do not need deletes and you don't need the transaction. Insert into a new table... when you have it all ready and you're sure nothing is wrong with the data, drop the original table and rename the new table to take its place.
Total "down time" in SQL Server will be < 65 milliseconds.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 21, 2008 at 7:14 am
Hi Jeff,
I looked up SET XACT_ABORT and I don't believe it would help me achieve my goal.
Being new to SQL, my original post was a question (quoted below). I suggested a transaction because I am new and I don't want people to think that I am not at least trying to think for myself :-). A recent reply by me to Gail includes the code as it now stands, what happens when I execute it (every other transaction fails), and the error referring to a 'distributed transaction'.
I am still looking for an answer to the original question 'What is the safest way...' (see below). All help/advice/comments/directions are welcome, reviewed, and tried.
Thank you for your interest.
Warm regards,
I want to do a large delete on table 'A' followed by a large insert of all of table 'B'. What is the safest way to block any action on table 'A' before the delete and release it after the insert of table 'B'?
This is what I came up with:
January 21, 2008 at 7:27 am
Hi Ian,
Thank you for your interest. There is a previous post by me that contains the real sequence. If you have the time and interest, you will find that the OPENQUERY() to the linked server is filling a permanent but empty table. Instead of a truly temporary table, I chose to make it permanent and keep truncating it between uses. Once that '_interim' table is filled, I then want to safely DELETE data from the live table and INSERT all the '_interim' data. I am not bound to any particular method. I just don't want reporting to be able to be done against inaccurate data in the table.
Thank you for your time.
Warm regards,
January 21, 2008 at 7:58 am
Hi Jeff,
Thank you for your most recent comment. Even as a newbie I will have to trust my own instincts; the method of dropping and renaming was an idea I had, but I am also very new at my company, and others did not take to the idea warmly, so I was pointed in the DELETE/INSERT direction. But I'm with you. I will rebuild it, drop the old (DROP TABLE dbo.cl_mast), and rename the new (EXEC sp_rename 'cl_mast_interim', 'cl_mast'). Is there any error checking that should surround these two operations? Something like the sketch below is what I'm picturing.
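A rough, untested sketch (using @@ERROR, since I'm not sure TRY/CATCH is available on my server version):
DECLARE @err int
BEGIN TRANSACTION
DROP TABLE dbo.cl_mast
SET @err = @@ERROR
IF @err = 0
BEGIN
    EXEC sp_rename 'cl_mast_interim', 'cl_mast'
    SET @err = @@ERROR
END
IF @err = 0
    COMMIT TRANSACTION
ELSE
    ROLLBACK TRANSACTION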
Warm regards,