September 28, 2016 at 6:37 am
I have a procedure that takes an XML parameter and does a MERGE against one table (insert and update). If the application runs it once it works normally, but if the application runs it several times simultaneously, one session finishes correctly and the rest end in deadlocks.
The procedure is called inside a TRANSACTION opened from the source code, using EF 5.0 and SQL Server 2014 SP1 CU1.
The procedure is as follows:
ALTER PROCEDURE [dbo].[procedure]
    @var1 int,
    @var2 xml,
    @var3 int
AS
BEGIN
    SET NOCOUNT ON;

    MERGE TABLA AS destino
    USING (
        -- Shred the XML list into one row per /L/I item
        SELECT
            Campo1 = @var1,
            Campo2 = T.Item.value('@V', 'char(2)')
        FROM @var2.nodes('/L/I') AS T(Item)
    ) AS origen
        ON  origen.Campo1 = destino.Campo1
        AND origen.Campo2 = destino.Campo2
    WHEN MATCHED THEN
        UPDATE SET
            destino.Campo3 = @var3,
            destino.Campo4 = 0
    WHEN NOT MATCHED BY SOURCE
        AND destino.Campo1 = @var1
        AND destino.Campo3 = 0 THEN
        UPDATE SET
            destino.Campo3 = @var3,
            destino.Campo4 = GETDATE()
    WHEN NOT MATCHED THEN
        INSERT (Campo1, Campo2, Campo3, Campo4, Campo5)
        VALUES (origen.Campo1, origen.Campo2, @var3, GETDATE(), 0);
END
I modified the MAXDOP setting of the instance, but I still get the same error.
The database has READ COMMITTED SNAPSHOT enabled.
The max degree of parallelism was at 0 (unlimited); it is now set to 8, since the machine has more than 8 cores.
How can I fix this? Has something similar happened to any of you?
Regards.
September 28, 2016 at 6:50 am
I'd suggest that you try adding a merge hint to your MERGE. Start with HOLDLOCK and test ... moving up to TABLOCK if necessary.
You should also take steps to ensure that the MERGE is performing as optimally as possible.
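Something along these lines, reusing the names from your procedure (an untested sketch, only the first few lines shown; your existing WHEN clauses stay as they are):

MERGE TABLA WITH (HOLDLOCK) AS destino
USING (
    SELECT
        Campo1 = @var1,
        Campo2 = T.Item.value('@V', 'char(2)')
    FROM @var2.nodes('/L/I') AS T(Item)
) AS origen
    ON  origen.Campo1 = destino.Campo1
    AND origen.Campo2 = destino.Campo2
-- ... WHEN MATCHED / WHEN NOT MATCHED clauses unchanged ...
;

HOLDLOCK (equivalent to SERIALIZABLE) makes the MERGE take and hold range locks until the transaction completes, which closes the window where two sessions can both decide to insert the same key.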
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
September 28, 2016 at 6:56 am
MERGE is a widely known deadlock generator: https://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/
-- Gianluca Sartori
September 29, 2016 at 3:35 am
According to the linked page this is a known bug, and I have seen there is a hotfix that can be applied, but the version table only shows builds up to SQL 201, and I have SQL Server 2014 with SP1 CU1.
This procedure previously ran on a SQL Server 2008 R2 SP2 CU1 instance and it never failed, or at least we never saw this error, and the application is the same.
Can you think of any solution that avoids having to modify the procedure?
I have only posted one example, but we have 2,000 procedures with this structure.
September 29, 2016 at 3:43 am
You're not running the latest build for SQL 2014. You're missing SP2 and CU1. Maybe the fix you're referring to is included in one of those.
I would test in a test environment to see if applying SP2 and CU1 help.
-- Gianluca Sartori
September 29, 2016 at 7:11 am
You may want to incorporate deadlock handling into your code, perhaps using something like the following snippet I wrote to handle deadlock processing in one of our procedures:
-- The following snippet is for building deadlock handling.
declare @ErrorNumber    int,
        @ErrorLine      int,
        @ErrorMessage   nvarchar(4000),
        @ErrorProcedure nvarchar(128),
        @ErrorSeverity  int,
        @ErrorState     int;

while 1 = 1
begin
    begin transaction;
    begin try
        -- <-- code goes here -->
        commit;
        break;
    end try
    begin catch
        rollback;
        select
            @ErrorNumber    = ERROR_NUMBER(),
            @ErrorLine      = ERROR_LINE(),
            @ErrorMessage   = ERROR_MESSAGE(),
            @ErrorProcedure = ERROR_PROCEDURE(),
            @ErrorSeverity  = ERROR_SEVERITY(),
            @ErrorState     = ERROR_STATE();
        if @ErrorNumber = 1205 -- Trap deadlock
        begin
            WAITFOR DELAY '00:00:00.05'; -- Wait for 50 ms, then retry
            continue;
        end
        else
        begin
            -- All other errors: re-raise and stop retrying
            raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
            break;
        end
    end catch
end
September 30, 2016 at 1:47 am
I tested with HOLDLOCK and then moved up to TABLOCK, but the error still occurs.
This happens when multiple concurrent users try to update.
September 30, 2016 at 2:32 am
I set up a test environment at the SP2 CU1 patch level and it gives the same error: a deadlock.
September 30, 2016 at 3:49 am
javirsantos (9/30/2016)
I set up a test environment at the SP2 CU1 patch level and it gives the same error: a deadlock.
There's a strong relationship between transaction duration and deadlock occurrence. Speeding up your transaction will reduce the likelihood of a deadlock.
Check that the permanent table has an index which will support the update, and consider resolving the xml into a #temp table or @table variable as a separate step.
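For example, something like this (a rough sketch only, reusing the column names from the procedure you posted; your real columns, types and keys will differ):

-- Shred the XML once, up front, into an indexed temp table
CREATE TABLE #origen
(
    Campo1 int     NOT NULL,
    Campo2 char(2) NOT NULL,
    PRIMARY KEY (Campo1, Campo2) -- assumes the XML list contains no duplicate items
);

INSERT INTO #origen (Campo1, Campo2)
SELECT
    Campo1 = @var1,
    Campo2 = T.Item.value('@V', 'char(2)')
FROM @var2.nodes('/L/I') AS T(Item);

MERGE TABLA AS destino
USING #origen AS origen
    ON  origen.Campo1 = destino.Campo1
    AND origen.Campo2 = destino.Campo2
-- ... same WHEN MATCHED / WHEN NOT MATCHED clauses as before ...
;

The other half is an index on TABLA keyed on (Campo1, Campo2) so the MERGE can seek rather than scan.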
Posting an actual (as opposed to estimated) execution plan as a .sqlplan attachment will assist folks in tuning the query.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
October 5, 2016 at 3:43 am
I have loaded the XML into both a @table variable and a #temp table, and the deadlock still occurs.
October 5, 2016 at 4:49 am
javirsantos (10/5/2016)
I have loaded the XML into both a @table variable and a #temp table, and the deadlock still occurs.
The next step would be to post an execution plan here, as a .sqlplan attachment.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
October 5, 2016 at 7:40 am
Could you also post the deadlock graph?
October 19, 2016 at 12:40 am
If I add a CTE to the procedure, it no longer fails, but I need to find a solution that avoids having to modify 1,200 procedures.
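Roughly this kind of change (a simplified sketch using the column names from the example procedure above; the real procedures are more involved):

WITH origen AS
(
    SELECT
        Campo1 = @var1,
        Campo2 = T.Item.value('@V', 'char(2)')
    FROM @var2.nodes('/L/I') AS T(Item)
)
MERGE TABLA AS destino
USING origen
    ON  origen.Campo1 = destino.Campo1
    AND origen.Campo2 = destino.Campo2
-- ... same WHEN MATCHED / WHEN NOT MATCHED clauses as before ...
;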
October 19, 2016 at 12:58 am
I've attached the deadlock graph.
October 19, 2016 at 3:15 am