June 4, 2018 at 2:47 pm
Hello SSC,
Ok, I am going to try and explain this. Please bear with me 🙂
I have a stored procedure that INSERTs/UPDATEs a table based on parameters passed in by a portal or UI. If the client is new, we INSERT; if there are any changes, we UPDATE. Simple enough, right? The issue I'm having is that the transactions seem to be causing a lag in the UI: the transaction doesn't commit fast enough for the end user to see the result. I have a MERGE (see below) that works, but I need to optimize it. At first glance, can anyone see where I can squeeze some performance out of this proc?
I also wanted to mention that two of the columns are of the XML data type, which affects performance as well. The data is correct and the XML columns are updating as expected.
Any help is greatly appreciated!
BEGIN
    BEGIN TRANSACTION [TRAN1]
    BEGIN TRY
        -- [dbo].[TargetTable] is a placeholder; the forum stripped the real table name
        MERGE [dbo].[TargetTable] AS T
        USING @tableVar AS S
            ON (T.[Col] = S.[Col] AND T.[Col1] = S.[Col1])
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (
                 [val 1]
                ,[val 2]
                ,[val 3]
            )
            VALUES (
                 S.[val 1]
                ,S.[val 2]
                ,S.[val 3]
            )
        WHEN MATCHED
            AND (   CAST(T.[xml_col1] AS NVARCHAR(MAX)) != CAST(S.[xml_col1] AS NVARCHAR(MAX))
                 OR CAST(T.[xml_col2] AS NVARCHAR(MAX)) != CAST(S.[xml_col2] AS NVARCHAR(MAX))
                 OR T.[Email] != S.[Email])
        THEN
            UPDATE SET
                 T.[val 1] = S.[val 1]
                ,T.[val 2] = S.[val 2]
                ,T.[val 3] = S.[val 3];
        COMMIT TRANSACTION [TRAN1]
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION [TRAN1];
        THROW;  -- re-raise instead of silently swallowing the error (SQL 2012+)
    END CATCH
END
There are no problems, only solutions. --John Lennon
June 4, 2018 at 3:08 pm
Well, how long does the proc actually take to run? One thing to try would be to just take out the comparisons on the matched records; it might be faster to update the record even if it's to the same value than to do those casts and comparisons.
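A minimal sketch of that simplification (using the same placeholder table as above); the MATCHED branch just loses its predicate:

    WHEN MATCHED THEN
        UPDATE SET
             T.[val 1] = S.[val 1]
            ,T.[val 2] = S.[val 2]
            ,T.[val 3] = S.[val 3];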
June 4, 2018 at 3:13 pm
Yeah, that's what they did initially, and it was killing performance. Production has millions of rows.
My task is to optimize this because everything gets UPDATED regardless of whether there are changes. That was the original Jira ticket.
There are no problems, only solutions. --John Lennon
June 4, 2018 at 3:23 pm
Lord Slaagh - Monday, June 4, 2018 3:13 PM
Yeah, that's what they did initially, and it was killing performance. Production has millions of rows. My task is to optimize this because everything gets UPDATED regardless of whether there are changes. That was the original Jira ticket.
It sounds like there's more to this job, then. How many rows are actually being passed in via that table variable? I mean, if you're passing entire data sets of millions of rows every time someone calls this from the GUI, then yeah, it's gonna take a long time.
June 4, 2018 at 3:32 pm
1) How is the target table clustered? By (Col, Col1) or by something else?
2) How is it normally searched? By (Col, Col1) or by something else?
3) Are there any DML triggers on it?
4) How many rows are in @tableVar? If it's more than even 5, use a temp table instead, clustered on (Col, Col1) just in case.
5) MERGE has some known performance issues. Despite what happened in the past, you should strongly consider an UPDATE followed by an INSERT instead (see the sketch below).
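To illustrate points 4 and 5 together, here's a minimal sketch of the UPDATE-then-INSERT pattern staged through a temp table. It is not the original proc: [dbo].[TargetTable] stands in for the table name the forum stripped, and the UNIQUE on the index assumes (Col, Col1) is the key.

    -- Stage the incoming rows in a temp table; unlike a table variable, it gets statistics
    SELECT * INTO #src FROM @tableVar;
    CREATE UNIQUE CLUSTERED INDEX CX_src ON #src ([Col], [Col1]);  -- drop UNIQUE if duplicates are possible

    BEGIN TRANSACTION;

    -- Update only the rows that actually changed
    UPDATE T SET
         T.[val 1] = S.[val 1]
        ,T.[val 2] = S.[val 2]
        ,T.[val 3] = S.[val 3]
    FROM [dbo].[TargetTable] AS T
    INNER JOIN #src AS S
        ON S.[Col] = T.[Col] AND S.[Col1] = T.[Col1]
    WHERE CAST(T.[xml_col1] AS NVARCHAR(MAX)) <> CAST(S.[xml_col1] AS NVARCHAR(MAX))
       OR CAST(T.[xml_col2] AS NVARCHAR(MAX)) <> CAST(S.[xml_col2] AS NVARCHAR(MAX))
       OR T.[Email] <> S.[Email];

    -- Insert the rows that don't exist yet
    INSERT INTO [dbo].[TargetTable] ([val 1], [val 2], [val 3])
    SELECT S.[val 1], S.[val 2], S.[val 3]
    FROM #src AS S
    WHERE NOT EXISTS (
        SELECT 1
        FROM [dbo].[TargetTable] AS T
        WHERE T.[Col] = S.[Col] AND T.[Col1] = S.[Col1]
    );

    COMMIT TRANSACTION;

Wrap it in the same TRY/CATCH as the original if you keep the explicit transaction.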
SQL DBA, SQL Server MVP (07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
June 4, 2018 at 11:25 pm
Comparison on NVARCHAR(MAX) is never going to give you good performance.
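One common workaround (a suggestion, not something from the proc above): store a hash of each XML column when the row is written, and compare the hashes instead of the full strings. A sketch, assuming SQL Server 2016+ (before that, HASHBYTES only accepts inputs up to 8000 bytes) and a hypothetical [xml_col1_hash] column:

    -- One-time schema change (hypothetical column name)
    ALTER TABLE [dbo].[TargetTable] ADD [xml_col1_hash] VARBINARY(32) NULL;  -- SHA2_256 output is 32 bytes

    -- Whenever the XML is written, store its hash alongside it
    -- (@newXml, @col, @col1 are hypothetical parameters)
    UPDATE [dbo].[TargetTable]
    SET [xml_col1]      = @newXml,
        [xml_col1_hash] = HASHBYTES('SHA2_256', CAST(@newXml AS NVARCHAR(MAX)))
    WHERE [Col] = @col AND [Col1] = @col1;

    -- The change test then compares 32 bytes instead of two NVARCHAR(MAX) values:
    -- ... WHERE T.[xml_col1_hash] <> HASHBYTES('SHA2_256', CAST(S.[xml_col1] AS NVARCHAR(MAX)))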
June 5, 2018 at 1:02 pm
If there are millions of rows in a table variable, that's likely a problem. Try a temp table instead.
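If you want to keep the MERGE as-is, the minimal swap might look like this (a sketch; the index key is an assumption based on the MERGE's join columns):

    -- Temp tables get column statistics; a table variable is historically
    -- estimated at one row, which wrecks the plan when it holds millions.
    SELECT * INTO #src FROM @tableVar;
    CREATE CLUSTERED INDEX CX_src ON #src ([Col], [Col1]);
    -- then use "USING #src AS S" in the MERGE instead of "USING @tableVar AS S"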
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability