October 27, 2013 at 9:25 pm
Comments posted to this topic are about the item Performance of the SQL MERGE vs. INSERT/UPDATE
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
October 27, 2013 at 11:07 pm
Much as we love the Merge statement in my workplace, we seem to be running into deadlocking problems that we haven't had when using the UPDATE/INSERT .. LEFT JOIN approach.
We haven't investigated too deeply but are tending back to UPDATE/INSERT on large datasets. Would be glad to hear from anyone else who has experienced this.
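For readers unfamiliar with the pattern being compared, the UPDATE/INSERT .. LEFT JOIN approach mentioned here looks roughly like the sketch below. It reuses the #Source and #Target names from the article's test harness; wrap both statements in a transaction if they must succeed or fail as a unit.
-- Update rows whose key already exists in the target
UPDATE t
SET    t.Value = s.Value
FROM   #Target t
INNER JOIN #Source s
        ON s.ID = t.ID;

-- Insert rows whose key does not yet exist in the target
INSERT INTO #Target (ID, Value)
SELECT s.ID, s.Value
FROM   #Source s
LEFT JOIN #Target t
       ON t.ID = s.ID
WHERE  t.ID IS NULL;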
October 27, 2013 at 11:11 pm
guy.stephens (10/27/2013)
Much as we love the Merge statement in my workplace, we seem to be running into deadlocking problems that we haven't had when using the UPDATE/INSERT .. LEFT JOIN approach. We haven't investigated too deeply but are tending back to UPDATE/INSERT on large datasets. Would be glad to hear from anyone else who has experienced this.
I personally have not experienced this issue; however, with a 28% performance hit it wouldn't surprise me.
I would say that if you are really expecting to INSERT/UPDATE hundreds of thousands of rows, it might be a good idea to stick to the traditional approach.
I guess the key words here are "it depends."
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
October 27, 2013 at 11:31 pm
Thanks for the concise article, Dwain - an interesting read as usual.
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel
October 28, 2013 at 2:09 am
dwain.c (10/27/2013)
guy.stephens (10/27/2013)
Much as we love the Merge statement in my workplace, we seem to be running into deadlocking problems that we haven't had when using the UPDATE/INSERT .. LEFT JOIN approach. We haven't investigated too deeply but are tending back to UPDATE/INSERT on large datasets. Would be glad to hear from anyone else who has experienced this.
I personally have not experienced this issue; however, with a 28% performance hit it wouldn't surprise me.
I would say that if you are really expecting to INSERT/UPDATE hundreds of thousands of rows, it might be a good idea to stick to the traditional approach.
I guess the key words here are "it depends."
+1
As for the article, it's a nice one and good for understanding how the MERGE statement works, especially for deciding whether to go with MERGE or not.
October 28, 2013 at 2:14 am
Nice comparison.
On the other hand, it would be better to see a performance comparison on physical tables instead of, or in addition to, temporary tables.
October 28, 2013 at 3:13 am
Morning all,
I think the way the MERGE source table has been written would definitely underperform compared to the upsert.
e.g.
-- MERGE
MERGE #Target t
USING #Source s
ON s.ID = t.ID
WHEN MATCHED
THEN UPDATE
SET Value = s.Value
WHEN NOT MATCHED
THEN INSERT
( ID, Value )
VALUES
( s.ID
, s.Value
);
This can be written as:
-- MERGE
MERGE #Target t
USING (VALUES (@ID, @Value)) AS s (ID, Value)
ON s.ID = t.ID
WHEN MATCHED
THEN UPDATE
SET Value = s.Value
WHEN NOT MATCHED
THEN INSERT
( ID, Value )
VALUES
( s.ID
, s.Value
);
This would outperform any UPSERT because you are basically creating the source table in memory, which never touches disk, so the comparison is done entirely in memory.
You just need to work out how to get the @ID and @Value variables set; one possible approach is sketched below.
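One possible (purely illustrative) way to supply the variables is to wrap the single-row MERGE in a parameterized stored procedure. The procedure name and the permanent dbo.Target table below are placeholders standing in for the article's #Target, not anything from the original post.
CREATE PROCEDURE dbo.UpsertTarget
    @ID    INT,
    @Value INT
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.Target AS t                          -- permanent stand-in for the article's #Target
    USING (VALUES (@ID, @Value)) AS s (ID, Value)  -- single-row, in-memory source
        ON s.ID = t.ID
    WHEN MATCHED THEN
        UPDATE SET Value = s.Value
    WHEN NOT MATCHED THEN
        INSERT (ID, Value)
        VALUES (s.ID, s.Value);
END;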
Thanks
Andrew
October 28, 2013 at 3:15 am
I think the testing done here tests situations that aren't actually very helpful or useful. If you know you want to do JUST one or the other, then do just what you need to do - right tool for the job and all that.
What you REALLY need to test is the classic UPSERT condition - where you want to update a row by a key if the key already exists and insert the row if the key does NOT exist. There are a number of ways to accomplish this in TSQL and it would be best to test each (although honestly the optimal way changes with the percentage of existing rows that are to be updated). If you do this testing, make sure you build code that is isolated from concurrency issues, which most do not do.
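As a sketch of what "isolated from concurrency issues" can look like in practice, one common pattern takes a key-range lock up front so two sessions cannot both decide to insert the same key. The #Target/ID/Value names follow the article's test harness and the @ID/@Value variables are assumed parameters.
BEGIN TRANSACTION;

-- UPDLOCK + HOLDLOCK keeps the key (or the gap where it would be) locked
-- until commit, so a concurrent session cannot slip in a duplicate insert.
UPDATE #Target WITH (UPDLOCK, HOLDLOCK)
SET    Value = @Value
WHERE  ID = @ID;

IF @@ROWCOUNT = 0
    INSERT INTO #Target (ID, Value)
    VALUES (@ID, @Value);

COMMIT TRANSACTION;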
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
October 28, 2013 at 3:19 am
... also, in the MERGE you could add a condition not to update if the value is the same:
-- MERGE
MERGE #Target t
USING (VALUES (@ID, @Value)) AS s (ID, Value)
ON s.ID = t.ID
WHEN MATCHED AND (s.Value <> t.Value)
THEN UPDATE
SET Value = s.Value
WHEN NOT MATCHED
THEN INSERT
( ID, Value )
VALUES
( s.ID
, s.Value
);
October 28, 2013 at 5:26 am
Interesting article!
Paul White posted some useful information about MERGE performance
http://www.sqlservercentral.com/Forums/FindPost1466528.aspx
____________________________________________________
Deja View - The strange feeling that somewhere, sometime you've optimised this query before
How to get the best help on a forum
http://www.sqlservercentral.com/articles/Best+Practices/61537
October 28, 2013 at 7:23 am
In our environment we have error handling around the Update/Insert SQL syntax since we don't know what the customer values are when they come into our OLTP system. We have a 'bag' of data from a customer and we're trying to insert it. There are no other tables to left join on; we only have the raw customer data/values. We end up having to attempt an update, check @@ROWCOUNT, and if it is 0, attempt an insert. Regardless, it seems the MERGE statement is significantly more expensive. Good to know.
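The pattern described above, with the error handling included, might look something like the sketch below. The dbo.Customer table and the @CustomerID/@Value parameters are placeholders, not the poster's actual schema; THROW requires SQL Server 2012 or later.
BEGIN TRY
    BEGIN TRANSACTION;

    -- Try the update first
    UPDATE dbo.Customer
    SET    Value = @Value
    WHERE  CustomerID = @CustomerID;

    -- No row matched the key, so insert instead
    IF @@ROWCOUNT = 0
        INSERT INTO dbo.Customer (CustomerID, Value)
        VALUES (@CustomerID, @Value);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise the original error to the caller
END CATCH;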
October 28, 2013 at 8:53 am
Great article!
Anyone else getting a "The resource could not be found" error when trying to download the Test Harness files?
October 28, 2013 at 8:56 am
It's a mistake not to compare apples to apples.
MERGE offers a single statement for the Update/Insert. To fully quantify its benefits, you must also include the "failure" case, where the transaction is rolled back due to conflicting foreign keys, a constraint violation, or some other reason.
The points made are valid, but there are holes in the presentation that cannot be ignored if a simple "what is the performance difference?" question is to be answered.
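To make the failure-case comparison concrete: a MERGE is a single atomic statement, so if any row violates a constraint the whole statement rolls back on its own, whereas the two-statement upsert needs an explicit transaction to get the same guarantee. A rough sketch, reusing the article's #Source/#Target names:
BEGIN TRY
    MERGE #Target AS t
    USING #Source AS s
        ON s.ID = t.ID
    WHEN MATCHED THEN
        UPDATE SET Value = s.Value
    WHEN NOT MATCHED THEN
        INSERT (ID, Value)
        VALUES (s.ID, s.Value);
END TRY
BEGIN CATCH
    -- e.g. an FK or CHECK violation: nothing from the MERGE has persisted,
    -- because the statement is atomic; just report the error.
    THROW;
END CATCH;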
-----------------------------
I enjoy queries!
October 28, 2013 at 9:13 am
Usually a MERGE modifies only a tiny portion of the target rows. I do not think your test case is a valid real-world example, since it would only happen one time. Subsequent merges on changes would be a more accurate test.
I ran your test harness, only moving the location of the source and target tables outside of tempdb, and was not able to reproduce your results. MERGE performed better in a normally logged database.
October 28, 2013 at 9:36 am
Good write-up. I used to love MERGE when it was first introduced. I was bitten by a few bugs but worked through them. However, /* edit Aaron */ wrote a nice piece describing open issues with MERGE. I've known of or run into a few of the items on his list, which has further pushed me away from implementing any new MERGE code.
/* Anything is possible but is it worth it? */