Bug or problem with a MERGE statement in SQL Server 2008 R2

  • I'm getting the following error:

    The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row.

    A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times.

    Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

    I'm not sure if it is a problem with my MERGE or a bug in SQL Server.

    I'm using SQL Server 2008 R2 SP1.

    I saw this link and now I'm wondering, but I'm not using 2008 SP1; I'm using 2008 R2 SP1:

    http://support.microsoft.com/kb/976316

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
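
    For illustration, here is a minimal sketch of the situation that error message describes; the #tgt/#src tables are hypothetical and not from the actual load:

    -- Hypothetical repro: two source rows match the same target row on the join key
    CREATE TABLE #tgt (ID int, Val varchar(10));
    CREATE TABLE #src (ID int, Val varchar(10));
    INSERT INTO #tgt VALUES (1, 'old');
    INSERT INTO #src VALUES (1, 'new1'), (1, 'new2');   -- duplicate key in the source

    MERGE #tgt AS t
    USING #src AS s
        ON t.ID = s.ID
    WHEN MATCHED THEN
        UPDATE SET t.Val = s.Val;
    -- Fails with: "The MERGE statement attempted to UPDATE or DELETE the same row more than once..."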

  • Welsh Corgi (5/15/2012)


    I'm getting the following error:

    The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row.

    A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times.

    Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

    I'm not sure if it is a problem with my MERGE or a bug in SQL Server.

    I'm using SQL Server 2008 R2 SP1.

    I saw this link and now I'm wondering, but I'm not using 2008 SP1; I'm using 2008 R2 SP1:

    http://support.microsoft.com/kb/976316

    Really hard to answer your question when some of the necessary information is only on your system. Have you queried your data to determine if the error message is actually valid based on what you are attempting to accomplish using MERGE?

  • Welsh Corgi (5/15/2012)


    I'm getting the following error:

    The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row.

    A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times.

    Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

    I'm not sure if it is a problem with my MERGE or a bug in SQL Server.

    I'm using SQL Server 2008 R2 SP1.

    I saw this link and now I'm wondering, but I'm not using 2008 SP1; I'm using 2008 R2 SP1:

    http://support.microsoft.com/kb/976316

    Really Welsh?! 🙂 How can we possibly offer any guidance or help with what you gave us? Without any table information or sample data, I say it is because there is more than one row in the source that matches the target given the conditions you specified. Of course, you figured that out already from the error...

    Jared
    CE - Microsoft

  • Lynn Pettis (5/15/2012)


    Welsh Corgi (5/15/2012)


    I'm getting the following error:

    The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row.

    A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times.

    Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

    I'm not sure if it is a problem with my MERGE or a bug in SQL Server.

    I'm using SQL Server 2008 R2 SP1.

    I saw this link and now I'm wondering, but I'm not using 2008 SP1; I'm using 2008 R2 SP1:

    http://support.microsoft.com/kb/976316

    Really hard to answer your question when some of the necessary information is only on your system. Have you queried your data to determine if the error message is actually valid based on what you are attempting to accomplish using MERGE?

    Sorry to get you mad at me.

    Valid point; not enough information.

    I only took someone else's script, created SPs and SSIS packages, and I get the error.

    I asked the developer to check out the data, as you suggested.

    Thank you and sorry for the lack of information.

    Have a nice day.

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • I don't think anybody's mad. We want to help and, probably more importantly, learn. We just couldn't do much with the error except for take it at its word. Let us know what you find.

    Jared
    CE - Microsoft

  • SQLKnowItAll (5/15/2012)


    I don't think anybody's mad. We want to help and, probably more importantly, learn. We just couldn't do much with the error except for take it at its word. Let us know what you find.

    Thanks for your patience.

    I got a lot of these MERGE statements and other OPENQUERY statements handed off to me with issues, and I converted as many of the OPENQUERY statements as possible to data transformations to narrow the load window.

    There is no error handling. (A basic TRY/CATCH sketch follows after this post.)

    They are working from their Development instance, so I'm in the process of backing up and restoring the database to a QA instance so that they can identify firsthand what the issues are.

    Are the staging tables you are loading from all at the same "point in time", meaning you have a complete set where no updates were occurring on the legacy system while staging was being loaded?

    I will look at this on my own in the morning; I start early and I'm tired.

    Have a nice day.

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
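
    Since there is no error handling yet, here is a minimal sketch of a TRY/CATCH wrapper that logs and re-raises the MERGE failure on 2008 R2; the table, column, and log-table names are assumptions, not the actual objects from this system:

    BEGIN TRY
        BEGIN TRANSACTION;

        MERGE dbo.TargetTable AS t               -- assumed target
        USING dbo.StagingTable AS s              -- assumed staging source
            ON t.ID = s.ID
        WHEN MATCHED THEN
            UPDATE SET t.Val = s.Val
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (ID, Val) VALUES (s.ID, s.Val);

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

        -- Assumed logging table; capture what failed and when
        INSERT INTO dbo.LoadErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSDATETIME());

        -- Re-raise so the calling package still sees the failure (THROW is not available until 2012)
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH;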

  • BTW, all of the tables have a million or more records.

    If I provided sample data, then I would basically need to find the offending records first. 🙂

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • Assuming it is indexed properly, 10 million records should not be an issue. Let's say the join condition is called ID... Just do a select from the target table like this:

    SELECT ID, COUNT(ID)
    FROM targetTable
    GROUP BY ID
    HAVING COUNT(ID) > 1

    That will tell you the offending records. (A sketch for pulling back the full duplicate rows follows after this post.)

    Jared
    CE - Microsoft
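
    A possible follow-up, again assuming a single join column named ID (adjust for the real key): once that query flags a duplicated ID, join back to pull the full rows behind it for inspection.

    -- List the complete rows behind any duplicated join key
    SELECT t.*
    FROM targetTable AS t
    INNER JOIN (
        SELECT ID
        FROM targetTable
        GROUP BY ID
        HAVING COUNT(*) > 1
    ) AS dup
        ON dup.ID = t.ID
    ORDER BY t.ID;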

  • I got an e-mail telling me to install a Cumulative Update for SQL Server 2008 SP1.

    I mentioned that I could not apply that patch because we are already at SQL Server 2008 R2 SP1 x64.

    This is the article that they referred to, which I listed in my OP:

    http://support.microsoft.com/kb/976316

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • SQLKnowItAll (5/15/2012)


    Assuming it is indexed properly, 10 million records should not be an issue. Let's say the join condition is called ID... Just do a select from the target table like this:

    SELECT ID, COUNT(ID)
    FROM targetTable
    GROUP BY ID
    HAVING COUNT(ID) > 1

    That will tell you the offending records.

    Jared,

    I e-mailed the Lead Architect on the Team and he did not challenge me. :satisfied:

    Thanks for your reply. Almost all of the tables have natural composite primary keys based on many columns. We did not design this legacy system.

    I guess that I or they could concatenate the primary key columns and insert them into a temp table.

    How do you dissect the INSERT, UPDATE, and MERGE to capture the error? (See the sketch after this post.)

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
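
    As a sketch of both ideas: the duplicate check can group by all of the key columns directly (no need to concatenate them), and the OUTPUT clause can show which rows each branch of the MERGE touched. KeyCol1/KeyCol2/KeyCol3, SomeCol, and the table names below are placeholders, not the real composite key:

    -- Duplicate check against the source, grouped on the full (assumed) composite key
    SELECT KeyCol1, KeyCol2, KeyCol3, COUNT(*) AS RowsPerKey
    FROM dbo.SourceStagingTable
    GROUP BY KeyCol1, KeyCol2, KeyCol3
    HAVING COUNT(*) > 1;

    -- OUTPUT captures exactly what the MERGE did, row by row
    MERGE dbo.TargetTable AS t
    USING dbo.SourceStagingTable AS s
        ON  t.KeyCol1 = s.KeyCol1
        AND t.KeyCol2 = s.KeyCol2
        AND t.KeyCol3 = s.KeyCol3
    WHEN MATCHED THEN
        UPDATE SET t.SomeCol = s.SomeCol
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol1, KeyCol2, KeyCol3, SomeCol)
        VALUES (s.KeyCol1, s.KeyCol2, s.KeyCol3, s.SomeCol)
    OUTPUT $action, inserted.KeyCol1, inserted.KeyCol2, inserted.KeyCol3;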

  • SQLKnowItAll (5/15/2012)


    Assuming it is indexed properly, 10 million records should not be an issue. Let's say the join condition is called ID... Just do a select from the target table like this:

    SELECT ID, COUNT(ID)
    FROM targetTable
    GROUP BY ID
    HAVING COUNT(ID) > 1

    That will tell you the offending records.

    The Lead Architect from the consulting company's response to your code is as follows:

    Yep, we are well familiar with that snippet of code. But that’s the rub. We aren’t finding any offending rows of data…

    We will run the script, get the error message, re-run it and it runs successfully…crazy.

    For better, quicker answers on T-SQL questions, click on the following...
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • Welsh Corgi (5/24/2012)


    SQLKnowItAll (5/15/2012)


    Assuming it is indexed properly, 10 million records should not be an issue. Let's say the join condition is called ID... Just do a select from the target table like this:

    SELECT ID, COUNT(ID)
    FROM targetTable
    GROUP BY ID
    HAVING COUNT(ID) > 1

    That will tell you the offending records.

    The Lead Architect from the consulting company's response to your code is as follows:

    Yep, we are well familiar with that snippet of code. But that’s the rub. We aren’t finding any offending rows of data…

    We will run the script, get the error message, re-run it and it runs successfully…crazy.

    If you aren't finding dupes that way (and you have verified that for all tables involved), then either the merge/join condition(s) themselves are introducing a Cartesian product, or your conditions are indeed allowing a row to be hit under multiple conditions, OR (not sure about this) one part of the merge is making a data change that causes it to be processed later under a different part of the merge. I would consider that last one a flaw in the merge logic, similar to the double counting allowed under NOLOCK SELECT statements. But in this case it would be more like Halloween protection, should be guarded against in the engine, and would thus be considered a bug.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
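
    If the source genuinely can carry more than one row per key, one common workaround is to let the USING clause pick a single row per key with ROW_NUMBER(). This is only a sketch with assumed names (ID, Val, LoadDate), not the thread's actual code, and the ORDER BY is the tie-breaker you would have to choose deliberately:

    MERGE dbo.TargetTable AS t
    USING (
        SELECT ID, Val
        FROM (
            SELECT ID, Val,
                   ROW_NUMBER() OVER (PARTITION BY ID ORDER BY LoadDate DESC) AS rn
            FROM dbo.SourceTable
        ) AS x
        WHERE x.rn = 1                    -- keep exactly one source row per key
    ) AS s
        ON t.ID = s.ID
    WHEN MATCHED THEN
        UPDATE SET t.Val = s.Val
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ID, Val) VALUES (s.ID, s.Val);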

  • --check whether the source has some duplicate values
    SELECT st.ID, COUNT(st.ID)
    FROM sourceTable st
    GROUP BY st.ID
    HAVING COUNT(st.ID) > 1

    --check whether there is an issue with the join clause that somehow retrieves duplicate data
    SELECT st.ID, COUNT(st.ID)
    FROM targetTable tt
    INNER JOIN sourceTable st ON tt.ID = st.ID
    GROUP BY st.ID
    HAVING COUNT(st.ID) > 1

    This is basically to test whether your join is producing a Cartesian product, is joining on a non-unique column in the source table, or has some issue in the join condition between the source and target tables that is generating duplicate values.

    GulliMeel

    Finding top n Worst Performing queries
    Improve the performance of Merge Join (special case)
    How to Post Performance Problem - Gail Shaw
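
    Because the duplicate checks and the MERGE run at different times, a failure that disappears on re-run can also mean the source was still changing between the check and the load. One way to rule that out, sketched with assumed names, is to snapshot the source first so the check and the MERGE see exactly the same rows:

    -- Freeze the source rows into a temp table
    SELECT ID, Val
    INTO #SourceSnapshot
    FROM dbo.SourceTable;

    -- Run the duplicate check against the snapshot
    SELECT ID, COUNT(*) AS RowsPerKey
    FROM #SourceSnapshot
    GROUP BY ID
    HAVING COUNT(*) > 1;

    -- ...and MERGE from the snapshot rather than the live table
    MERGE dbo.TargetTable AS t
    USING #SourceSnapshot AS s
        ON t.ID = s.ID
    WHEN MATCHED THEN
        UPDATE SET t.Val = s.Val
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ID, Val) VALUES (s.ID, s.Val);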
