June 2, 2011 at 11:07 am
I have a query that generates rows with columns (A,B,C,D), which I want to insert into a table (X) with columns (A,B,C) so that the table does not contain duplicate values in (A,B,C), all the while capturing the @@IDENTITY of each inserted row so that I can insert into a second table (Y) with columns (D, FK_Identity).
I was considering using an index covering (A,B,C) on (X), choosing to ignore duplicates. While this would help me avoid duplicates, when a duplicate comes up I would still need the identity of the first record so that I could insert into (Y) for (D, FK_Identity). I was considering using a trigger, but this is going to take place over many millions of rows, and I have to think I'm going to run into too much overhead.
I was originally inserting distinct (A,B,C) into a staging table (Z) with a null FK_Identity column, which I would then populate when inserting into (X) and retrieving the @@IDENTITY. Then, in order to populate (Y), I had to select (A,B,C,D), joining table (Z) on (A,B,C) to get FK_Identity. This was slow, since A, B, and C are nchar values. I have to think there is a better solution.
Any ideas or suggestions?
June 2, 2011 at 11:12 am
Check out the OUTPUT clause, available in both INSERT and UPDATE.
I'd use DISTINCT over IGNORE_DUP_KEY... I want to see what's going on!
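A minimal sketch of how OUTPUT could wire this together. The table and column names (X, Y, Id, Source) are placeholders based on the thread's description, and it assumes SQL Server 2005 or later:

```sql
-- Assumed placeholder schemas:
--   X (Id INT IDENTITY PRIMARY KEY, A NCHAR(10), B NCHAR(10), C NCHAR(10)) -- unique on (A,B,C)
--   Y (D NCHAR(10), FK_Identity INT)
--   Source: whatever query produces the (A,B,C,D) rows

DECLARE @map TABLE (Id INT, A NCHAR(10), B NCHAR(10), C NCHAR(10));

-- Insert the distinct new keys, capturing each generated identity
-- together with its natural key via the OUTPUT clause.
INSERT INTO X (A, B, C)
OUTPUT inserted.Id, inserted.A, inserted.B, inserted.C INTO @map
SELECT DISTINCT s.A, s.B, s.C
FROM Source s
WHERE NOT EXISTS (SELECT 1 FROM X x
                  WHERE x.A = s.A AND x.B = s.B AND x.C = s.C);

-- Join the source rows back to the captured identities to populate Y.
INSERT INTO Y (D, FK_Identity)
SELECT s.D, m.Id
FROM Source s
JOIN @map m ON m.A = s.A AND m.B = s.B AND m.C = s.C;
```

One caveat: OUTPUT only sees rows inserted by that statement, so any (A,B,C) that already existed in X before the load would still need a join against X itself to find its FK_Identity.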
June 2, 2011 at 11:19 am
Argh! OUTPUT! That's exactly what I was looking for! Thanks... yeah, I think I'm gonna have to go with the distincts.