October 19, 2005 at 10:47 am
I can eliminate duplicate rows by using the DISTINCT keyword, but how can I select the rows that are duplicated?
October 19, 2005 at 10:56 am
SELECT with GROUP BY and HAVING COUNT(*) > 1.
This will give you both records...
I wasn't born stupid - I had to study.
October 19, 2005 at 4:37 pm
Add a count to test for duplicates and more:
Select duplicating_value, Count(*)
From myTable
Group By duplicating_value
Having Count(*) > 1
Results:
KORY 5
MARK 2
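If you want something you can run directly, here is a minimal, self-contained sketch; the #names temp table and its name column are invented for the example and are not the original poster's schema:
Create Table #names (name varchar(50))
Insert Into #names Values ('KORY')
Insert Into #names Values ('KORY')
Insert Into #names Values ('KORY')
Insert Into #names Values ('KORY')
Insert Into #names Values ('KORY')
Insert Into #names Values ('MARK')
Insert Into #names Values ('MARK')
Insert Into #names Values ('JANE')
-- list only the values that occur more than once, together with how often they occur
Select name, Count(*) As dupeCount
From #names
Group By name
Having Count(*) > 1
Drop Table #names
The Having clause filters the groups after aggregation, so only the duplicated values (KORY 5, MARK 2) come back; JANE, which appears only once, is filtered out.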
October 19, 2005 at 8:27 pm
Good thinking!
I wasn't born stupid - I had to study.
October 20, 2005 at 2:50 am
Also, if you want to review the entire rows that hold the dupe keys for analysis, you can do something like this:
select *
from myTable
where dupeValue in ( select dupeValue
                     from myTable
                     group by dupeValue
                     having count(*) > 1 )
order by dupeValue
Or, for very large volumes, if IN proves to be slow:
select t.*
from myTable t
join ( select dupeValue
       from myTable
       group by dupeValue
       having count(*) > 1 ) x
  on x.dupeValue = t.dupeValue
order by t.dupeValue
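If window functions are available (SQL Server 2005), yet another way to pull back the entire duplicated rows without referencing the table twice is a sketch like this, again assuming the same myTable/dupeValue names used above:
-- count the rows per dupeValue alongside every row, then keep only the keys occurring more than once
select d.*
from ( select *,
              count(*) over (partition by dupeValue) as dupeCount
       from myTable ) d
where d.dupeCount > 1
order by d.dupeValue
Whether it actually beats the IN or join versions depends on indexes and volumes, so treat it as just one more alternative to test.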
/Kenneth
October 20, 2005 at 4:48 am
Is there an echo in here!????
October 20, 2005 at 5:22 am
..and you intended to add something useful to the thread...?
/Kenneth
October 20, 2005 at 5:30 am
What... rather than the copy and paste you did from the copy and paste before that?!
October 20, 2005 at 5:35 am
Well, you've lost me. If you indeed have something to say, just speak your mind in clear terms instead of playing around.
/Kenneth
October 20, 2005 at 5:44 am
I think you might have been lost before that... seeing as, by the time you came into the thread, it had already been answered twice. And lo and behold, you decide to answer it again with the exact same query?!
October 20, 2005 at 5:51 am
Whoa! I think Kenneth just offered another alternative (a derived table select)...
Personally, I like having a plethora of alternatives...
I wasn't born stupid - I had to study.
October 20, 2005 at 5:58 am
I can see that this leads nowhere, so it's probably just as well to end the discussion.
I might suggest that you, dear sir, read the entirety of the posts you feel inclined to comment upon, especially if your comments do not themselves add any value to the question at hand.
I did not answer the question posed by the original poster. I did make an additional note about what is usually the next question when dealing with duplicate data, which is also clearly stated in the first line:
Also, if you want to review the entire rows that hold the dupe keys for analysis, you can do something like this:
If you can find that 'answered' earlier in the thread, I do apologize, because I must be blind since I couldn't see it.
Thank you.
/Kenneth