September 14, 2012 at 12:57 pm
Hi,
I am getting duplicate rows from a Flat File source, and the OLE DB destination has a unique primary key on that column.
Which is the better way to remove the duplicates:
sort the column ascending and tick "remove rows with duplicate sort values", or
aggregate using a GROUP BY?
There are no more than 1000 rows per execution, so the data volume is very small.
I don't want to use a temp table, and I cannot use a SQL command in the OLE DB destination (an INSERT from a SELECT query).
Any ideas please?
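To illustrate the intent, the two options should be logically equivalent to something like the T-SQL below (the table and column names are made up, and I can't actually use a SQL command in the destination, so this is only an illustration):

-- Option 1: Sort on KeyCol with "remove rows with duplicate sort values"
-- keeps one arbitrary row per KeyCol, roughly:
SELECT KeyCol, Col1, Col2
FROM ( SELECT KeyCol, Col1, Col2,
              ROW_NUMBER() OVER (PARTITION BY KeyCol ORDER BY KeyCol) AS rn
       FROM   FlatFileRows ) AS s
WHERE  rn = 1;

-- Option 2: Aggregate with GROUP BY on all columns keeps one copy of each
-- distinct row, which gives the same result only when the duplicates are
-- exact copies of each other:
SELECT   KeyCol, Col1, Col2
FROM     FlatFileRows
GROUP BY KeyCol, Col1, Col2;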
------------------------------------------------------------------------------------
Ashish
September 14, 2012 at 1:59 pm
There's no good way out of this mess, unfortunately. What do you do when a row's identifier is repeated but contains different data in the rest of the columns? Take only the second row?
I would personally allow the process to run as normal, but use the row failure redirector to dump those that fail the key check into an error table that can be manually reviewed after the rest of the process is completed. You'll find the redirector in the error output settings of the OLE DB destination.
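If it helps, the error table only needs to mirror the flat file columns plus the error metadata the redirected output carries; something along these lines would do (a sketch only, with made-up table and column names, so adjust it to your file layout):

-- Quarantine table for rows redirected from the OLE DB destination's error output
CREATE TABLE dbo.FlatFileLoadErrors
(
    KeyCol      INT           NULL,  -- the column with the unique constraint
    Col1        VARCHAR(100)  NULL,
    Col2        VARCHAR(100)  NULL,
    ErrorCode   INT           NULL,  -- added by the error output
    ErrorColumn INT           NULL,  -- added by the error output
    LoadedAt    DATETIME      NOT NULL DEFAULT (GETDATE())
);

Set the destination's error disposition to "Redirect row" and point the error output at a second OLE DB destination that writes to this table, then review it once the load finishes.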
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
September 14, 2012 at 3:08 pm
True.
That's a design flaw in the source system.
And I am sure the duplicates are exact copies of the row, so there is no need to pick the max row either.
I will just use the Sort with "remove rows with duplicate sort values", or ignore/redirect the failed rows :w00t: creating another bad coding example to equalize the effect 🙂
------------------------------------------------------------------------------------
Ashish