November 14, 2017 at 8:25 am
I have a simple statement:
USE iMIS_Dev
GO
--BEGIN TRAN
DECLARE @ID INT
DECLARE @CODE VARCHAR(15)
DECLARE @BOOK VARCHAR(15)
DECLARE @DATE DATE
DECLARE @ORDER VARCHAR(25)
DECLARE @COUNTER SMALLINT
DECLARE @FETCH_STATUS INT
DECLARE @SEQN INT
DECLARE @CNT INT
DECLARE @EXIT BIT
SET @ID = 233195
SET @CODE = '2170526B07'
SET @BOOK = '1970314B01'
SET @DATE = '2017-09-04 19:45:00.000'
SET @ORDER = 'SUPPLEMENT'
DECLARE INNER_CURSOR CURSOR
FOR
SELECT SEQN FROM [dbo].[UD_User_Supplements]
WHERE ID = @ID
AND SUP_PRODUCT_CODE = @CODE
AND ORIGINAL_BOOK = @BOOK
AND SUP_SHIP_DATE = @DATE
AND ORDER_TYPE = @ORDER
OPEN INNER_CURSOR
SET @CNT = 1
SET @EXIT = 0
FETCH NEXT FROM INNER_CURSOR INTO @SEQN
--select @SEQN
WHILE (@@FETCH_STATUS = 0) AND (@EXIT = 0)
BEGIN
    --IF (@@FETCH_STATUS <> -2)
    IF @CNT = 1
        DELETE FROM [dbo].[UD_User_Supplements] WHERE SEQN = @SEQN AND ID = @ID
    SET @CNT = 0
    SET @EXIT = 1
    FETCH NEXT FROM INNER_CURSOR INTO @SEQN
END
CLOSE INNER_CURSOR
DEALLOCATE INNER_CURSOR
The purpose is to delete duplicate records. When I declare INNER_CURSOR and use hard-coded data (e.g. WHERE ID = 233195, etc.) it works fine, but when I use variables as in this example it does not. I could not figure out what is wrong with my code. Any suggestions? Thank you.
November 14, 2017 at 8:30 am
rkordonsky 63916 - Tuesday, November 14, 2017 8:25 AM
Why are you using a CURSOR here?
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
November 14, 2017 at 8:37 am
Phil Parkin - Tuesday, November 14, 2017 8:30 AM: Why are you using a CURSOR here?
I am using a cursor here because I have an OUTER_CURSOR that finds the sets of duplicate records, and after that I have to use INNER_CURSOR to find SEQN (the unique record identifier). SEQN is different in each record. That is why.
November 14, 2017 at 8:41 am
rkordonsky 63916 - Tuesday, November 14, 2017 8:37 AM
There is no need for a CURSOR here. There is a very efficient method for finding and deleting duplicates using a CTE with ROW_NUMBER(). If you post sample data, we can help you with it.
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
November 14, 2017 at 8:54 am
drew.allen - Tuesday, November 14, 2017 8:41 AM
Here are 10 records as an example:
99008 912272 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952618 36147221 SUPPLEMENT 0 0x0000000147DCE88C
99008 912465 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952618 36147221 SUPPLEMENT 0 0x0000000147DCE94F
96829 912161 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952617 36147220 SUPPLEMENT 0 0x0000000147DCE81D
96829 912539 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952617 36147220 SUPPLEMENT 0 0x0000000147DCE999
9597 912357 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952616 36147219 SUPPLEMENT 0 0x0000000147DCE8E1
9597 912608 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952616 36147219 SUPPLEMENT 0 0x0000000147DCE9DE
95263 912230 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952615 36147218 SUPPLEMENT 0 0x0000000147DCE862
95263 912412 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952615 36147218 SUPPLEMENT 0 0x0000000147DCE91A
77296 912147 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952614 36147217 SUPPLEMENT 0 0x0000000147DCE80F
77296 912527 2170526B07 2017-09-04 19:45:00.000 NULL 0.00 1 1970314B01 952614 36147217 SUPPLEMENT 0 0x0000000147DCE98D
The second column contains SEQN (the unique identifier).
November 14, 2017 at 8:55 am
drew.allen - Tuesday, November 14, 2017 8:41 AM
As a quick example for the OP:
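A minimal sketch only, assuming the table and column names from your original cursor query ([dbo].[UD_User_Supplements] with SEQN, ID, SUP_PRODUCT_CODE, ORIGINAL_BOOK, SUP_SHIP_DATE, ORDER_TYPE):

```sql
-- Sketch only: table and column names assumed from the cursor query above.
-- ROW_NUMBER() numbers each row within its duplicate group; rn = 1 is the
-- row with the lowest SEQN in the group, and every rn > 1 is a duplicate.
WITH Dups AS
(
    SELECT SEQN,
           ROW_NUMBER() OVER (PARTITION BY ID, SUP_PRODUCT_CODE, ORIGINAL_BOOK,
                                           SUP_SHIP_DATE, ORDER_TYPE
                              ORDER BY SEQN) AS rn
    FROM [dbo].[UD_User_Supplements]
)
DELETE FROM Dups
WHERE rn > 1;
```

No cursor needed; deleting through the CTE removes the underlying rows in a single statement.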
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
November 14, 2017 at 9:05 am
rkordonsky 63916 - Tuesday, November 14, 2017 8:54 AM
Look at my signature for how to post sample data. We have no idea what these columns are or which ones are used to determine duplicates. You should also post the expected outcome, because we don't know which record you want to retain when there are duplicates.
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
November 14, 2017 at 9:19 am
drew.allen - Tuesday, November 14, 2017 9:05 AM
This is probably a better example:
SEQN ID CODE BOOKNO DATE ORDER_TYPE
----------------------------------------------------------------------------------------------------------------------------
912272 99008 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912465 99008 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912161 96829 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912539 96829 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912357 9597 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912608 9597 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912230 95263 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
912412 95263 2170526B07 1970314B01 2017-09-04 19:45:00.000 SUPPLEMENT
The combination of columns 2 through 6 shows the duplicates. Column 1 (SEQN) identifies which of the duplicate records should be deleted.
November 14, 2017 at 9:21 am
rkordonsky 63916 - Tuesday, November 14, 2017 9:19 AM
You obviously didn't read the link in my signature.
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
November 14, 2017 at 9:37 am
/* THIS IS HOW YOU SET UP SAMPLE DATA */
/* 1) Create a table, preferably a temp table or table variable.
This gives a clear indication of the columns and data types.
I've had to guess at your data types here. */
DECLARE @UD_User_Supplements TABLE
(
SEQN INT
, ID INT
, CODE CHAR(10)
, BOOKNO CHAR(10)
, [DATE] DATETIME
, ORDER_TYPE CHAR(10)
)
;
/* 2) Use INSERT command to insert sample data into the table. */
INSERT @UD_User_Supplements(SEQN, ID, CODE, BOOKNO, [DATE], ORDER_TYPE)
VALUES
('912272', '99008', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912465', '99008', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912161', '96829', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912539', '96829', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912357', '9597', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912608', '9597', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912230', '95263', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
, ('912412', '95263', '2170526B07', '1970314B01', '2017-09-04 19:45:00.000', 'SUPPLEMENT')
;
/* 3) Supply expected results.
This allows people to test their code to make sure it matches what you expect. */
Here is the code. Since you have not provided expected output, I have no idea how closely this matches your expected results.
;
WITH Supplements_Ordered AS
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY ID, CODE, BOOKNO, [DATE], ORDER_TYPE ORDER BY SEQN) AS rn
FROM @UD_User_Supplements
)
DELETE Supplements_Ordered
WHERE rn > 1
;
SELECT *
FROM @UD_User_Supplements
Edit: Corrected misspelling.
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
November 14, 2017 at 9:59 am
drew.allen - Tuesday, November 14, 2017 9:21 AM
My apologies, Drew. I hope this is what I should post as an example:
--===== If the test table already exists, drop it
IF OBJECT_ID('TempDB..#mytable','U') IS NOT NULL
DROP TABLE #mytable
--===== Create the test table with
CREATE TABLE #mytable
(
SEQN INT IDENTITY(1,1) PRIMARY KEY CLUSTERED, --Is an IDENTITY column on real table
ID INT,
MYCODE VARCHAR(15),
MYBOOK VARCHAR(15),
MYDATE DATETIME,
MYORDER VARCHAR(25)
)
--===== Setup any special required conditions especially where dates are concerned
SET DATEFORMAT YMD
--===== All Inserts into the IDENTITY column
SET IDENTITY_INSERT #mytable ON
--===== Insert the test data into the test table
INSERT INTO #mytable
(SEQN, ID, MYCODE, MYBOOK, MYDATE, MYORDER)
SELECT '912272','99008','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912465','99008','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912161','96829','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912539','96829','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912357','9597','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912608','9597','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912230','95263','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT' UNION ALL
SELECT '912412','95263','2170526B07','1970314B01','2017-09-04 19:45:00.000','SUPPLEMENT'
--===== Set the identity insert back to normal
SET IDENTITY_INSERT #mytable OFF
November 14, 2017 at 11:22 am
drew.allen - Tuesday, November 14, 2017 9:37 AM
Thank you so much, Drew! It produced the desired result in the temp table; now I will have to apply this logic to my table in the DEV database.