November 18, 2008 at 7:27 am
I have an UPDATE statement with a FROM clause, and I am using variables to capture values and use them elsewhere in the statement...
Placing these variables is the difficult part; I keep putting them in the wrong place because I don't understand how the flow works for an UPDATE statement...
UPDATE TABLE1
SET @V1 = [COL1] = CASE WHEN @vEVAL = 1 THEN 1 ELSE 0 END
, [COL2] = @vEVAL
FROM TABLE1
If COL1 and COL2 are not in the right order you end up with bad data... so, when executed, which way does the UPDATE statement flow: top down or bottom up through each record???
November 18, 2008 at 7:51 am
Hi John
Can you describe in words what you're attempting to do with this part of the statement:
SET @V1 = [COL1] = CASE WHEN @vEVAL = 1 THEN 1 ELSE 0 END
, [COL2] = @vEVAL
Cheers
ChrisM
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
November 18, 2008 at 8:01 am
UPDATE TABLE1
SET
  @vEVAL = [COL3]
, @V1 = [COL1] = CASE WHEN @vEVAL = 1 THEN 1 ELSE 0 END
, [COL2] = @vEVAL
FROM TABLE1
What matters most is the flow, because if I don't have those in the right order my values won't evaluate properly. That's why I just put in dummy code; I have posted before and everyone loses focus on my question and tries to fix the code instead...
If the flow is from the bottom up, then [COL2] would be NULL, because @vEVAL hasn't been reached yet.
November 18, 2008 at 8:08 am
Hi John
I understand your frustration, which is why I asked if you could put into words what you are trying to achieve. Small changes in this kind of update will have very different consequences on the target table (or, more likely, none at all).
lose focus on my question and try to fix the code
So...what's the question? What are you trying to do?
Cheers
ChrisM
November 18, 2008 at 8:26 am
I have several different IDs; one ID could have 10 rows, another 15 rows, and so on...
This update statement does some setup for an Amortization, so I have to calculate how many days fall between an ID's row-1 pay date and its row-2 pay date, and so on, and at the same time keep a running count of rows per ID... thus, at each change in ID my count column restarts at 1 and my days-between column restarts at 0. I also have a variable for the ID, so I can track when it changes...
That count is needed to update the main table when setting up other things in the Amortization table.
And I always seem to get the placement wrong, because I don't understand the flow...
November 18, 2008 at 8:42 am
Looking good John, and making a lot more sense now. Next thing is for you to give us some sample data in the form of a table script, which contains all of the necessary columns to be able to test an update script against. Something like this...
CREATE TABLE #Test (RowID int, [id] int, ....)
INSERT INTO #Test (RowID, [id] ...)
SELECT 1, 4, ..... UNION ALL
SELECT 2, 4, ..... UNION ALL
SELECT 3, 7, ..... UNION ALL
SELECT 4, 7, .....
Cheers
ChrisM
November 18, 2008 at 8:59 am
IF OBJECT_ID('tempdb..##PS') IS NOT NULL
DROP TABLE ##PS
-- TEMP
CREATE TABLE ##PS
(
[ID_NUMBER] DECIMAL(25, 0) NOT NULL, [PAYMENT_DATE] DATETIME,
[AMOUNT] FLOAT,
[RECCNT] INT,
[AVG_PMT_FREQ] INT
)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110011, '2013-01-01 00:00:00.000', 726342.88)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110011, '2014-01-01 00:00:00.000', 1073941.8)
CREATE CLUSTERED INDEX IX_SP_ID_PD
ON ##PS ([ID_NUMBER], [PAYMENT_DATE])
DECLARE @PrevAcctID AS DECIMAL(25,0)
DECLARE @PRIORDATE AS DATETIME
DECLARE @CURRDATE AS DATETIME
DECLARE @PrevGrpCnt AS INT --Running count resets when account changes
SET @PrevGrpCnt = 0
UPDATE ##PS
SET
[RECCNT] = 1 -- [C].[CNT]
-- CASE
-- WHEN [P].[ID_NUMBER] = @PrevAcctID
-- THEN (CASE WHEN @PrevGrpCnt = 1 THEN 0 ELSE @CURRDATE END)
-- ELSE 0
-- END
-- DO CALC OF DATA DIFF
,
@PRIORDATE = @CURRDATE
,
[AVG_PMT_FREQ] = CASE
WHEN [P].[ID_NUMBER] = @PrevAcctID THEN
CASE
WHEN DATEDIFF(DAY, @PRIORDATE, 0) <> 0 AND @PRIORDATE IS NOT NULL
THEN ABS(DATEDIFF(DAY, @PRIORDATE, @CURRDATE))
ELSE 0
END
ELSE 0
END
,
@CURRDATE = CASE
WHEN [P].[ID_NUMBER] = @PrevAcctID
THEN CASE WHEN @PrevGrpCnt = 1 THEN 0 ELSE [P].[PAYMENT_DATE] END
ELSE [P].[PAYMENT_DATE]
END
,
@PrevGrpCnt = CASE
WHEN [P].[ID_NUMBER] = @PrevAcctID THEN @PrevGrpCnt + 1
ELSE 1 -- Restarts count at "1"
END
,
@PrevAcctID = [P].[ID_NUMBER]
FROM ##PS [P]
WITH (INDEX(IX_SP_ID_PD),TABLOCKX)
WHERE [ID_NUMBER] = 100000110011
SELECT * --[ID_NUMBER], COUNT([ID_NUMBER]) --[PAYMENT_DATE], [AMOUNT]
FROM ##PS
WITH (INDEX(IX_SP_ID_PD),TABLOCKX)
WHERE [ID_NUMBER] = 100000110011
November 18, 2008 at 9:27 am
Nice work John. Lastly, you want the results to look like this? Resetting for each ID_NUMBER?
ID_NUMBER PAYMENT_DATE AMOUNT RECCNT AVG_PMT_FREQ
------------ ----------------------- ---------- ------- ------------
100000110011 2013-01-01 00:00:00.000 726342.88 1 0
100000110011 2014-01-01 00:00:00.000 1073941.8 2 365
November 18, 2008 at 9:29 am
Yes sir, for both of the last two columns...
November 18, 2008 at 9:45 am
To specifically address the original question, so far as I've been able to tell, it's top down... with exceptions. Subqueries will NOT correlate. They will be evaluated pre-update and converted to derived tables, so will not re-evaluate for every row. At least none of the ones I've tried would. While this is good from a performance standpoint, it limits the implementation of the technique somewhat. You can use case statements against variables, or you can look up values in subqueries that don't rely on variables / prior affected rows in the update, but any subquery which depends on either of these factors will not work.
I don't have time at the moment to provide specific examples of what I'm talking about, but every implementation I've tried to do with this type of subquery has failed. The one I did for jcrawf02 the other day worked (well, it did what I intended it to do, it wasn't quite what he needed and needs to be modified) because the subquery did a count of rows that existed pre-update. This is fine, and works as a derived table. The versions I attempted for that performance and integrity issue a few weeks back failed for this reason.
With the top down approach in mind, I usually do something like
UPDATE SomeTable
SET @RCount = RCOUNT = CASE WHEN ID = @PreviousID THEN @RCount + 1 ELSE 1 END,
@PreviousID = ID
FROM SomeTable
The CASE is evaluated, then @PreviousID is set to the current ID; it moves to the next row and starts over.
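Under those assumed semantics, the same flow can be sketched outside SQL. This is a Python simulation of the described left-to-right, per-row evaluation order (an illustration only, not SQL Server internals; the row dicts stand in for table rows in clustered-index order):

```python
# Simulate the "quirky update" flow: rows are visited in (assumed)
# index order, and within each row the SET assignments run left to
# right, so prev_id still holds the PRIOR row's ID when the CASE
# equivalent is evaluated.
rows = [{"ID": 4}, {"ID": 4}, {"ID": 7}, {"ID": 7}, {"ID": 7}]

prev_id = None   # plays the role of @PreviousID
rcount = 0       # plays the role of @RCount

for row in rows:
    # @RCount = RCOUNT = CASE WHEN ID = @PreviousID THEN @RCount + 1 ELSE 1 END
    rcount = rcount + 1 if row["ID"] == prev_id else 1
    row["RCOUNT"] = rcount
    # @PreviousID = ID  -- runs after the CASE, so the NEXT row sees this ID
    prev_id = row["ID"]

print([r["RCOUNT"] for r in rows])  # [1, 2, 1, 2, 3]
```

The count restarts at 1 whenever the ID changes, exactly because the ID comparison happens before @PreviousID is reassigned.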
November 18, 2008 at 9:50 am
Hi John, I reckon this must be close...
IF OBJECT_ID('tempdb..##PS') IS NOT NULL
DROP TABLE ##PS
CREATE TABLE ##PS
([ID_NUMBER] DECIMAL(25, 0) NOT NULL,
[PAYMENT_DATE] DATETIME,
[AMOUNT] FLOAT,
[RECCNT] INT,
[AVG_PMT_FREQ] INT)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110011, '2013-01-01 00:00:00.000', 726342.88)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110011, '2014-01-01 00:00:00.000', 1073941.8)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110012, '2013-01-01 00:00:00.000', 726342.88)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110012, '2014-01-01 00:00:00.000', 1073941.8)
INSERT INTO ##PS ([ID_NUMBER], [PAYMENT_DATE], [AMOUNT])
VALUES(100000110012, '2014-01-03 00:00:00.000', 23.9)
CREATE CLUSTERED INDEX IX_SP_ID_PD
ON ##PS ([ID_NUMBER], [PAYMENT_DATE])
DECLARE @PrevAcctID AS DECIMAL(25,0), @PRIORDATE AS DATETIME, @RECCNT INT, @AVG_PMT_FREQ INT
SET @RECCNT = 1
UPDATE ##PS SET
@RECCNT = [RECCNT] = CASE WHEN @PrevAcctID = ID_NUMBER THEN @RECCNT + 1 ELSE 1 END,
@AVG_PMT_FREQ = AVG_PMT_FREQ = CASE WHEN @PrevAcctID = ID_NUMBER THEN DATEDIFF(DAY, @PRIORDATE, PAYMENT_DATE) ELSE 0 END,
@PrevAcctID = ID_NUMBER,
@PRIORDATE = PAYMENT_DATE
SELECT * FROM ##PS
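For a quick sanity check of the running-count and day-gap logic in that UPDATE, the same per-row arithmetic can be replayed in Python against the sample rows (a hypothetical re-implementation for verification, not part of the T-SQL solution):

```python
from datetime import date

# Sample rows in (ID_NUMBER, PAYMENT_DATE) clustered-index order,
# mirroring the INSERTs above.
rows = [
    (100000110011, date(2013, 1, 1)),
    (100000110011, date(2014, 1, 1)),
    (100000110012, date(2013, 1, 1)),
    (100000110012, date(2014, 1, 1)),
    (100000110012, date(2014, 1, 3)),
]

prev_id, prev_date, reccnt = None, None, 0
out = []
for id_number, pay_date in rows:
    # RECCNT: running count that restarts at 1 on each ID change
    reccnt = reccnt + 1 if id_number == prev_id else 1
    # AVG_PMT_FREQ: days since the prior payment, 0 on each ID change
    freq = (pay_date - prev_date).days if id_number == prev_id else 0
    out.append((id_number, reccnt, freq))
    prev_id, prev_date = id_number, pay_date

for r in out:
    print(r)
```

Both columns reset at the change of ID_NUMBER, matching the expected results posted earlier (1/0 then 2/365 for the first ID, and 1/0, 2/365, 3/2 for the second).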
November 18, 2008 at 10:24 am
OK, so it's top down... thanks for that...
I try to avoid correlated queries... That is why the previous update example I posted for this question is done the way it is... it's used here in this update statement... much, much faster...
UPDATE [A]
SET
  [LAST_PAYMENT_DATE] = [D].[LAST_PAYMENT_DATE]
, [NEXT_PAYMENT_DATE] = [D].[NEXT_PAYMENT_DATE]
, [CUR_PAYMENT] = [D].[CUR_PAYMENT]
, [PMT_FREQ] = [D].[AVG_PMT_FREQ]
FROM
(
    SELECT
      [P].[ID_NUMBER]
    , [P].[PAYMENT_DATE] AS [LAST_PAYMENT_DATE]
    , [N].[PAYMENT_DATE] AS [NEXT_PAYMENT_DATE]
    , [P].[AMOUNT] AS [CUR_PAYMENT]
    , [P].[RECCNT]
    , [N].[AVG_PMT_FREQ]
    FROM [##PS] [P]
    WITH (INDEX(IX_SP_ID_PD), TABLOCKX)
    INNER JOIN
    (
        SELECT [OPS].[ID_NUMBER], [OPS].[AVG_PMT_FREQ], [OPS].[PAYMENT_DATE]
        FROM [##PS] [OPS]
        WITH (INDEX(IX_SP_ID_PD), TABLOCKX)
        WHERE [OPS].[RECCNT] = 2
    ) [N]
    ON [P].[ID_NUMBER] = [N].[ID_NUMBER]
    WHERE [P].[RECCNT] = 1
) [D]
INNER JOIN
[dbo].[Amortization] [A]
-- NO INDEX CREATED YET CAUSE IT SLOWS DOWN DURING BIG INSERTS AND BLOWS OUT; PLUS IT'S ONLY NEEDED IN STAGE 5 FOR ORDERING
ON [A].[ID_NUMBER] = [D].[ID_NUMBER]
-- NOTICE ONLY ON OUR COMMERCIAL LEASE... SEE, IN STAGE 1 I WAS SMART AND TAGGED... DOH
WHERE [A].[TBL_SOURCEID] <> '1'
November 18, 2008 at 10:40 am
john.steinbeck (11/18/2008)
OK, so it's top down... thanks for that... I try to avoid correlated queries... That is why the previous update example I posted for this question is done the way it is... it's used here in this update statement... much, much faster...
You lost me. The example above is just a regular update statement, your first one was incorrect due to where you set @PriorDate, and I wasn't saying that you *should* use correlated subqueries, merely warning you that they wouldn't work if you ever tried.
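The order-sensitivity being pointed out here can be illustrated with a small sketch (a hypothetical Python analogue, not the original script): whether the "prior date" variable is advanced before or after the gap calculation determines whether the gaps come out right.

```python
# Day numbers for three payments within one ID, in index order.
dates = [1, 11, 14]

# Correct order: compute the gap BEFORE advancing prior_date,
# like putting @PriorDate = PAYMENT_DATE after the DATEDIFF line.
prior = None
gaps_ok = []
for d in dates:
    gaps_ok.append(d - prior if prior is not None else 0)
    prior = d

# Wrong order: advance prior_date FIRST, like setting @PriorDate
# earlier in the SET list, so every gap collapses to zero.
prior = None
gaps_bad = []
for d in dates:
    prior = d
    gaps_bad.append(d - prior if prior is not None else 0)

print(gaps_ok, gaps_bad)  # [0, 10, 3] [0, 0, 0]
```

The assignments are identical; only their position in the per-row sequence differs, which is exactly why moving @PriorDate within the SET list changed the results.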
November 18, 2008 at 10:54 am
I have tried to help with this particular problem as well. It is a form of running-total problem, and perhaps what would help is more than just a couple of rows of sample data. Multiple rows with a change in IDs would probably be much more beneficial in working the problem.
November 18, 2008 at 10:57 am
Also, besides providing the sample data, be sure to provide the expected results when the query is run. We need something to check against.