March 25, 2011 at 8:38 am
Hello all, I am trying to quickly store the previous day's value in the current day's record. So the goal is for the table to store Account, Day, and Market Value, along with a Running Total and what yesterday's Market Value was.
Here is the code to produce the results, and though the running total works, I can't figure out why the update of yesterday's Market Value doesn't work; it just repeats today's. Hopefully it's something simple I'm doing wrong.
Thanks in advance for the help.
declare @DayID_in int=-1,
@AccountID_in int=-1,
@YesterdayMV float=0.0,
@RunningTotalMV float=0.0,
@NewGroup_bt bit=0
create table #AccountClassInputs(
AccountID_in int NOT NULL,
DayID_in int not null,
MarketValue_fl float NOT NULL,
YesterdayMarketValue_fl float not null,
RunningTotalMV float not null
)
CREATE UNIQUE CLUSTERED INDEX [PK_AccountClassInputs] ON #AccountClassInputs
(
[AccountID_in] ASC,
[DayID_in] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
insert into #AccountClassInputs
values (1,1,10000,0,0)
insert into #AccountClassInputs
values (1,2,11000,0,0)
insert into #AccountClassInputs
values (1,3,12000,0,0)
insert into #AccountClassInputs
values (2,1,11000,0,0)
insert into #AccountClassInputs
values (2,2,12000,0,0)
insert into #AccountClassInputs
values (2,3,13000,0,0)
insert into #AccountClassInputs
values (2,4,14000,0,0)
insert into #AccountClassInputs
values (2,5,15000,0,0)
update #AccountClassInputs
set
@DayID_in=DayID_in, -- anchor column needed to ensure consistency
@NewGroup_bt=CASE WHEN @AccountID_in<>AccountID_in THEN 1 ELSE 0 END,
@RunningTotalMV=CASE WHEN @NewGroup_bt=1 THEN 0 ELSE @YesterdayMV + MarketValue_fl END,
YesterdayMarketValue_fl=@YesterdayMV,
@YesterdayMV=CASE WHEN @NewGroup_bt=1 THEN 0 ELSE MarketValue_fl END,
RunningTotalMV=@RunningTotalMV,
@AccountID_in=AccountID_in
FROM #AccountClassInputs OPTION (MAXDOP 1)
select * from #AccountClassInputs
drop table #AccountClassInputs
March 25, 2011 at 9:41 am
Honestly, for sequential code like this and other running totals problems, I avoid the quirky update and use CLR assemblies, or a Static or Fast Forward cursor. Those are simpler, more flexible solutions, and don't rely on lack of parallelism, et al, to complete correctly. And they're just as fast.
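For reference, a minimal sketch of the cursor approach applied to the running-total problem above, using the OP's temp table and column names. This is an illustration of the technique, not tested code; it assumes the clustered index order (AccountID_in, DayID_in) and SQL 2008's inline variable initialization.

```sql
-- Sketch: static, forward-only, read-only cursor computing the running total
-- and carrying yesterday's market value, reset per account.
DECLARE @AccountID int, @DayID int, @MV float,
        @PrevAccountID int = -1, @PrevMV float = 0.0, @RunningTotal float = 0.0;

DECLARE cur CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY FOR
    SELECT AccountID_in, DayID_in, MarketValue_fl
    FROM #AccountClassInputs
    ORDER BY AccountID_in, DayID_in;

OPEN cur;
FETCH NEXT FROM cur INTO @AccountID, @DayID, @MV;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @AccountID <> @PrevAccountID  -- new account: reset the carried values
        SELECT @PrevMV = 0.0, @RunningTotal = 0.0;

    SET @RunningTotal = @RunningTotal + @MV;

    UPDATE #AccountClassInputs
    SET YesterdayMarketValue_fl = @PrevMV,
        RunningTotalMV = @RunningTotal
    WHERE AccountID_in = @AccountID
      AND DayID_in = @DayID;

    SELECT @PrevMV = @MV, @PrevAccountID = @AccountID;
    FETCH NEXT FROM cur INTO @AccountID, @DayID, @MV;
END
CLOSE cur;
DEALLOCATE cur;
```

The per-row UPDATE against the clustered key is what makes this slower than a single-pass quirky update on large tables, which is the trade-off debated later in the thread.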
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 25, 2011 at 10:51 am
Just to verify - you're using SQL 2008? I ask because you posted in the SQL 7/2000 forum, but your code has SQL 2008 syntax in the DECLARE statement.
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
March 25, 2011 at 10:52 am
I tried a static, forward-only, read-only cursor, and on millions of rows it is extremely slow compared to the update. I have not tried CLR, but it may be worth looking into. Are there any examples of running totals with CLR out there? I'll do a Google check, but if you know of a good one...
March 25, 2011 at 10:52 am
Yes, 2008 - sorry, I didn't notice I'd posted in the wrong group...
March 25, 2011 at 11:56 am
What will be consuming your results? Doing this kind of work outside the DB is also a viable option. Most reporting tools have built-in functionality well-suited for solving running totals problems.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
March 25, 2011 at 12:00 pm
opc.three (3/25/2011)
What will be consuming your results? Doing this kind of work outside the DB is also a viable option. Most reporting tools have built-in functionality well-suited for solving running totals problems.
Many different things will ultimately be using the data; this is just one step in creating the results I need. This will be operating on several million rows, so it needs to be fast.
March 25, 2011 at 12:10 pm
1. Is the end result the output from the select, or do you need those two columns updated in the table?
2. Are the DayID_in values always going to be consecutive for an AccountID_in value?
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
March 25, 2011 at 12:14 pm
WayneS (3/25/2011)
1. Is the end result the output from the select, or do you need those two columns updated in the table?
2. Are the DayID_in values always going to be consecutive for an AccountID_in value?
1) They need to be updated in the table, as these values will be used for various "rolled up" computations.
2) Yes, DayIDs are consecutive for each different AccountID, though they won't necessarily have the same date ranges.
March 25, 2011 at 12:21 pm
So I got a little closer with the following code change. I added a second variable into the mix, and it works except that it gets day 2's "yesterday" value wrong for every account.
I'm clearly missing something about how the variables are updated in the context of the update statement. I tried reading Jeff's article again but don't see what I'm doing wrong.
Here's the new code
declare @DayID_in int=-1,
@AccountID_in int=-1,
@TodayMV float=0.0,
@YesterdayMV float=0.0,
@RunningTotalMV float=0.0,
@NewGroup_bt bit=0
create table #AccountClassInputs(
AccountID_in int NOT NULL,
DayID_in int not null,
MarketValue_fl float NOT NULL,
YesterdayMarketValue_fl float not null,
RunningTotalMV float not null
)
CREATE UNIQUE CLUSTERED INDEX [PK_AccountClassInputs] ON #AccountClassInputs
(
[AccountID_in] ASC,
[DayID_in] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
insert into #AccountClassInputs
values (1,1,10000,0,0)
insert into #AccountClassInputs
values (1,2,11000,0,0)
insert into #AccountClassInputs
values (1,3,12000,0,0)
insert into #AccountClassInputs
values (2,1,11000,0,0)
insert into #AccountClassInputs
values (2,2,12000,0,0)
insert into #AccountClassInputs
values (2,3,13000,0,0)
insert into #AccountClassInputs
values (2,4,14000,0,0)
insert into #AccountClassInputs
values (2,5,15000,0,0)
update #AccountClassInputs
set
@DayID_in=DayID_in, -- anchor column needed to ensure consistency
@NewGroup_bt=CASE WHEN @AccountID_in<>AccountID_in THEN 1 ELSE 0 END,
@RunningTotalMV=CASE WHEN @NewGroup_bt=1 THEN MarketValue_fl ELSE @RunningTotalMV + MarketValue_fl END,
@TodayMV=CASE WHEN @NewGroup_bt=1 THEN 0 ELSE @YesterdayMV END,
@YesterdayMV=CASE WHEN @NewGroup_bt=1 THEN 0 ELSE MarketValue_fl END,
RunningTotalMV=@RunningTotalMV,
YesterdayMarketValue_fl= @TodayMV,
@AccountID_in=AccountID_in
FROM #AccountClassInputs OPTION (MAXDOP 1)
select * from #AccountClassInputs
drop table #AccountClassInputs
March 25, 2011 at 1:24 pm
GSquared (3/25/2011)
Honestly, for sequential code like this and other running totals problems, I avoid the quirky update and use CLR assemblies, or a Static or Fast Forward cursor. Those are simpler, more flexible solutions, and don't rely on lack of parallelism, et al, to complete correctly. And they're just as fast.
Ummm... are you sure? If I'm not mistaken, parallelism will destroy the serial nature of any running total code except for when "Triangular Joins" are involved.
I also beg to differ. A static or fast forward cursor is still going to be dreadfully slow compared to the Quirky Update. I also don't see how using a cursor is any simpler.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 28, 2011 at 9:54 pm
I am having a very similar problem. I'm attempting to use a Quirky Update to populate the "Previous" state of data so I can identify changes over time. I'm using SQL Server 2008 R2, and it looks like it is refusing to execute the variable assignments in the Quirky Update in the order they're laid out.
I can identify when I have a changed state, but I can't push the "Previous" value. Here is the code.
IF OBJECT_ID('tempdb..#AreaChanges') IS NOT NULL
DROP TABLE #AreaChanges
CREATE TABLE #AreaChanges
(MyID INT IDENTITY(1,1),
[System_Id][int] NULL,
[System_Rev][int] NULL,
[CurrentArea][int] NULL,
[PreviousArea][int] NULL,
PrevAreaTest INT
, PrevAreaTest2 INT
--CONSTRAINT PK_MyIdentity_MyID
--PRIMARY KEY NONCLUSTERED (MyID)
) ON [PRIMARY]
CREATE CLUSTERED INDEX IXC_WorkItemRev ON #AreaChanges (System_ID ASC, System_Rev ASC)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (64339, 1, 1029)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (64339, 2, 1041)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (64339, 6, 1159)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (64339, 8, 1203)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (64339, 22, 1490)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67123, 1, 1083)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67124, 1, 1088)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67125, 1, 1083)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67265, 1, 1085)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67265, 2, 1085)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67265, 3, 1085)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67265, 5, 1029)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67265, 6, 1085)
INSERT INTO #AreaChanges ([System_Id], [System_Rev], [CurrentArea]) VALUES (67265, 7, 1068)
/*-- DECLARE the Shuffle Variables for assigning values to PreviousArea. --*/
DECLARE @PreviousArea INT
,@PreviousSystemID INT
,@CurrentArea INT
,@NewSystem bit = 0
,@NewArea bit = 0
SELECT @PreviousArea = 999999 --(SELECT TOP 1 CurrentArea FROM #AreaChanges ORDER BY MyID)
SELECT @PreviousSystemID = 1
,@CurrentArea = 1
--UPDATE #AreaChanges
-- SET PreviousArea = -1
UPDATE #AreaChanges
SET
@CurrentArea = CurrentArea,
@NewSystem = (CASE WHEN @PreviousSystemID = System_ID THEN 0 ELSE 1 END),
@NewArea = (CASE WHEN @NewSystem = 1 THEN 1
WHEN @PreviousArea != [CurrentArea] THEN 1
ELSE 0 END),
PrevAreaTest = (CASE WHEN @NewSystem = 0 AND @NewArea = 1 THEN 1 ELSE 0 END),
PreviousArea = (CASE WHEN @NewSystem = 1 THEN CurrentArea
WHEN @NewArea = 1 THEN @PreviousArea ELSE @CurrentArea END),
@PreviousArea = CurrentArea,
@PreviousSystemID = System_ID,
PrevAreaTest2 = @NewSystem
FROM #AreaChanges
WITH (TABLOCKX)
OPTION (MAXDOP 1)
SELECT * FROM #AreaChanges
--where CurrentArea != PreviousArea
As you can see from the results, I can't set PreviousArea to anything other than CurrentArea. The PrevAreaTest column correctly identifies when there is a change; I just can't make it stick.
April 29, 2011 at 8:38 am
Since you're only getting a previous row's value, and not actually doing a running total, this should work:
UPDATE t1
SET PreviousArea = CASE WHEN t1.System_Id = t2.System_Id THEN t2.CurrentArea
ELSE t1.CurrentArea END
FROM #AreaChanges t1
LEFT JOIN #AreaChanges t2
ON t1.MyID = t2.MyID + 1;
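To check the result, the filter that was commented out at the end of the original script can be run after this update to list only the rows where the area actually changed (a small verification query, assuming the update above has been applied):

```sql
-- Rows where the self-join found a different previous area
SELECT System_Id, System_Rev, PreviousArea, CurrentArea
FROM #AreaChanges
WHERE PreviousArea <> CurrentArea;
```

Note the join on t1.MyID = t2.MyID + 1 relies on the IDENTITY values being gap-free and loaded in (System_Id, System_Rev) order, which holds for the sample inserts above.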
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes