February 19, 2009 at 1:49 am
I have a table as shown below and would like to compute the balance quantity, calculated dynamically by transaction type: a receipt counts as +1 times the quantity and an issue as -1 times the quantity.
ID  Code  Name     Txn_Type  Date       Qty  DesiredOutput (Bal)
1   8100  ACTIFED  Receipt   1-Jan-04   100  100
2   8100  ACTIFED  Issued    28-Mar-04  5    95
3   8100  ACTIFED  Issued    1-Apr-04   5    90
4   8100  ACTIFED  Issued    5-May-04   5    85
5   8100  ACTIFED  Issued    10-May-04  5    80
6   8110  AGIOLAX  Receipt   1-Jan-04   50   50
7   8110  AGIOLAX  Issued    12-Feb-04  10   40
8   8110  AGIOLAX  Receipt   28-Mar-04  50   90
9   8110  AGIOLAX  Issued    5-May-04   25   65
10  8112  OTRIVIN  Receipt   1-Jan-04   200  200
11  8112  OTRIVIN  Receipt   28-Mar-04  50   250
12  8112  OTRIVIN  Issued    1-Apr-04   15   235
13  8112  OTRIVIN  Issued    5-May-04   10   225
Please note that the ID column is an IDENTITY column.
I am attaching a sample script. Some help would be appreciated.
February 19, 2009 at 2:44 am
Another RUNNING TOTAL problem... see these articles on how to achieve this:
http://www.sqlservercentral.com/articles/Advanced+Querying/61716/
http://www.sqlteam.com/article/calculating-running-totals
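For readers on SQL Server 2012 or later (which postdates this thread), a windowed SUM with a ROWS frame solves running totals directly, with no cursor or self-join. A minimal sketch against the poster's table, assuming the column names shown in the sample data above:

```sql
-- Running balance per Code: receipts add, issues subtract.
-- Requires SQL Server 2012+ for the ROWS frame; column names are
-- assumed from the sample data above.
SELECT ID, Code, Name, Txn_Type, [Date], Qty,
       SUM(CASE WHEN Txn_Type = 'Receipt' THEN Qty ELSE -Qty END)
           OVER (PARTITION BY Code
                 ORDER BY [Date], ID
                 ROWS UNBOUNDED PRECEDING) AS Bal
FROM #TempStk
ORDER BY Code, [Date], ID;
```

The ROWS UNBOUNDED PRECEDING frame makes each row's Bal the sum of all signed quantities up to and including that row within its Code partition.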
--Ramesh
February 24, 2009 at 6:11 am
The problem with these sorts of calculations is that a row of data in a database is completely independent: it does not know or care about its neighbours, so forcing a calculation like this pushes the database engine into a realm it doesn't much care for.
The best solution is to work out the running total in the application that consumes the data, be it Reporting Services, Excel or whatever.
Failing that, this is one of those cases where, from a database perspective, a cursor will be the best solution.
There are other solutions which avoid cursor use, but they have their own drawbacks and are not always very efficient.
Try running both the set-based and cursor solutions with your own data and see which you prefer.
(TIP: when using the cursor solution, make sure you filter the SELECT statement that forms the cursor declaration, not the SELECT on the temp table used to create the output.)
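A minimal sketch of that tip (the filter value is hypothetical; the point is where the WHERE clause lives):

```sql
-- Good: filter in the SELECT that forms the cursor declaration,
-- so the cursor never walks rows that will be thrown away.
DECLARE rt_cursor CURSOR FOR
    SELECT ID, Qty
    FROM   #TempStk
    WHERE  Code = 8100        -- hypothetical filter: one product only
    ORDER  BY [Date], ID;

-- Bad: declare the cursor over the whole table, walk every row into
-- a temp table, and only then filter the temp table for the output.
```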
February 25, 2009 at 4:52 pm
How about this one?
with cte as
(
select *, Row_Number() OVER(PARTITION BY ID ORDER BY [Date]) as RowNum
from #TempStk
)
select a.rownum, a.id, a.status, a.date, a.quantity,
CASE WHEN a.Status <> 'Receipt' THEN SUM(case when b.Status = 'Receipt' Then b.quantity Else -b.quantity END) - a.quantity
ELSE ISNULL(SUM(case when b.Status = 'Receipt' Then b.quantity Else -b.quantity END),0) + a.quantity
END as Bal
from cte a
LEFT outer join cte b
on a.id = b.id and a.RowNum > B.RowNum
group by a.rownum, a.id, a.status, a.date, a.quantity
order by a.id, a.rownum
February 26, 2009 at 2:10 am
mh, your solution is fine, but when you test it with a lot of data against a cursor, the cursor is orders of magnitude faster.
I adapted your query to fit some different test data:
drop table sales
CREATE TABLE Sales (DayCount smallint, channel varchar(50), Sales money)
CREATE CLUSTERED INDEX ndx_DayCount ON Sales(DayCount)
go
DECLARE @DayCount smallint, @Sales money, @channel varchar(50)
SET @DayCount = 0
SET @Sales = 10
set @channel = 'a'
WHILE @DayCount < 10000
BEGIN
INSERT Sales VALUES (@DayCount,@channel, @Sales)
SET @DayCount = @DayCount + 1
SET @Sales = @Sales + 15
set @channel = case
when right(cast(@daycount as varchar), 1) = 0 then 'a'
when right(cast(@daycount as varchar), 1) = 1 then 'b'
when right(cast(@daycount as varchar), 1) = 2 then 'c'
when right(cast(@daycount as varchar), 1) = 3 then 'd'
when right(cast(@daycount as varchar), 1) = 4 then 'e'
when right(cast(@daycount as varchar), 1) = 5 then 'f'
when right(cast(@daycount as varchar), 1) = 6 then 'g'
when right(cast(@daycount as varchar), 1) = 7 then 'h'
when right(cast(@daycount as varchar), 1) = 8 then 'i'
when right(cast(@daycount as varchar), 1) = 9 then 'j'
end
END
select * from sales
with cte as
(
select *, Row_Number() OVER(PARTITION BY channel ORDER BY daycount) as RowNum
from sales
)
select a.rownum, a.daycount,a.channel, a.sales,
SUM(b.sales ) - a.sales as Bal
from cte a
LEFT outer join cte b
on a.channel = b.channel and a.RowNum > B.RowNum
group by a.rownum, a.channel, a.daycount, a.sales
order by a.channel, a.daycount
CREATE TABLE #Sales (DayCount smallint, Channel varchar(50), Sales money, RunningTotal money)
DECLARE @RunningTotal money
declare @old_channel varchar(50)
SET @RunningTotal = 0
set @old_channel = ''
DECLARE rt_cursor CURSOR
FOR
SELECT DayCount, Channel, Sales
FROM Sales
order by channel, daycount
DECLARE @DayCount smallint, @Sales money, @channel varchar(50)
OPEN rt_cursor
FETCH NEXT FROM rt_cursor INTO @DayCount,@Channel, @Sales
WHILE @@FETCH_STATUS = 0
BEGIN
SET @RunningTotal = @RunningTotal + @Sales
INSERT #Sales VALUES (@DayCount,@channel, @Sales, @RunningTotal)
set @old_channel = @channel
FETCH NEXT FROM rt_cursor INTO @DayCount,@channel, @Sales
if @old_channel <> @channel set @runningtotal = 0
END
CLOSE rt_cursor
DEALLOCATE rt_cursor
SELECT * FROM #Sales ORDER BY channel,DayCount
DROP TABLE #Sales
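One detail worth testing with the cursor above: the default cursor type is dynamic and updatable, and redeclaring it as LOCAL FAST_FORWARD (read-only, forward-only) usually trims a noticeable amount of its overhead:

```sql
-- Same cursor as above, declared read-only and forward-only.
DECLARE rt_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DayCount, Channel, Sales
    FROM   Sales
    ORDER  BY channel, daycount;
```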
March 3, 2009 at 9:39 am
Hmmm... there are very few scenarios where I've seen cursors faster than CTE or non-RBAR (row by agonizing row) methods. Just curious how many rows you're running through?
Cheers,
Brian
March 3, 2009 at 8:19 pm
Samuel Vella (2/24/2009)
The problem with these sorts of calculations is that a row of data in a database is completely independent. It does not know or care about its neighbours and so trying to force a calculation like this is going to force the database engine into a realm it doesn't much care for.
Actually... totally untrue, if you know how updates with pseudo-cursors actually work. Provided a correctly defined clustered index scan comes into play, the order of the update will ALWAYS be the column order described by the clustered index, no matter how badly fragmented or split it may be. Despite all the expert opinions against it, not one of them has been able to break a correctly formed update on a running total problem; it has worked since SQL Server was known as "Sybase" and it still works in all versions of SQL Server, including 2k8. In fact, there's even a special version of UPDATE and, yes, that special version is documented in Books Online. Code comin' up for this problem in a minute... 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
March 3, 2009 at 8:32 pm
Samuel Vella (2/26/2009)
mh, your solution is fine, but when you test it with a lot of data against a cursor, the cursor is orders of magnitude faster. I adapted your query to fit some different test data...
Just an FYI, your cursor routine is going to be blown out of the water shortly. I don't know if Jeff's article has been rewritten yet, but if you want to see the type of code he is putting together, I have another article on Running Totals that you may want to check out, as it contains some of Jeff's code as well. You can read my article here.
I have a link to Jeff's article below in my signature block regarding Running Totals.
March 3, 2009 at 8:53 pm
Oh, wow. Looks like I may have beat Jeff. But of course, he is probably setting up a million row example to demonstrate his code, where I just took what was presented and went from there. Jeff's code will also be heavily commented as well.
--drop table #TempStk
CREATE TABLE #TempStk
( [SEQ] INT ,
[ID] INT NOT NULL
,[Pricelist] nvarchar(300)
,[Status] NVARCHAR(50) NULL
,[Date] DATETIME
,[Quantity] INT
,[Balance] INT NULL
)
GO
INSERT INTO #TempStk
( [SEQ], [ID], [Pricelist], [Status], [Date], [Quantity] )
VALUES
 (1,  8100, 'ACTIFED', 'Receipt', '01/01/2004', 100)
,(2,  8100, 'ACTIFED', 'Issued',  '03/28/2004', 5)
,(3,  8100, 'ACTIFED', 'Issued',  '04/01/2004', 5)
,(4,  8100, 'ACTIFED', 'Issued',  '05/05/2004', 5)
,(5,  8100, 'ACTIFED', 'Issued',  '05/10/2004', 5)
,(6,  8110, 'AGIOLAX', 'Receipt', '01/01/2004', 50)
,(7,  8110, 'AGIOLAX', 'Issued',  '02/12/2004', 10)
,(8,  8110, 'AGIOLAX', 'Receipt', '03/28/2004', 50)
,(9,  8110, 'AGIOLAX', 'Issued',  '05/05/2004', 25)
,(10, 8112, 'OTRIVIN CHILD 0.5% NDrops', 'Receipt', '01/01/2004', 200)
,(11, 8112, 'OTRIVIN CHILD 0.5% NDrops', 'Receipt', '03/28/2004', 50)
,(12, 8112, 'OTRIVIN CHILD 0.5% NDrops', 'Issued',  '04/01/2004', 15)
,(13, 8112, 'OTRIVIN CHILD 0.5% NDrops', 'Issued',  '05/05/2004', 10);
GO
select * from #TempStk
create clustered index IX_RunningBal on #TempStk (
ID,
Date);
declare @Balance int,
@ID int;
set @ID = 0;
update #TempStk set
@Balance = [Balance] = case when @ID <> ts.ID
then 0
else @Balance
end + (cast(case when [Status] = 'Receipt'
then 1
when [Status] = 'Issued'
then -1
end as int) * ts.[Quantity]),
@ID = ts.[ID]
from
#TempStk ts with (index = 1)
select * from #TempStk;
March 3, 2009 at 9:14 pm
Lynn Pettis (3/3/2009)
Oh, wow. Looks like I may have beat Jeff. But of course, he is probably setting up a million row example to demonstrate his code, where I just took what was presented and went from there. Jeff's code will also be heavily commented as well.
Heh... you know me all too well, Brother Lynn! 😀 Although Mathew did a very good job of attaching sample data, the column names are quite a bit different than those posted and I also wanted to show a shorter way to post sample data. And, as you so very well put it, I was playing with my typical million row example and then saw you beat me to answering this. The only thing you forgot was the TABLOCKX to keep people out of the table while the update is occurring. 🙂
Anyway, here's the short, heavily commented version... do read the comments as they make a point about this method of updating...
--===== Conditionally drop the temporary test table (just so we can rerun the test, if we want)
IF OBJECT_ID('TempDB..#TempStk','U') IS NOT NULL
DROP TABLE #TempStk
--===== Create the temporary test table with a clustered PK on ID
CREATE TABLE #TempStk
(
ID INT PRIMARY KEY CLUSTERED, --Can be an IDENTITY, as well
Code INT,
Name NVARCHAR(300),
Txn_Type NVARCHAR(50),
Date DATETIME,
Quantity FLOAT,
Balance FLOAT
)
--===== Populate the test table with data. Notice the data is pretty much out of order so
-- far as the clustered PK is concerned. If the ID can't be maintained in the correct
-- order, not to worry... put the clustered index on Code, Name, and Date. Notice I
-- said nothing about that having to be a PK. ;-)
INSERT INTO #TempStk
(ID, Code, Name, Txn_Type, Date, Quantity)
SELECT '13','8112','OTRIVIN CHILD 0.5% NDrops','Issued','May 5 2004 12:00AM','10' UNION ALL
SELECT '6','8110','AGIOLAX','Receipt','Jan 1 2004 12:00AM','50' UNION ALL
SELECT '1','8100','ACTIFED','Receipt','Jan 1 2004 12:00AM','100' UNION ALL
SELECT '3','8100','ACTIFED','Issued','Apr 1 2004 12:00AM','5' UNION ALL
SELECT '4','8100','ACTIFED','Issued','May 5 2004 12:00AM','5' UNION ALL
SELECT '10','8112','OTRIVIN CHILD 0.5% NDrops','Receipt','Jan 1 2004 12:00AM','200' UNION ALL
SELECT '5','8100','ACTIFED','Issued','May 10 2004 12:00AM','5' UNION ALL
SELECT '8','8110','AGIOLAX','Receipt','Mar 28 2004 12:00AM','50' UNION ALL
SELECT '9','8110','AGIOLAX','Issued','May 5 2004 12:00AM','25' UNION ALL
SELECT '11','8112','OTRIVIN CHILD 0.5% NDrops','Receipt','Mar 28 2004 12:00AM','50' UNION ALL
SELECT '7','8110','AGIOLAX','Issued','Feb 12 2004 12:00AM','10' UNION ALL
SELECT '12','8112','OTRIVIN CHILD 0.5% NDrops','Issued','Apr 1 2004 12:00AM','15' UNION ALL
SELECT '2','8100','ACTIFED','Issued','Mar 28 2004 12:00AM','5'
--===== Declare some obviously named variables
DECLARE @PrevID INT,
@PrevCode INT,
@PrevName NVARCHAR(300),
@PrevBal FLOAT
--===== Do the "quirky" update using a very high speed pseudo-cursor,
-- This is very similar to what you would do in a language like "C" except the
-- "Read a row/Write a row" is built into the update.
UPDATE #TempStk
SET @PrevBal = Balance = CASE
WHEN Code = @PrevCode
AND Name = @PrevName
THEN @PrevBal + (Quantity * CASE WHEN Txn_Type = 'Issued' THEN -1 ELSE 1 END)
ELSE Quantity * CASE WHEN Txn_Type = 'Issued' THEN -1 ELSE 1 END
END,
@PrevCode = Code,
@PrevName = Name,
@PrevID = ID --Just an "anchor", but gotta have it to guarantee things.
FROM #TempStk WITH (INDEX(0), TABLOCKX)
--===== Display the results
SELECT *
FROM #TempStk
ORDER BY ID
Now, just in case anyone gets any smart ideas about using the clustered index in SELECTs to create an "order" without an ORDER BY... [font="Arial Black"]don't bloody well do it[/font]. Even though it looks like it works, there are times when it doesn't, and I'll have the proof in the article I'm rewriting. For now, just take my word for it... the "orderless" ordering ONLY works in the "quirky" UPDATE and ONLY when it's properly formed with all the goodies like the "anchor" and forcing the clustered index scan.
The TABLOCKX is to keep people from pulling off an update or delete of the data we're trying to make the running total for. You might think that's a bad thing, but consider this... this method will update a million rows in less than 7 seconds and because it's gonna do the whole table, guess what it's eventually gonna do during those 7 seconds... you guessed it... it's going to lock the whole table, so just get it out of the way and do it up front.
I've gotta say it one more time... if you want a correct order out of a SELECT, you MUST use an ORDER BY. But, for the "quirky" UPDATE we did, you can quite literally "bank" on it. 😉
If anyone wants to see the million row update to make a running total, you'll just have to wait until I can publish the rewritten article. :hehe:
--Jeff Moden
Change is inevitable... Change for the better is not.
March 3, 2009 at 9:20 pm
beezell (3/3/2009)
Hmmm... there are very few scenarios where I've seen cursors faster than CTE or non-RBAR (row by agonizing row) methods. Just curious how many rows you're running through?
Cheers,
Brian
That's because what looked like set-based code is actually RBAR on steroids, known as a "Triangular Join", and it will cripple a server at very low rowcounts, even as low as 20,000 rows. Read all about it in the following link...
http://www.sqlservercentral.com/articles/T-SQL/61539/
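To put a number on "RBAR on steroids": a triangular join matches each row with every earlier row in its partition, so it materialises n*(n-1)/2 row pairs per partition. For the 10,000-row test earlier in this thread (10 channels of 1,000 rows each) that is already about 5 million pairs, and the cost grows quadratically from there:

```sql
-- Row pairs a triangular join touches: n*(n-1)/2 per partition.
-- Test data above: 10 channels of 1,000 rows each.
SELECT 10 * (CAST(1000 AS bigint) * 999 / 2) AS RowPairs;  -- 4,995,000
```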
--Jeff Moden
Change is inevitable... Change for the better is not.
March 3, 2009 at 9:26 pm
Jeff Moden (3/3/2009)
Lynn Pettis (3/3/2009)
Oh, wow. Looks like I may have beat Jeff. But of course, he is probably setting up a million row example to demonstrate his code, where I just took what was presented and went from there. Jeff's code will also be heavily commented as well.
Heh... you know me all too well, Brother Lynn! 😀 Although Mathew did a very good job of attaching sample data, the column names are quite a bit different than those posted and I also wanted to show a shorter way to post sample data. And, as you so very well put it, I was playing with my typical million row example and then saw you beat me to answering this. The only thing you forgot was the TABLOCKX to keep people out of the table while the update is occurring. 🙂
If anyone wants to see the million row update to make a running total, you'll just have to wait until I can publish the rewritten article. :hehe:
You can also find the same code in my article, but it lacks all the detail behind it, as my work was based on Jeff's. (Sorry, shameless plugs.) :hehe:
March 3, 2009 at 9:56 pm
Samuel Vella (2/26/2009)
mh, your solution is fine but when you test it with a lot of data against a cursor, the cursor is orders of magnitude faster
Man, you have absolutely the correct idea... you put your money where your mouth was with code! Well done!
Just to share a few tricks with someone who actually tests code for performance, here's your test code generator...
drop table sales
CREATE TABLE Sales (DayCount smallint, channel varchar(50), Sales money)
CREATE CLUSTERED INDEX ndx_DayCount ON Sales(DayCount)
go
SET NOCOUNT ON
DECLARE @DayCount smallint, @Sales money, @channel varchar(50)
SET @DayCount = 0
SET @Sales = 10
set @channel = 'a'
WHILE @DayCount < 10000
BEGIN
INSERT Sales VALUES (@DayCount,@channel, @Sales)
SET @DayCount = @DayCount + 1
SET @Sales = @Sales + 15
set @channel = case
when right(cast(@daycount as varchar), 1) = 0 then 'a'
when right(cast(@daycount as varchar), 1) = 1 then 'b'
when right(cast(@daycount as varchar), 1) = 2 then 'c'
when right(cast(@daycount as varchar), 1) = 3 then 'd'
when right(cast(@daycount as varchar), 1) = 4 then 'e'
when right(cast(@daycount as varchar), 1) = 5 then 'f'
when right(cast(@daycount as varchar), 1) = 6 then 'g'
when right(cast(@daycount as varchar), 1) = 7 then 'h'
when right(cast(@daycount as varchar), 1) = 8 then 'i'
when right(cast(@daycount as varchar), 1) = 9 then 'j'
end
END
It does a fine job of making 10,000 rows in somewhere just over 3 seconds on my 6 year old box... but, what if you wanted a million rows of that same example just to make sure? The following code makes the same kind of data example, but it makes 100 times more rows in only about 4-5 seconds more...
--===== Conditionally drop the test table
IF OBJECT_ID('TempDB..#Sales','U') IS NOT NULL
DROP TABLE #Sales
--===== Populate the table with a million rows of data similar to yours.
-- This ISNULL is to make the resulting DayCount column NOT NULL so
-- we can put a primary key on it later.
SELECT TOP 1000000
ISNULL(CAST(ROW_NUMBER() OVER (ORDER BY sc1.ID) - 1 AS INT),0) AS DayCount,
CAST(ROW_NUMBER() OVER (ORDER BY sc1.ID) *15 AS MONEY) AS Sales,
CHAR(CAST(ROW_NUMBER() OVER (ORDER BY sc1.ID) - 1 AS INT)%10+ASCII('a')) AS Channel
INTO dbo.#Sales
FROM Master.sys.SysColumns sc1
CROSS JOIN Master.sys.SysColumns sc2
--===== Add the primary key
-- I don't name these on temp tables because they must be unique.
ALTER TABLE dbo.#Sales
ADD PRIMARY KEY CLUSTERED (DayCount)
Like I said, well done on the testing... now you have a way to do some really heavy duty testing that no one will be able to argue with.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 3, 2009 at 9:57 pm
Lynn Pettis (3/3/2009)
Jeff Moden (3/3/2009)
Lynn Pettis (3/3/2009)
Oh, wow. Looks like I may have beat Jeff. But of course, he is probably setting up a million row example to demonstrate his code, where I just took what was presented and went from there. Jeff's code will also be heavily commented as well.
Heh... you know me all too well, Brother Lynn! 😀 Although Mathew did a very good job of attaching sample data, the column names are quite a bit different than those posted and I also wanted to show a shorter way to post sample data. And, as you so very well put it, I was playing with my typical million row example and then saw you beat me to answering this. The only thing you forgot was the TABLOCKX to keep people out of the table while the update is occurring. 🙂
If anyone wants to see the million row update to make a running total, you'll just have to wait until I can publish the rewritten article. :hehe:
You can also find the same code in my article, but it lacks all the detail behind it, as my work was based on Jeff's. (Sorry, shameless plugs.) :hehe:
Shameless or not, it's a great article... people should go take a look!
--Jeff Moden
Change is inevitable... Change for the better is not.
September 10, 2012 at 2:56 pm
Hey, just wanted to thank you guys for this thread. I have been working on a weighted average cost calculation for two or three days, and stumbled across this link. I've been trying to avoid doing a cursor calculation as I have over 300,000 different inventory items I have to walk through (over 73 million rows total). As you can see from the comments I left in, I shamelessly copied Jeff's code and modified it to fit my needs. Thanks a million.
Randy Stone
CREATE TABLE #WACTran(
    [TranOrder]    INT PRIMARY KEY CLUSTERED,
    [TranType]     VARCHAR(3) NULL,
    [Qty]          INT NULL,
    [UnitCost]     DECIMAL(14, 4) NULL,
    [ExtendedCost] DECIMAL(14, 4) NULL,
    [QtyOnHand]    INT
)
INSERT INTO [#WACTran](TranOrder, TranType, Qty, UnitCost, ExtendedCost)
VALUES
(0,'BB',100,1.5000,150.0000 )
,(1,'S',-50,0.0000,0.0000 )
,(2,'RCT',15,0.0000,0.0000 )
,(3,'S',-5,0.0000,0.0000 )
,(4,'VC',15,1.6000,0.0000 )
,(5,'S',-5,0.0000,0.0000 )
,(6,'TO',-4,0.0000,0.0000 )
,(7,'TI',2,0.0000,0.0000 )
,(8,'VTI',2,1.6500,0.0000 )
,(9,'RCT',10,0.0000,0.0000 )
,(10,'S',-5,0.0000,0.0000 )
,(11,'VC',10,1.7500,0.0000 )
,(12,'S',-5,0.0000,0.0000 )
,(13,'TO',-4,0.0000,0.0000 )
,(14,'TI',2,0.0000,0.0000 );
--===== Declare some obviously named variables
DECLARE @PrevTranOrder    INT,
        @PrevQtyOnHand    INT,
        @PrevUnitCost     DECIMAL(14,4),
        @PrevExtendedCost DECIMAL(14,4)
--===== Do the "quirky" update using a very high speed pseudo-cursor,
-- This is very similar to what you would do in a language like "C" except the
-- "Read a row/Write a row" is built into the update.
UPDATE #WACTran
SET @PrevQtyOnHand = QtyOnHand = CASE
WHEN TranType = 'BB'
THEN Qty
WHEN TranType IN ('VTI','VC')
THEN @PrevQtyOnHand
ELSE @PrevQtyOnHand + Qty
END,
@PrevUnitCost = UnitCost = CASE
WHEN TranType = 'BB'
THEN UnitCost
WHEN TranType IN ('VTI','VC')
THEN (((@PrevQtyOnHand - Qty)*@PrevUnitCost)+(Qty*UnitCost))/@PrevQtyOnHand
ELSE @PrevUnitCost
END,
@PrevExtendedCost = ExtendedCost = CASE
WHEN TranType = 'BB'
THEN ExtendedCost
ELSE @PrevUnitCost*@PrevQtyOnHand
END,
@PrevTranOrder = TranOrder --Just an "anchor", but gotta have it to guarantee things.
FROM [#WACTran] WITH (INDEX(0), TABLOCKX)
--===== Display the results
SELECT *
FROM [#WACTran]
ORDER BY TranOrder
DROP TABLE #WACTran;