September 11, 2012 at 1:27 pm
CELKO (9/11/2012)
The short answer is that this is a running total problem, which can be done with SUM() OVER (PARTITION BY .. ORDER BY .. ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW).
The better answer is that the DDL you attached is full of fundamental design errors. I also loved the leading commas, which I use as a symptom to diagnose bad SQL. Your mindset is still stuck in punch cards and sequential magnetic tape files. Trust me; I have been at this for a few decades.
Why is your vague “<nothing in particular>_qty” a FLOAT? Is it the on hand quantity, the back order quantity, the sold quantity? A fifty character “<nothing in particular>_status”? Invite crap data like this and it will come. You have no keys, so this is not a table at all.
CREATE TABLE Stockroom
(item_name VARCHAR(25) NOT NULL,
 trans_date DATE DEFAULT CURRENT_TIMESTAMP NOT NULL,
 transaction_qty INTEGER NOT NULL
   CHECK (transaction_qty <> 0),
 PRIMARY KEY (item_name, trans_date));
The “id” and “seq” were obviously redundant, caused by an attempt to mimic a mag tape record number in SQL. But so is the transaction type. Issuing something is a negative quantity; stocking it is positive. Use the sign of the quantity to put that fact into one column.
INSERT INTO Stockroom
VALUES
('ACTIFED', '2004-01-01', +100),
('ACTIFED', '2004-03-28', -5),
('ACTIFED', '2004-04-01', -5),
('ACTIFED', '2004-05-05', -5),
('ACTIFED', '2004-05-10', -5),
('AGIOLAX', '2004-01-01', +50),
('AGIOLAX', '2004-02-12', -10),
('AGIOLAX', '2004-03-28', +50),
('AGIOLAX', '2004-05-05', -25),
('OTRIVIN CHILD 0.5% NDrops', '2004-01-01', +200),
('OTRIVIN CHILD 0.5% NDrops', '2004-03-28', +50),
('OTRIVIN CHILD 0.5% NDrops', '2004-04-01', -15),
('OTRIVIN CHILD 0.5% NDrops', '2004-05-05', -10);
SELECT item_name, trans_date,
       SUM(transaction_qty)
         OVER (PARTITION BY item_name
               ORDER BY trans_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         AS onhand_qty
  FROM Stockroom;
Quite true, Mr. Celko, IF (and the operative word here is IF) we were using SQL Server 2012. This is a SQL Server 2005 forum, so I would suggest, sir, that you keep your solutions to something that will work in the version of SQL Server that the individual is currently using. It might be nice to show someone what they are missing, but it doesn't help solve the current problem, now does it?
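For anyone reading along who is stuck on 2005: the same running total can be written without the windowed ROWS clause, using a correlated subquery against the Stockroom table Celko posted. This is a sketch only; it is slower on large tables than the 2012 windowed form, but it is version-safe:

```sql
-- SQL Server 2005-compatible running total (no ROWS BETWEEN support needed).
-- Assumes the Stockroom table defined earlier in this thread.
SELECT s.item_name, s.trans_date,
       (SELECT SUM(s2.transaction_qty)
          FROM Stockroom AS s2
         WHERE s2.item_name = s.item_name
           AND s2.trans_date <= s.trans_date) AS onhand_qty
  FROM Stockroom AS s
 ORDER BY s.item_name, s.trans_date;
```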
September 13, 2012 at 8:11 am
Actually, I am using 2012; but I'm not doing a simple running total calculation, I'm doing a weighted average cost calculation, and I have to keep track of three aggregate values that are inter-related but do not always change at the same time:
QtyOnHand, which is the running total of Qty; this changes on all TranTypes except Verifications (VTI, VC).
UnitCost, which is a weighted average based on the equation ((Qty_1 * UnitCost_1) + .. + (Qty_n * UnitCost_n)) / SUM(Qty); this changes with Verifications, and may also change on other incoming TranTypes if the UnitCost is known (sometimes it is, sometimes it isn't).
ExtendedCost, which is QtyOnHand * UnitCost (but remember that UnitCost is being calculated at the same time).
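If it helps to see the shape of it, here is a minimal sketch of those three running values using 2012 windowed aggregates. The table and column names (Transactions, PartNo, TranDate, Qty, UnitCost) are my own placeholders, not the real schema, and this deliberately ignores the Verification complication described above:

```sql
-- Hypothetical sketch only: running quantity, weighted average cost, and
-- extended cost in one pass. Does NOT handle Verification re-costing.
SELECT PartNo, TranDate, Qty, UnitCost,
       SUM(Qty) OVER (PARTITION BY PartNo ORDER BY TranDate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS QtyOnHand,
       SUM(Qty * UnitCost) OVER (PARTITION BY PartNo ORDER BY TranDate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
       / NULLIF(SUM(Qty) OVER (PARTITION BY PartNo ORDER BY TranDate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), 0) AS WAC_UnitCost
  FROM Transactions;
```

ExtendedCost would then be QtyOnHand * WAC_UnitCost, computed in an outer query or a CROSS APPLY, since one windowed expression cannot reference another in the same SELECT.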
The code I posted above doesn't include my group by, as I'm still working on that one. My parts are grouped by:
Branch (store branch)
PartNo (part number)
SKey (a subset of the part number inventory that is tagged and put aside for an order); Inventory not currently tagged has an SKey of 0.
And everything is ordered by date.
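In window-function terms, that grouping and ordering would look something like the following; again, the column names are my guess at the poster's schema, not the actual DDL:

```sql
-- Illustrative only: the GROUP BY levels above expressed as a window partition.
SUM(Qty) OVER (PARTITION BY Branch, PartNo, SKey
               ORDER BY TranDate
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS QtyOnHand
```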
The fun part comes in because even though the WAC (Weighted Average Cost) is calculated at the Branch level, sometimes SKeys include more than one branch's running totals - for example, Branch 12 may order parts from Branches 11 and 13 to fill an order, and the unit cost on parts from Branch 11 can be different from the unit cost for parts from Branch 13. So far, so good, but wait - there's more.
Just to make things interesting, the unit cost of the parts from Branch 13 may change AFTER they were sent, because their original unit cost had not been verified yet. (We receive parts on an ongoing basis from our vendors, and may not know about a price increase until we get the invoice at the end of the month). I'm still working on how to get this change into my calculations in the proper order.
September 13, 2012 at 8:25 am
rc_stone_1 (9/13/2012)
Actually, I am using 2012; ...
May I suggest that you post your question in a more appropriate forum in the future? You did post this in a SQL Server 2005 forum, so you are more likely to receive a solution that will work in SQL Server 2005. Using SQL Server 2012 opens doors to other options to help solve your problem that could be more efficient.
September 13, 2012 at 8:37 am
Been there, done that; Celko's solution works for a simple running total, but I wasn't able to make it work correctly with the WAC calculation unless I ran it three separate times - once for QtyOnHand, once to calculate the weighted average of UnitCost, and once to multiply those together to get the ExtendedCost. I just wanted to express my thanks for giving me a more elegant solution.
September 17, 2012 at 10:19 pm
Jeff Moden (3/3/2009)
<REDACTED>Now, just in case anyone gets any smart ideas about using the clustered index in SELECTs to create an "order" without an ORDER BY... [font="Arial Black"]don't bloody well do it[/font]. Even though it looks like it works, there are times when it doesn't, and I'll have the proof in the article I'm rewriting. For now, just take my word for it... the "orderless" ordering ONLY works in the "quirky" UPDATE and ONLY when it's properly formed with all the goodies like the "anchor" and forcing the clustered index scan.
<REDACTED>
I've gotta say it one more time... if you want a correct order out of a SELECT, you MUST use an ORDER BY. But, for the "quirky" UPDATE we did, you can quite literally "bank" on it. 😉
I am curious about something. I've also seen this "pseudo-ordering" resulting from use of a ROW_NUMBER()/ORDER BY construct in SELECTs. I hesitate to rely on it; it seems pretty consistent yet does have some quirks, so I continue to use ORDER BY.
I'm wondering if you have some thoughts about it or whether you might cover it in the article you mentioned.
I've been meaning to test how it might work in a QU scenario but haven't gotten around to it.
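For anyone following along, the safe pattern is to compute the sequence with ROW_NUMBER() and still order the outer SELECT explicitly. A small sketch against the Stockroom table from earlier in the thread:

```sql
-- Even when ROW_NUMBER()'s internal ORDER BY appears to impose an output order,
-- only an explicit ORDER BY on the SELECT guarantees it.
SELECT item_name, trans_date,
       ROW_NUMBER() OVER (PARTITION BY item_name
                          ORDER BY trans_date) AS rn
  FROM Stockroom
 ORDER BY item_name, trans_date;  -- never rely on the window's order alone
```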
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.