June 19, 2009 at 4:12 pm
Anyone out there: I need help writing a query, probably based on some type of looping structure, where I need to consume transactions from a "bank" up to a certain amount. The rules are:
1. The amount to consume cannot exceed the overall balance (across time periods).
2. The amount to consume within a time period cannot exceed the balance for that time period, in this case per quarter.
3. The transactions have to be consumed in the order they come in (FIFO).
4. There can be negative transactions (as shown below).
5. There are only two time periods below; assume there could be "n" periods.
Here is the script, and at the bottom is the expected output. The comment column is only there to explain. Let me know if you have any questions.
declare @balancequarter table
(
quarterid int
,amount int
)
declare @transaction table
(
transactionid int
,quarterid int
,amount int
)
declare @amounttoload int
set @amounttoload = 15
insert into @transaction select 1, 1, 5
insert into @transaction select 2, 1, 3
insert into @transaction select 3, 1, 7
insert into @transaction select 4, 1, -5
insert into @transaction select 5, 2, 10
insert into @transaction select 6, 2, 16
insert into @transaction select 7, 2, -6
insert into @balancequarter select quarterid, sum(amount) from @transaction group by quarterid
-- debug code
select * from @balancequarter
select * from @transaction
--expected output
transactionid  quarterid  originaltransactionamount  usedtransactionamount  comment
1              1          5                          5                      'fully used'
2              1          3                          3                      'fully used'
3              1          7                          2                      'partial transaction - quarter balance reached'
5              2          10                         5                      'partial transaction - amount fully loaded'
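For anyone who wants to sanity-check these rules outside of SQL, here is a hypothetical procedural sketch (in Python, purely for illustration; the function name and structure are my own, not part of any posted solution) of the consumption logic described above. It reproduces the expected output for the sample data.

```python
from collections import defaultdict

def consume(transactions, amount_to_load):
    """FIFO consumption sketch: transactions is a list of
    (transactionid, quarterid, amount) tuples in arrival order."""
    # Rules 2 and 4: per-quarter balances, lowered by negative transactions.
    quarter_balance = defaultdict(int)
    for _, quarter, amount in transactions:
        quarter_balance[quarter] += amount

    used = []                      # (transactionid, used amount) pairs
    remaining = amount_to_load     # Rule 1: overall cap on consumption
    quarter_used = defaultdict(int)
    for tid, quarter, amount in transactions:   # Rule 3: FIFO order
        if remaining <= 0:
            break                  # amount fully loaded
        if amount <= 0:
            continue               # returns are never consumed directly
        # Rule 2: never exceed what is left of this quarter's balance.
        cap = quarter_balance[quarter] - quarter_used[quarter]
        take = min(amount, cap, remaining)
        if take > 0:
            used.append((tid, take))
            quarter_used[quarter] += take
            remaining -= take
    return used

sample = [(1, 1, 5), (2, 1, 3), (3, 1, 7), (4, 1, -5),
          (5, 2, 10), (6, 2, 16), (7, 2, -6)]
print(consume(sample, 15))  # [(1, 5), (2, 3), (3, 2), (5, 5)]
```

Transaction 3 is cut to 2 because quarter 1's balance (5 + 3 + 7 - 5 = 10) is reached, and transaction 5 is cut to 5 because the overall load amount of 15 is reached.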
I posted the entry below on the Newbies forum but have not gotten any reply yet. I was hoping some "SQL master" among you could show me an elegant solution. Mine stinks. Thanks a lot for your help.
"Any fool can write code that a computer can understand. Good programmers write
code that humans can understand." -Martin Fowler et al, Refactoring: Improving the Design of Existing Code, 1999
June 20, 2009 at 6:19 am
First of all, thank you for the way you've provided the sample data! Good job! 🙂
I'm not a "SQL master" but I'll try to do my best... 😉
One thing is not clear to me though:
What's the business rule to decide whether the output is 'partial transaction - quarter balance reached' or 'partial transaction - amount fully loaded'?
Both data groups show the same pattern but are treated differently...
My second question concerns your current code: would you mind showing us what you've tried so far?
The problem itself looks like a job for RunningTotals...
Edit: "SQL master" disclosure added 😀
June 21, 2009 at 10:37 am
Joe Celko has written two articles on implementing FIFO and LIFO with SQL.
http://www.dbazine.com/ofinterest/oi-articles/celko32
http://www.dbazine.com/ofinterest/oi-articles/celko33
SQL = Scarcely Qualifies as a Language
June 21, 2009 at 1:11 pm
Carl Federl (6/21/2009)
Joe Celko has written 2 article on implementing FIFO and LIFO with SQL.
Ack! Before you go using any of those methods, please read about the horrible effect that joins like the ones he used will have on performance. They can make cursors look like little saints. Here's the article on "Triangular Joins"...
http://www.sqlservercentral.com/articles/T-SQL/61539/
--Jeff Moden
Change is inevitable... Change for the better is not.
June 21, 2009 at 1:17 pm
lmu92 (6/20/2009)
The problem itself looks like a job for RunningTotals...
I agree... but, I'm gonna sit back and watch someone else do this one.
Edit: "SQL master" disclosure added 😀
Heh... you may have to edit to remove that disclosure. Go for it, Lutz! 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
June 21, 2009 at 3:33 pm
I'm just waiting for the clarification regarding the handling of 'partial transaction'.
Other than that, the code is ready to post.
Even if I get it right, I wouldn't consider myself a "SQL Master" compared to the level of knowledge in this forum... 😉
I may get there within the next couple ... ehhmmm ... decades?
June 21, 2009 at 9:33 pm
Jeff Moden (6/21/2009)
Carl Federl (6/21/2009)
Joe Celko... Ack!
+1 ACK!
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
June 21, 2009 at 10:21 pm
I have some SQL, similar to what you are trying to perform, for Last In First Out (LIFO) inventory allocation that may get you started.
The data has four warehouses, and each warehouse has 200 products.
For shipments, there are 100 for each warehouse and product combination, for a total of 80,000 shipments.
For receipts, there is one receipt per day for each warehouse and product combination, with a total of 13,600 receipts. The receipt dates are the day before the shipment dates.
On a server-class machine with 16 cores and 32 GB of memory, the LIFO allocation runs in 13 seconds. On my laptop, with SQL Server restricted to 64 MB of memory, the SQL runs in 19 seconds.
For the set-up, the order to run the SQL is indicated with a sequence prefix in the file name.
SQL = Scarcely Qualifies as a Language
June 22, 2009 at 12:11 am
Thanks to all of you for the replies. I had a busy Father's Day weekend, but I will look at the posts as soon as I can. Thanks again.
"Any fool can write code that a computer can understand. Good programmers write
code that humans can understand." -Martin Fowler et al, Refactoring: Improving the Design of Existing Code, 1999
June 22, 2009 at 12:32 am
Lutz,
I just wanted to clarify the terminology of the comments column, whose purpose is purely informational; it is not to be used in the actual process.
> fully used: the original quantity of the transaction has been fully consumed; nothing is left to be taken from the existing units.
> partial transaction - quarter balance reached: there are still some units to be used on the original transaction, but since we have reached the balance for that time period, we have to move on to the next quarter.
> partial transaction - amount fully loaded: just like above, there are still some units to be used on the original transaction, but the total amount of "units" to be loaded has been fulfilled, so we stop.
I am going to try to put this problem in perspective by explaining the business context a little more. This is a sales incentive system: I get large text files containing sales amounts / quantities / invoice dates that match "promotions" put in place by our client. Products get rewarded differently based on the product / date range / promotion type... After the matching process is done and the rewards are computed, the salespeople are given an amount of "points" that goes into a "bank transaction" table. They can load the points onto their card (a form of credit card) up to the limit of the overall balance on the card. It would be easy if there were only positive transactions, but their files contain returns, which lower the overall balance and also lower the balance for each time period, which cannot be negative.
I hope I did not confuse anyone more than I intended to. I will be checking the various links tonight and tomorrow and will give you some additional feedback on where I am at. Thanks again.
"Any fool can write code that a computer can understand. Good programmers write
code that humans can understand." -Martin Fowler et al, Refactoring: Improving the Design of Existing Code, 1999
June 22, 2009 at 1:23 am
Attached please find a solution based on Jeff's running total article (referenced in the code comments below).
I had to make some changes to your sample data:
1) I changed @transaction to a temp table #transaction in order to be able to add a clustered index.
2) I added a clustered index covering quarterid and transactionid of table #transaction. This is important for the grouped running total, since the update is performed based on the clustered index, regardless of any GROUP BY clause in the update statement. For details I strongly recommend reading the article mentioned above together with the discussion it refers to. If your process can ensure that quarterid and transactionid are both sequential and you have a clustered index on transactionid alone, you should be OK, too.
3) I added a column [runtotal] (int) to the transaction table to hold the result of the update statement.
/*
create table #transaction
(
transactionid int
,quarterid int
,amount int
,runtotal int
)
insert into #transaction select 1, 1, 5,0
insert into #transaction select 2, 1, 3,0
insert into #transaction select 3, 1, 7,0
insert into #transaction select 4, 1, -5,0
insert into #transaction select 5, 2, 10,0
insert into #transaction select 6, 2, 16,0
insert into #transaction select 7, 2, -6,0
CREATE CLUSTERED INDEX IX_temptransaction_transactionid --clustered to resolve "Merry-go-Round"
ON #transaction (quarterid,transactionid)
create table #balancequarter
(
quarterid int
,amount int
)
insert into #balancequarter select quarterid, sum(amount) from
#transaction group by quarterid
*/
-- following code based on: Solving the "Running Total" & "Ordinal Rank" Problems in SS 2k/2k5
-- By Jeff Moden, 2008/01/31
-- http://www.sqlservercentral.com/articles/Advanced+Querying/61716/
DECLARE @PrevGrpBal INT --Running count resets when quarterid changes
SET @PrevGrpBal = 0
DECLARE @PrevQrtID INT --The "anchor" and "quarterid change detector"
SET @PrevQrtID = 0
-- using a single update based on a Clustered Index at VERY high speeds.
UPDATE #transaction
SET --===== Grouped Running Total (Reset when quarterid changes)
@PrevGrpBal = runtotal = CASE
WHEN quarterid = @PrevQrtID
THEN @PrevGrpBal + Amount
ELSE Amount -- Restarts total at "0 + current amount"
END,
--===== "Anchor" and provides for "account change detection"
@PrevQrtID = quarterid
FROM #transaction WITH (INDEX(IX_temptransaction_transactionid),TABLOCKX)
-- end source
SELECT t.*,
       bq.amount AS max_amount,
       CASE WHEN runtotal < bq.amount
            THEN t.amount
            ELSE t.amount + bq.amount - runtotal
       END AS usedtransactionamount,
       CASE WHEN runtotal < bq.amount
            THEN 'fully used'
            ELSE 'partial transaction'
       END AS comment
FROM #transaction t
INNER JOIN #balancequarter bq ON bq.quarterid = t.quarterid
WHERE t.amount > 0
/* result set
transactionid  quarterid  amount  runtotal  max_amount  usedtransactionamount  comment
1              1          5       5         10          5                      fully used
2              1          3       8         10          3                      fully used
3              1          7       15        10          2                      partial transaction
5              2          10      10        20          10                     fully used
6              2          16      26        20          10                     partial transaction
*/
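To make the mechanics of the quirky update above easier to follow, here is a hypothetical procedural sketch (Python, for illustration only; the function name is my own) of the grouped running total it computes: the total resets to the current amount whenever quarterid changes, exactly as the CASE expression does with @PrevGrpBal and @PrevQrtID.

```python
def grouped_running_total(rows):
    """rows: (transactionid, quarterid, amount) tuples, already in
    clustered-index order (quarterid, transactionid)."""
    prev_grp_bal = 0    # mirrors @PrevGrpBal
    prev_qrt_id = None  # mirrors @PrevQrtID, the "change detector"
    out = []
    for tid, quarter, amount in rows:
        if quarter == prev_qrt_id:
            prev_grp_bal += amount      # continue this quarter's total
        else:
            prev_grp_bal = amount       # quarterid changed: restart total
        prev_qrt_id = quarter           # re-anchor for the next row
        out.append((tid, quarter, amount, prev_grp_bal))
    return out

sample = [(1, 1, 5), (2, 1, 3), (3, 1, 7), (4, 1, -5),
          (5, 2, 10), (6, 2, 16), (7, 2, -6)]
for row in grouped_running_total(sample):
    print(row)  # the last element matches the runtotal column above
```

Note that, unlike this loop, the T-SQL version relies on the clustered index to guarantee row order, which is why the index hint and TABLOCKX matter.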
June 22, 2009 at 3:01 am
Very nice Lutz; I believe we should now call you a 'SQL Master' 😉
Well done.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
June 22, 2009 at 3:56 am
Carl Federl (6/21/2009)
I have some SQL, similar to what you are trying to perform, for Last In First Out inventory allocation that may get you started. The data has four warehouses, and each warehouse has 200 products.
For shipments, there are 100 for each warehouse and product combination, for a total of 80,000 shipments.
For receipts, there is one receipt per day for each warehouse and product combination, with a total of 13,600 receipts. The receipt dates are the day before the shipment dates.
On a server-class machine with 16 cores and 32 GB of memory, the LIFO allocation runs in 13 seconds. On my laptop, with SQL Server restricted to 64 MB of memory, the SQL runs in 19 seconds.
For the set-up, the order to run the SQL is indicated with a sequence prefix in the file name.
Thanks, Carl... I'll take a look.
As a sidebar, I agree that everyone should always check someone else's code before they run it, but you should at least warn people when you're going to insert things into the MASTER database. There is absolutely no need to create a Tally table in master. If you don't want to have that very useful tool in every database, you should make a Utility database and then create a synonym in the other databases. If you're using an older version of SQL Server, a pass-through view will suffice as a synonym.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 22, 2009 at 3:56 am
Paul White (6/22/2009)
Very nice Lutz; I believe we should now call you a 'SQL Master' 😉Well done.
Paul
Objection, Your Honor! One dove doesn't make a summer. ;-)
But thanx for the flowers! 🙂
June 22, 2009 at 4:15 am
Jeff Moden (6/22/2009)
...but you should at least warn people when you're going to insert things into the MASTER database.
Absolutely. I ran the posted example earlier - after removing the references to master. I nearly posted a similar comment, but decided against it for some reason.
I didn't analyze the code in detail because the final query plan with its final four table scans, three hash joins, two hash aggregates and a sort kinda put me off.
The other nice touch in the posted code is ensuring that any created objects are cleaned up afterward.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi