July 3, 2010 at 12:08 pm
jyurich (7/3/2010)
A) Create a Memory Based Temp Table to Store Records to be Processed (e.g. DECLARE @Records TABLE (RecordID int)). (This keeps tempdb from coming into play.)
You bring up a very interesting point. There are some who might argue that anyone who has been a SQL DBA for any "decent length of time" would know that declaring a table variable *does not* guarantee SQL Server will avoid tempdb, as you state here.
Does the fact that this is well-documented and there are a large # of people who already know this mean that I've just wasted everyone else's time by revisiting this concept here?
It's very hard to make assumptions about what everyone does or does not know already. Also not everyone who reads these articles has been a SQL DBA for a "decent length of time" yet.
Mike C
July 3, 2010 at 4:19 pm
Yes, you are correct and I thank you for clarifying that point. However, typically when I'm inserting PK int values only into a Memory based temp table, I do not have to worry about the tempdb coming into play, as the amount of data being stored in memory is minimal.
This is why I stated that it's all in the design and implementation of cursors you are using that dictate performance. Trust me, I avoid cursors whenever possible. But there are times when they are simply unavoidable.
July 3, 2010 at 4:38 pm
There are times when I recommend a T-SQL cursor over the alternatives, but they are few and far between. The issue though seems to be that there's a feeling by some that "I know this stuff already so everyone should know it." The only problem with that argument is that not everybody knows it.
I believe Barry's first installment in this series (please correct me if I'm wrong here Barry) was written as a general introduction to the problems associated with T-SQL RBAR processing. When you write general introductory material you can't assume all readers have n years experience and knowledge.
Mike C.
July 4, 2010 at 10:50 am
jyurich (7/3/2010)
Yes, you are correct and I thank you for clarifying that point. However, typically when I'm inserting PK int values only into a Memory based temp table, I do not have to worry about the tempdb coming into play, as the amount of data being stored in memory is minimal.
This is why I stated that it's all in the design and implementation of cursors you are using that dictate performance. Trust me, I avoid cursors whenever possible. But there are times when they are simply unavoidable.
Heh... that's part of the point. There is no way that you can guarantee that data is being stored in a "memory only" table even if the data is "minimal", and justifying a loop based on such an assumption will eventually bite you.
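You can watch it happen with a quick test (a minimal sketch; sys.dm_db_session_space_usage has been available since SQL Server 2005 and reports tempdb page allocations for the current session):
-- Check tempdb pages allocated by this session before and after
-- loading a table variable; the count climbs, showing the "memory
-- only" table actually taking space in tempdb.
SELECT session_id, user_objects_alloc_page_count
FROM   sys.dm_db_session_space_usage
WHERE  session_id = @@SPID;
DECLARE @Records TABLE (RecordID int);
INSERT INTO @Records (RecordID)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM   sys.all_columns ac1
CROSS JOIN sys.all_columns ac2;
SELECT session_id, user_objects_alloc_page_count
FROM   sys.dm_db_session_space_usage
WHERE  session_id = @@SPID;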
--Jeff Moden
Change is inevitable... Change for the better is not.
July 4, 2010 at 10:55 am
Mike C (7/3/2010)
There are times when I recommend a T-SQL cursor over the alternatives, but they are few and far between. The issue though seems to be that there's a feeling by some that "I know this stuff already so everyone should know it." The only problem with that argument is that not everybody knows it.
I believe Barry's first installment in this series (please correct me if I'm wrong here Barry) was written as a general introduction to the problems associated with T-SQL RBAR processing. When you write general introductory material you can't assume all readers have n years experience and knowledge.
Mike C.
AGREED!!! I, for one, very much appreciated Barry's ground-floor, stupid-simple examples because a lot of the people I know in the business still don't understand even after being in the business for years. The examples Barry used can be related to by both neophytes and alumni. The really ironic part is that I've seen supposed alumni write such "silly" examples for production code on this very forum so they're really not all that "stupid-simple"!
--Jeff Moden
Change is inevitable... Change for the better is not.
July 5, 2010 at 10:17 am
Although I'd like to hear the article author's reaction, thanks nonetheless for the reply.
Mike C (7/3/2010)
Well to start off, I would suggest that many authors begin by introducing basic concepts and simple examples, especially when they're writing a series of articles like this.
I don't dispute the need for simple examples and a step-by-step approach to the subject. But:
1. The author should move beyond these simple examples to more complicated ones sooner or later in the article.
2. I assume the author understands the difference between a simple example, a stupid example, and even a provocative example. These "real-life" examples only demonstrate the stupidity of so-called "programmers", while the author leads us to the conclusion that the problem is the cursors themselves, not stupidity or unfamiliarity with the language.
Mike C (7/3/2010)
Extended stored procedures have been deprecated since SQL 2005, and I wouldn't recommend using them for a couple of very good reasons.
It would be great to hear both of them. My own opinion: the reason is the need to promote .NET in place of DLLs and COM. And the reason ESPs are merely deprecated rather than prohibited outright is the tons of ESP code already written all over the world. Don't doubt it, I know about "DLL hell", about the glaring mistakes programmers can make in native code rather than .NET, and so on. But it's a fact that a virtual machine (of the .NET or Java VM kind) needs resources, needs JIT compiling, and all the rest. Consequently a VM cannot be faster than native code (in the general case, of course; I admit a VM can optimize some really ugly things we stupid programmers can do :-)).
Mike C (7/3/2010)
The current issue with calculating running sums and moving averages in T-SQL is the lack of support for the full OVER clause with aggregate functions. When T-SQL catches up to the standard in that regard, set-based running sums and moving averages, etc., may meet with your approval.
Maybe... But for now it's a kind of "happiness promised in the future".
Mike C (7/3/2010)
There has been a lot of testing around this very subject, and the general result is that procedural code does work best in optimized procedural languages. A well-written SQL CLR stored procedure does in fact outrun an equivalent cursor in the tests I've seen. Adam Machanic has performed some testing in this area, you might check out his blog.
Thanks for the link. But there are some problems with .NET integration.
1. You have to learn a new language (C#)
2. You have to learn a new platform (.NET)
3. You have to learn a new object model
I think there's no need to debate that you CANNOT write a procedure WELL in any language after two days, two weeks, or (maybe) even two months of learning it. And you'll have to write much more code in C# than in T-SQL with cursors. You'll have to solve the problems of distributing all these assemblies - all the problems that come with integration. As for me, I'd prefer to always stay within the frame of one language and one environment. Maybe that will come true when there's no T-SQL at all but only C# and LINQ...
Mike C (7/3/2010)
Another solution to this specific problem might be to perform the calculation in your data flow if you are loading the data via an ETL tool.
Mike C
Thanks, that's a reason to get acquainted with ETL 🙂
July 5, 2010 at 12:59 pm
slavut (7/5/2010)
Although I'd like to hear the article author's reaction, thanks nonetheless for the reply.
Mike C (7/3/2010)
Well to start off, I would suggest that many authors begin by introducing basic concepts and simple examples, especially when they're writing a series of articles like this.
I don't dispute the need for simple examples and a step-by-step approach to the subject. But:
1. The author should move beyond these simple examples to more complicated ones sooner or later in the article.
2. I assume the author understands the difference between a simple example, a stupid example, and even a provocative example. These "real-life" examples only demonstrate the stupidity of so-called "programmers", while the author leads us to the conclusion that the problem is the cursors themselves, not stupidity or unfamiliarity with the language.
Hey slavut, no problem. I can't speak for the author, but this being a series, my expectation is that he plans to introduce more complex examples in later installments.
Mike C (7/3/2010)
Extended stored procedures have been deprecated since SQL 2005, and I wouldn't recommend using them for a couple of very good reasons.
It would be great to hear both of them. My own opinion: the reason is the need to promote .NET in place of DLLs and COM. And the reason ESPs are merely deprecated rather than prohibited outright is the tons of ESP code already written all over the world. Don't doubt it, I know about "DLL hell", about the glaring mistakes programmers can make in native code rather than .NET, and so on. But it's a fact that a virtual machine (of the .NET or Java VM kind) needs resources, needs JIT compiling, and all the rest. Consequently a VM cannot be faster than native code (in the general case, of course; I admit a VM can optimize some really ugly things we stupid programmers can do :-)).
Here are a few reasons why XPs are deprecated:
1) Extended stored procedures run as unmanaged code in the SQL Server process space. They can very easily corrupt the SQL Server memory space. In addition to the annoyance of having to restart the SQL Server service when the memory space gets corrupted, this poses a potential security risk since process/memory space corruption often leads to backdoors into the system.
2) The extended stored procedure API was never well documented. There are a lot of potential pitfalls to trip up programmers on their first few attempts at writing extended stored procedures, and they won't know about them until after they've attempted to write one.
3) Because extended stored procedures are unmanaged, you are completely responsible for managing the resources your XP uses. Poorly written XPs have caused a lot of memory leaks over the years, many of them severe (i.e., requiring regular reboots to clear them up).
As you point out, .NET will cost you a bit of efficiency but even with that small efficiency hit SQL CLR has proven to be more efficient at row-by-row procedural processing than T-SQL cursors in many cases. If the small efficiency hit you take with .NET is a concern, I'd think the far worse hit you take with a T-SQL cursor would be a bigger concern.
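For anyone who hasn't tried SQL CLR: the .NET assembly is compiled outside the database, but it is registered and exposed entirely through T-SQL. A minimal sketch (the assembly path, class, and method names here are hypothetical, and 'clr enabled' must be turned on via sp_configure):
-- Register a compiled .NET assembly (hypothetical path and name)
CREATE ASSEMBLY RunningCalcs
FROM 'C:\assemblies\RunningCalcs.dll'
WITH PERMISSION_SET = SAFE;
GO
-- Expose a static method in the assembly as a stored procedure
CREATE PROCEDURE dbo.usp_RunningTotal
    @AccountID int
AS EXTERNAL NAME RunningCalcs.[StoredProcedures].RunningTotal;
GO
-- Called exactly like any other stored procedure
EXEC dbo.usp_RunningTotal @AccountID = 42;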
Mike C (7/3/2010)
The current issue with calculating running sums and moving averages in T-SQL is the lack of support for the full OVER clause with aggregate functions. When T-SQL catches up to the standard in that regard, set-based running sums and moving averages, etc., may meet with your approval.
Maybe... But for now it's a kind of "happiness promised in the future".
Absolutely. T-SQL needs to catch up to the standard in this regard. There are several Connect items posted by Itzik Ben-Gan that address this lack of functionality--you might consider voting for them. When it does finally make its way into T-SQL you will be able to use windowing functions in your set-based code that allow you to perform substantial calculations without having to rescan tables to perform subquery calcs.
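SQL Server 2012 did eventually add exactly this: the ROWS/RANGE frame for the OVER clause, which turns a running sum or a moving average into a single pass over the data. A minimal sketch, with hypothetical table and column names:
-- Running total and 7-row moving average per account,
-- one scan, no self-joins or correlated subqueries.
SELECT  AccountID,
        TranDate,
        Amount,
        SUM(Amount) OVER (PARTITION BY AccountID
                          ORDER BY TranDate
                          ROWS UNBOUNDED PRECEDING) AS RunningTotal,
        AVG(Amount) OVER (PARTITION BY AccountID
                          ORDER BY TranDate
                          ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS MovingAvg7
FROM    dbo.Transactions;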
Mike C (7/3/2010)
There has been a lot of testing around this very subject, and the general result is that procedural code does work best in optimized procedural languages. A well-written SQL CLR stored procedure does in fact outrun an equivalent cursor in the tests I've seen. Adam Machanic has performed some testing in this area; you might check out his blog.
Thanks for the link. But there are some problems with .NET integration.
1. You have to learn a new language (C#)
2. You have to learn a new platform (.NET)
3. You have to learn a new object model
I think there's no need to debate that you CANNOT write a procedure WELL in any language after two days, two weeks, or (maybe) even two months of learning it. And you'll have to write much more code in C# than in T-SQL with cursors. You'll have to solve the problems of distributing all these assemblies - all the problems that come with integration. As for me, I'd prefer to always stay within the frame of one language and one environment. Maybe that will come true when there's no T-SQL at all but only C# and LINQ...
1. If you're writing extended stored procs already then you already know C or C++. If you know C or C++, C# is not a big learning curve. In fact a lot of the base C# functionality is similar to the MFC and STL library functionality that C++ programmers are familiar with (well, except that it's been made easier to use).
2. The .NET platform is well documented, the extended stored proc API is not so well documented. Learning the intricacies of the deprecated extended stored proc API is going to be much tougher than learning the well-documented .NET classes.
3. See #2. Unless you're infusing your XPs with the goodness that is COM (we won't even address how much fun learning COM from scratch is) or something then you're not particularly working with a well-defined "object model" to begin with. Assuming someone doesn't know COM already, then they would be learning their first object model with .NET -- which, for my money, is a much simpler object model than COM.
Mike C (7/3/2010)
Another solution to this specific problem might be to perform the calculation in your data flow if you are loading the data via an ETL tool.
Mike C
Thanks, that's a reason to get acquainted with ETL 🙂
That reminds me -- another solution would be to use a tool that's built specifically for these types of aggregations and calculations--something like SSAS.
Thanks
Mike C
July 5, 2010 at 2:27 pm
Skipped overquotes.
Mike C (7/5/2010)
Hay slavut, no problem. I can't speak for the author, but this being a series my expectation is that he plans to introduce more complex examples in later installments.
Let's see. Hope for the best. But the first two parts, IMHO, rate "3 of 5" or worse in the sense of their examples. Brilliant examples expected later can't fix that.
Mike C (7/5/2010)
Here are a few reasons why XPs are deprecated:
1) Extended stored procedures run as unmanaged code in the SQL Server process space. They can very easily corrupt the SQL Server memory space. In addition to the annoyance of having to restart the SQL Server service when the memory space gets corrupted, this poses a potential security risk since process/memory space corruption often leads to backdoors into the system.
2) The extended stored procedure API was never well documented. There are a lot of potential pitfalls to trip up programmers on their first few attempts at writing extended stored procedures, and they won't know about them until after they've attempted to write one.
3) Because extended stored procedures are unmanaged, you are completely responsible for managing the resources your XP uses. Poorly written XPs have caused a lot of memory leaks over the years, many of them severe (i.e., requiring regular reboots to clear them up).
As you point out, .NET will cost you a bit of efficiency but even with that small efficiency hit SQL CLR has proven to be more efficient at row-by-row procedural processing than T-SQL cursors in many cases. If the small efficiency hit you take with .NET is a concern, I'd think the far worse hit you take with a T-SQL cursor would be a bigger concern.
Mike C (7/5/2010)
1. If you're writing extended stored procs already then you already know C or C++. If you know C or C++, C# is not a big learning curve. In fact a lot of the base C# functionality is similar to the MFC and STL library functionality that C++ programmers are familiar with (well, except that it's been made easier to use).
2. The .NET platform is well documented, the extended stored proc API is not so well documented. Learning the intricacies of the deprecated extended stored proc API is going to be much tougher than learning the well-documented .NET classes.
3. See #2. Unless you're infusing your XPs with the goodness that is COM (we won't even address how much fun learning COM from scratch is) or something then you're not particularly working with a well-defined "object model" to begin with. Assuming someone doesn't know COM already, then they would be learning their first object model with .NET -- which, for my money, is a much simpler object model than COM.
Clear.
1) and 3) are the same. To be brief, you've got two points:
a) ESPs are unmanaged and can do harm
b) the ESP API is undocumented and may be misused, so ESPs can do harm
For me the ESP API was much easier than .NET to start with. I've written some XPs (in Delphi). In fact I had to learn half a dozen or a dozen API functions and two stored procedures to register/unregister the ESP. As I was familiar with such matters as pointers and calling conventions, I didn't find any critical difficulties.
I should note that I said nothing above about using COM in your (or my) ESP and the difficulties connected with it. But if I wanted to use COM, the only thing I would do is call some COM objects. Calling COM objects requires a minimum from the programmer whatever language you use. In fact, calling COM from within T-SQL requires more code than in Delphi or VBA or VBScript.
Looking at it more broadly, both quotes above are in essence ".NET vs. the Win API and why .NET is better", or "why managed is better than unmanaged". I don't want to start just another holy war, and I doubt we'd say anything new. My proposal is not to discuss further in this direction.
Mike C (7/5/2010)
Absolutely. T-SQL needs to catch up to the standard in this regard. There are several Connect items posted by Itzik Ben-Gan that address this lack of functionality--you might consider voting for them. When it does finally make it's way into T-SQL you will be able to use Windowing functions in your set-based code that allow you to perform substantial calculations without having to rescan tables to perform subquery calcs.
I'll try.
Mike C (7/5/2010)
That reminds me -- another solution would be to use a tool that's built specifically for these types of aggregations and calculations--something like SSAS.
Thanks
Mike C
I'll try it, thanks.
July 5, 2010 at 3:26 pm
slavut (7/5/2010)
Skipped overquotes.
Mike C (7/5/2010)
(requote of the XP-deprecation reasons and the C# points snipped; see the two posts above)
Clear.
1) and 3) are the same. To be brief, you've got two points:
a) ESPs are unmanaged and can do harm
b) the ESP API is undocumented and may be misused, so ESPs can do harm
Well, you did ask for a "couple" of reasons. But not to put too fine a point on it, let me restate:
1) XPs are a potential security risk (they can be abused to create security holes in the unmanaged SQL Server process space).
2) The XP API is not documented very well. It's a left-over from the Sybase days.
3) XPs do not automatically manage memory, which can result in memory leaks that may require regular reboots to clear up.
You might consider #1 and #3 the same; I consider them to be different issues. One is a serious security issue; the other is generally an inconvenience (albeit a *bad* inconvenience in a production setting) that occurs when the XP does not properly release memory.
For me the ESP API was much easier than .NET to start with. I've written some XPs (in Delphi). In fact I had to learn half a dozen or a dozen API functions and two stored procedures to register/unregister the ESP. As I was familiar with such matters as pointers and calling conventions, I didn't find any critical difficulties.
I should note that I said nothing above about using COM in your (or my) ESP and the difficulties connected with it. But if I wanted to use COM, the only thing I would do is call some COM objects. Calling COM objects requires a minimum from the programmer whatever language you use. In fact, calling COM from within T-SQL requires more code than in Delphi or VBA or VBScript.
Looking at it more broadly, both quotes above are in essence ".NET vs. the Win API and why .NET is better", or "why managed is better than unmanaged". I don't want to start just another holy war, and I doubt we'd say anything new. My proposal is not to discuss further in this direction.
I just mentioned COM because you mentioned an Object Model -- and the XP API doesn't really have a formal one. I would agree that calling COM objects is not all that difficult; BUILDING them, however, is a little trickier. You asked for the reasons why XPs are deprecated, and I've given them to you. If you're already heavily invested in the XP API and unmanaged code, and don't want to learn .NET, you should definitely run with that.
Mike C (7/5/2010)
That reminds me -- another solution would be to use a tool that's built specifically for these types of aggregations and calculations--something like SSAS.
I'll try it, thanks.
No problem 🙂 The nice thing about SQL Server and the Microsoft BI stack in general is that there are often many ways to attack a single problem. Whether it's cursors, XPs, or .NET CLR assemblies as you point out, or an external tool like a business logic layer in the middle tier, a front-end application like Excel, an ETL tool, or a specialized tool like SSAS, there's often more than one way to skin the cat.
Thanks
Mike C
July 19, 2010 at 7:44 am
I've got a horrendous cursor problem, to do with calculating interest owed on accounts. The summary is that rather than use a simple calculation to work out the interest, the method is as follows:
for each relevant record (in the 100,000s) (accessed via a cursor)
for each interest period (another cursor), i.e. Mar - Jun might be on a different interest rate from Jul-Aug
for each day within the interest period, calculate the daily interest, round down to the nearest 0.001p, and then add to the total - move to the next day
Rather than put the whole mess down, I'll only include the inner loop for now, as that's the one I'm most confused by:
Any thoughts welcome - I'm thinking of looking into a CLR, but the only other time I tried one (in a previous job) I was badly burned when it performed fine under load testing only to crash in production. The company I work for now has very set development methodology, so I'll only be able to push for CLR if it demonstrates an amazing improvement!
Thanks!
declare
     @DailyRate     float
    ,@Amount        money
    ,@Period        int
    ,@Slice         bit
    ,@DailyInterest float

select @Amount    = 1000 * 100   --actual amount * 100 (work in pence so floor() can be used)
select @Slice     = 0
select @Period    = 366
select @DailyRate = 0.003 / 100  --daily rate / 100

while (@Period > 0)
begin
    select
         @Period        = @Period - 1
        ,@DailyInterest = @Amount * @DailyRate
    --print '@DailyInterest before ' + cast(@DailyInterest as varchar(20))

    if @DailyInterest < 1
        --under a penny: keep a tenth-of-a-penny precision and remember we did so
        select
             @DailyInterest = floor(@DailyInterest * 10) / 10
            ,@Slice = 1
    else
        --a penny or more: round down to a whole penny
        select @DailyInterest = floor(@DailyInterest)
    --print '@DailyInterest after ' + cast(@DailyInterest as varchar(20))

    select @Amount = @Amount + @DailyInterest
    --print cast(@Amount as varchar(20)) + ' - ' + cast(@DailyInterest as varchar(10))
end

--print '@Amount before ' + cast(@Amount as varchar(20))
if @Slice = 0
    select @Amount = floor(@Amount);
else
    select @Amount = floor(@Amount * 10) / 10;
--print '@Amount after ' + cast(@Amount as varchar(20))

select @Amount = floor(@Amount) / 100   --back to pounds
--print '@Amount after ' + cast(@Amount as varchar(20))
select @Amount
July 19, 2010 at 4:16 pm
kjc-1057161 (7/19/2010)
I've got a horrendous cursor problem, to do with calculating interest owed on accounts. The summary is that rather than use a simple calculation to work out the interest, the method is as follows:
for each relevant record (in the 100,000s) (accessed via a cursor)
for each interest period (another cursor), i.e. Mar - Jun might be on a different interest rate from Jul-Aug
for each day within the interest period, calculate the daily interest, round down to the nearest 0.001p, and then add to the total - move to the next day
Rather than put the whole mess down, I'll only include the inner loop for now, as that's the one I'm most confused by:
Any thoughts welcome - I'm thinking of looking into a CLR, but the only other time I tried one (in a previous job) I was badly burned when it performed fine under load testing only to crash in production. The company I work for now has very set development methodology, so I'll only be able to push for CLR if it demonstrates an amazing improvement!
Thanks!
(cursor code snipped; it's the same script as in the post above)
Tell ya what.... you document that mess properly (like why the condition for IF DailyInterest < 1) and I'll show you how to do it without a cursor and it'll do a million rows in just a couple of seconds.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 19, 2010 at 6:03 pm
kjc-1057161 (7/19/2010)
I've got a horrendous cursor problem, to do with calculating interest owed on accounts. The summary is that rather than use a simple calculation to work out the interest, the method is as follows:
It looks like a simple compound interest problem, although I have to agree with Jeff that the way you're doing it here is not exactly intuitive. All those extra divisions by 100 and multiplications by 1000 don't seem to make sense. Maybe you can explain some of the extra operations? You can calculate compound interest pretty easily without loops and cursors. There's a very good discussion of it here: http://en.wikipedia.org/wiki/Compound_interest
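Without the daily rounding, the whole loop collapses to the closed-form formula A = P * (1 + r)^n from that article. A minimal sketch using the figures from the posted code (and deliberately ignoring the floor-to-the-penny step, which is exactly where the two results will diverge):
-- Closed-form compound interest: no cursor, no WHILE loop.
-- Values mirror the posted example; the daily floor() rounding
-- is intentionally omitted here.
DECLARE @Principal money = 1000.00,  -- P
        @DailyRate float = 0.00003,  -- r (0.003% per day)
        @Days      int   = 366;      -- n

SELECT CAST(@Principal * POWER(1e0 + @DailyRate, @Days) AS money) AS FinalAmount;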
July 20, 2010 at 2:38 am
Thanks for both responses! The problem isn't the calculation of compound interest, but the rounding issue, hence all the floor() functions. Each day, the interest is calculated only in whole pence (rounded down). I need not add that this is inherited code!
I've trimmed some of the extraneous bits to leave just the basics - I've also added an @AmountSimple variable to calculate normal compound interest alongside, so you can see the difference it makes. If this were all that was required, I'm sure I could rewrite it as a mathematical calculation (thanks for the Wikipedia link!), but I'm pretty sure the rounding throws this out.
declare
     @DailyRate      float
    ,@Amount         money
    ,@AmountSimple   money
    ,@Period         int
    ,@Slice          bit
    ,@DailyInterest  float
    ,@SimpleInterest float

select @Amount       = 1000 * 100   --give amount in pence, so that floor can be used
select @AmountSimple = @Amount
select @Slice        = 0
select @Period       = 500
select @DailyRate    = 0.003 / 100  --fairly sure this is converting % to decimal

while (@Period > 0)
begin
    select
         @Period        = @Period - 1
        ,@DailyInterest = @Amount * @DailyRate

    --for simple interest, use the daily interest as-is;
    --for our calculation, round the interest DOWN to the nearest penny
    select @SimpleInterest = @DailyInterest
    select @DailyInterest  = floor(@DailyInterest)

    --increment both amounts by the relevant interest
    select @Amount       = @Amount + @DailyInterest
    select @AmountSimple = @AmountSimple + @SimpleInterest
end

--round both amounts down and present in pounds
select @Amount       = floor(@Amount) / 100
select @AmountSimple = floor(@AmountSimple) / 100

select @Amount, @AmountSimple
July 20, 2010 at 6:02 am
Btw, the bits I've trimmed out from the first time around are to do with when the interest calculated is < 1 - in this case, a bit more precision is kept, rather than no interest at all being added...
I'm sure this method has saved literally pounds over time.. and after all, everyone knows that developer time comes for free! 😉
July 20, 2010 at 9:46 am
4. To be more constructive, let me introduce a real-life example. Let's say you are a stockbroker's programmer. Let's say your 10,000 clients make 100,000 deals per day, and 60,000 of those are made by one special client.
4.1. Your aim is to calculate the commission for each deal by the following rules:
4.1.1. The "base" commission is calculated as deal_volume * rate, i.e. quantity_of_securities * deal_price * rate.
4.1.2. If the running total of 4.1.1 (the sum of 4.1.1 over the client's previous deals plus the current deal) is more than some border_for_this_client, then the commission is 0; otherwise the commission equals 4.1.1.
The problem is to develop a procedure to calculate the commission for each deal. The set-based variant sucks because you have to calculate a running total over 60,000 records. It sucks if you have every index you can imagine. It sucks if you're a moron and don't know about indexes at all.
I would argue that such business logic does not belong in the database. A middle-tier component would likely handle this better than the database, as it can break the request up into multiple threads, and there are better tools for testing business logic there. In addition, it is cheaper to scale out the middle tier than the database tier. So if you have lots of situations like this running simultaneously, you are going to create a bottleneck in your database. A DBMS was not made to handle every type of business logic.
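For completeness: the running-total rule in 4.1.2 maps directly onto the windowed SUM that arrived with SQL Server 2012's full OVER clause, mentioned earlier in the thread. A minimal sketch with hypothetical table and column names:
-- Base commission per deal, zeroed once the client's running total
-- of base commission (including the current deal) exceeds that
-- client's border. All object names are hypothetical.
SELECT  d.DealID,
        d.ClientID,
        d.Quantity * d.Price * d.Rate AS BaseCommission,
        CASE WHEN SUM(d.Quantity * d.Price * d.Rate)
                  OVER (PARTITION BY d.ClientID
                        ORDER BY d.DealTime, d.DealID
                        ROWS UNBOUNDED PRECEDING) > c.Border
             THEN 0
             ELSE d.Quantity * d.Price * d.Rate
        END AS Commission
FROM    dbo.Deals d
JOIN    dbo.Clients c
  ON    c.ClientID = d.ClientID;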