April 4, 2010 at 4:08 am
The Dixie Flatline (4/3/2010)
Including moderation.

To heck with Russell's Paradox.
[font="Verdana"]Easily resolved with[/font] OPTION (MAXRECURSION ?)
April 4, 2010 at 1:18 pm
I think it's great that you can get really well-thought-out, high-performing solutions on the forums, and they should definitely be suggested. Whether or not you have time to implement them in your particular scenario is a decision you can make.
Like Bob said, I seem to remember from past posts that *you* definitely have the ability to do any of these things and much more; so I hope you didn't take anything I said as calling you incompetent either. I play a lot of devil's advocate and was debating the statement in general, not as it applied to you. I try to keep in mind that topics on these forums often continue to help people for years and will be all over the internet forever, so I try to cover more of the bases.
That last bit is partly why, even though many of us have hacked together code in our own environments, we don't often suggest it here. Usually, when I'm going to post any code online, even if it's code I'm currently using for something, it goes through another review cycle to see if I'm doing anything that could be improved upon.
April 4, 2010 at 1:53 pm
The Dixie Flatline (3/26/2010)
If a report is run once or twice a day, and you can take the runtime down to 10 seconds instead of several minutes, is it really worth the human effort and extra disk space to rethink your indexing strategy for the sake of dropping the time down to two seconds?
I'll say that it is. 🙂 It's called "practice" for the other 200 reports that only take "several minutes". It will also serve as a better example of how to write code in the future, so that when someone copies the code because "it's close to what is needed", it actually stands a chance of being performant and maybe teaches someone "Divide'n'Conquer" and a couple of other high-performance techniques.
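For anyone new to the term, here's a minimal sketch of the "Divide'n'Conquer" idea - all table and column names invented for illustration. Rather than one giant all-in-one query, materialize the small, expensive piece first, then join to it:

-- Step 1: isolate the expensive aggregation into a small interim
-- result set (hypothetical schema).
SELECT   CustomerID,
         SUM(Amount) AS TotalAmount
INTO     #CustomerTotals
FROM     dbo.OrderDetail
WHERE    OrderDate >= '20100101'
GROUP BY CustomerID;

-- Step 2: join the small interim result to the rest of the data
-- instead of dragging the whole detail table through the join.
SELECT c.CustomerName,
       t.TotalAmount
FROM   #CustomerTotals AS t
JOIN   dbo.Customer    AS c
       ON c.CustomerID = t.CustomerID;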
--Jeff Moden
Change is inevitable... Change for the better is not.
April 4, 2010 at 2:03 pm
kramaswamy (4/2/2010)
Sometimes I read comments on these forums, and others, where people recommend very elegant solutions to problems; however, they don't always take into account the amount of time that would be required to implement those solutions correctly.
Heh... as you say, "on the flip side"... it's amazing to me how much time people have to implement incorrect solutions that will need to be reworked over and over rather than spending just a little extra time doing it right. Last I heard, doing it right was supposed to be part of the job, and anything that will break in the face of even small scalability changes just isn't doing it right.
I will agree that some of the "very elegant solutions" seem a bit long-winded to the person asking the question. What people don't understand is that they usually leave out 7 little but very important words from the phrase "Good enough", and those are "for now, but it will break later." 😛
And, nope... not picking on you... your post just gave me a chance to vent a bit on a sore spot. Thanks. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
April 4, 2010 at 2:45 pm
Jeff Moden (4/4/2010)
It will also serve as a better example of how to write code in the future, so that when someone copies the code because "it's close to what is needed", it actually stands a chance of being performant and maybe teaches someone "Divide'n'Conquer" and a couple of other high-performance techniques.
This is another good point that is tough to rebut, because code does get perpetuated. (Ideally, code reviews should take care of that, but it's not an ideal world.) But I still stand by the perspective of my example. Going from 10 to 2 seconds, even if there are 100 such monthly reports, represents saving only 800 seconds (just over 13 minutes) of machine time a month.
What does it cost to save those 13 minutes, and how much are they really worth to the organization paying the developer's salary? Couldn't the developer's time be better spent improving the performance of transactions that run hundreds of times a day? Or developing new transactions that are going to make or save the company a lot of money? The point is that you can't always look at one problem in isolation. You have to look at everything the organization has to do and get the most bang for the buck out of everyone's time.
Jeff, you know I can be a real stickler about performance times, and I would never argue that people shouldn't improve, learn better techniques, and put them into practice. I no longer work with temp tables like I used to years ago, and I just spent weeks rewriting one of our core legacy procedures so that it was a single nasty-fast query instead of a set of queries with temp tables.
It took weeks because I explored and tested alternatives, coding and recoding, building and expanding indexes, and testing again, until it was flexible and as fast as I could make it. But the proc gets called hundreds, if not thousands, of times a day and evaluates thousands upon thousands of rows out of hundreds of millions. I could not have invested that kind of effort in a single monthly report.
Yes, it is possible that bad code can grow and be perpetuated until it brings a system to its knees, and it is also possible that a missed deadline could cost a company a lot of money because someone was using a report for "practice".
I'm just saying it's a judgment call. 🙂
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
April 4, 2010 at 3:06 pm
The Dixie Flatline (4/4/2010)
Going from 10 to 2 seconds, even if there are 100 such monthly reports, represents saving only 800 seconds (just over 13 minutes) of machine time a month.
Developer A writes the 10-second report. It performs 1.5 million logical reads, uses 144 seconds of CPU, and uses 2GB of memory.
Developer B writes the 2-second report. It performs 94,000 logical reads, uses 2 seconds of CPU, and uses 74MB memory.
Fact 1: Developer A has written most of the code base.
Fact 2: There is an unsigned $25M purchase order for new hardware on your desk.
Spot the connection 🙂
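(For the curious, figures like the reads and CPU above can be captured with SQL Server's session-level statistics - a minimal sketch, with the report query itself left as a placeholder:)

SET STATISTICS IO ON;    -- per-table logical/physical read counts
SET STATISTICS TIME ON;  -- CPU time and elapsed time

-- ...run the report query here...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;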
April 4, 2010 at 9:11 pm
That argument ignores the context, the concept of opportunity cost, the concept of diminishing returns, and the value of human time. All it does is use the ancient dialectic trick of pushing an example past its scope to the point of absurdity.
It works both ways.
Developer A listens to Developer B about the need to optimize performance. They then spend all year going back through legacy code, tweaking it for performance gains even though no one is complaining about performance bottlenecks. They compel the other developers to do it as well so they can all learn better. It doesn't matter if it's a core transaction or a quarterly report, or if the hardware is showing no strain; they work at saving cycles and logical reads. At the end of the year, they go to brag to their boss about saving the company the cost of hardware and are told that, because of late-delivery penalties and cancellations on several key contracts, the company is going under. They are handed their final paychecks. Spot the connection?
Is the above example outrageous? No more so than a company needing to spend $25 million to get 100 monthly reports in about 3 minutes instead of about 17.
Again and again and again, it's a judgment call. A company just might want developer time spent on getting out new applications to increase net profits by $50 million as opposed to saving $25 million in hardware costs. But they also want those monthly reports. Sometimes, "optimal" is knocking out the report in an hour instead of four hours so more time can be spent on a more profitable project. If it can be done in a reasonable time with optimal performance, great.
Hierarchical databases used to outperform relational databases by such a wide margin that SQL was considered good for reporting only. A relational DBMS processed transactions more slowly and ate up more disk space. Why did relational survive and grow in market share? Because it was more flexible and it saved developer time. Those advantages proved to outweigh the additional hardware costs.
If y'all want to say that discussion of human costs and project priorities is irrelevant in this forum, and that all discussions are to be confined to finding the optimally performing solution only, then we're done. I've already conceded (quite readily) that temp table solutions are slower than well-written queries supported by proper indexes. Time to move on.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
April 5, 2010 at 1:00 am
The Dixie Flatline (4/4/2010)
That argument ignores the context, the concept of opportunity cost, the concept of diminishing returns, and the value of human time. All it does is use the ancient dialectic trick of pushing an example past its scope to the point of absurdity.
Of course it does - doing anything else would (a) not have made my point for me; and (b) not have been amusing.
So...we're saying 'it depends'?
Actually, to make a serious point for a moment, it is always a trade-off. I do try to write efficient code, and produce reasonably sound designs, because I value my future time almost more than present time. It is always a compromise, of course - and I have hacked more than my fair share of quick and dirty fixes over the years.
April 5, 2010 at 6:25 am
Absolutely, it depends. ("It depends, it depends." sounds redundundant.)
To me that means the same thing as "It's a judgment call." What I'm curious about is under what circumstances it depends (in other people's judgment). Most of the regulars here are people who can appreciate the need to balance and optimize systems, but not everyone turns that same critical eye on organizations, or on themselves as components in those organizations.
I've known some people who would tell you all about the sins of operating without a tested backup/recovery scheme, of having single points of failure, or of having bottlenecks that could be solved by a more distributed architecture. But these same people would be quite content to have an organization in which they themselves were the bottleneck, or single point of failure.
I suppose if I want to hear some debate on that, I should really start another thread.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
April 5, 2010 at 6:55 am
The Dixie Flatline (4/4/2010)
This is another good point that is tough to rebut, because code does get perpetuated. (Ideally, code reviews should take care of that, but it's not an ideal world.) But I still stand by the perspective of my example. Going from 10 to 2 seconds, even if there are 100 such monthly reports, represents saving only 800 seconds (just over 13 minutes) of machine time a month.
You've changed the "stats". Previously you said the code took "several minutes", not 10 seconds. I'd probably let 10-second code sit a bit until I'd resolved the larger or more frequent needs but, as you say, "It Depends"...
If these reports are only called a time or two a day, then there are likely other things that need to be done first. However, if these 10-second reports are called hundreds of times a day, then there would be some urgency in reworking this 10-second code to run not in 2 seconds, but in something less than 1 if possible.
The other thing is, if these 10-second reports are customer-facing, it may be in the company's best interest to make them run faster because of customer perception.
Yes, it is possible that bad code can grow and be perpetuated until it brings a system to its knees, and it is also possible that a missed deadline could cost a company a lot of money because someone was using a report for "practice".
I'm just saying it's a judgment call. 🙂
Heh... I know you know better than this, but I'll ask the question anyway... why does everyone think that writing good code to begin with is going to mean a missed deadline?
--Jeff Moden
Change is inevitable... Change for the better is not.
April 5, 2010 at 10:19 am
Paul White NZ (4/5/2010)
So...we're saying 'it depends'?

Actually, to make a serious point for a moment, it is always a trade-off. I do try to write efficient code, and produce reasonably sound designs, because I value my future time almost more than present time. It is always a compromise, of course - and I have hacked more than my fair share of quick and dirty fixes over the years.
Yes, it does depend. I would prefer to write efficient code. I would also love to go back and improve a lot of the code that is out there. The problem is time. If it comes my way and I have time to do it more efficiently, I will do it as efficiently as I know how. Of course, there are occasions when something comes my way that looks like crap and turns my head to mush, but it works fine. I might be inclined to leave it alone until a later date when I better understand what the code is supposed to be doing.
I value my future time a lot, and spending a little time now to save a lot of time later is often better for me. It is like trying to fix a bug. There are two schools of thought: band-aid the bug after it is released because there is a deadline, or fix it properly before it gets released. The band-aid approach often leads to several more hours of redevelopment and troubleshooting time because people forget about the bug. Fixing it properly before the code is released to production may cost you the deadline. Which costs more? I hate to say it, but it depends.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 6, 2010 at 7:49 am
My situation's actually kind of a good example for this discussion. See, when I joined the company I'm in right now, it was just after I graduated from university. I had absolutely zero SQL experience. As I went along, I picked it up second-hand - other people in the company had written code I gleaned knowledge from, and whatever they hadn't covered, I looked up online, taking examples from other people's code.
So, if I look through the linear progression of my code, it's pretty clear that there's a great deal which could be improved upon. Every so often, I stumble across a piece of code I wrote a year ago that is doing a "SELECT *", or some SQL written directly in the ASP page instead of through a stored proc, and I sit down and correct it when I notice it - but I've never gone through the code for the express purpose of doing that.
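To make that concrete, here's a hypothetical before/after of the sort of cleanup I mean - object names invented:

-- Before: ad hoc SQL embedded in the ASP page, something like
-- SELECT * FROM OrderHeader WHERE CustomerID = 42

-- After: a stored procedure that names only the columns the page uses.
CREATE PROCEDURE dbo.GetOrderSummary
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate, TotalDue   -- explicit list, no SELECT *
    FROM   dbo.OrderHeader
    WHERE  CustomerID = @CustomerID;
END;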
I think it could be an interesting summer job for an intern to go through and catalogue all the areas of the code where SQL improvements could be made, and then give them a weighting in terms of their current cost relative to their current use, as well as the cost to fix. But barring that, I don't really see it being a feasible option to rewrite code simply to make it more elegant and performant.
Yes, it's most certainly true that bad code adds up, and eventually your system's going to start feeling the negative impacts - but, at least as far as I'm concerned, you can't really devote time solely to going back and cleaning up code, because you're going to have nothing concrete to demonstrate for your efforts.
April 6, 2010 at 8:08 am
Good points. In my experience, existing database-hitting code is generally only revisited for 3 reasons:
1) fix a bug
2) add a feature
3) performance becomes unacceptable.
#3 lends itself very nicely to "outside help" and is one of the things I have focused on in my consulting since there is no shortage of it in the wild. 😎
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
April 7, 2010 at 1:37 pm
Jeff, my example used daily reports - not monthly, as I kept saying later - but I didn't change anything about the runtimes. Here is what I posted and what I meant:
If a report is run once or twice a day, and you can take the runtime down to 10 seconds instead of several minutes, is it really worth the human effort and extra disk space to rethink your indexing strategy for the sake of dropping the time down to two seconds?
1. The suboptimal solution drops the runtime to 10 seconds. It was quick and dirty, but it was accomplished quickly.
2. Rethinking the index strategy to get rid of the temp table reduces the runtime from 10 seconds to two. Performance is much better, but more time is invested and the time saved is not nearly as significant, so it's a diminishing return on investment (a sketch of that kind of index rethink follows below).
However, being run daily, it might have a detrimental impact on transaction performance. If it were part of a nightly report-generation run, it might not.
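(The sketch promised above - purely hypothetical names - of the point-2 index rethink: a covering index that lets the report resolve its rows in one seek, removing the reason for the interim temp table.)

CREATE NONCLUSTERED INDEX IX_OrderDetail_OrderDate
    ON dbo.OrderDetail (OrderDate)
    INCLUDE (CustomerID, Amount);   -- covers the report's columns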
Even I wouldn't think of tolerating a report query that ran for several minutes.
As for writing good code to begin with, I work very hard to do that now. It seems that not a week goes by that I don't find something new and better. But everyone is on a learning curve and everyone has a lot of demands on their time. (At least everyone I work with.)
Finally, the missed deadline example was supposed to be absurd.
I'm going to be offline for a good part of the next 6-7 days. Y'all take care.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
April 7, 2010 at 5:46 pm
kramaswamy (4/6/2010)
I think that it could be an interesting summer job for an intern,...
Heh... a summer intern evaluating code for performance problems? I believe that probably won't help.
...because you're going to have nothing concrete to demonstrate for your efforts.
I always have something concrete in this area because I always do a "Before'n'After" set of tests and measurements. You can bet consultants like Kevin do...
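A minimal sketch of what such a "Before'n'After" harness can look like - both versions run against the same data, with the queries themselves left as placeholders:

DECLARE @Start DATETIME2(7) = SYSDATETIME();

-- ...old version of the code here...

SELECT DATEDIFF(MILLISECOND, @Start, SYSDATETIME()) AS OldVersionMs;

SET @Start = SYSDATETIME();

-- ...new version of the code here...

SELECT DATEDIFF(MILLISECOND, @Start, SYSDATETIME()) AS NewVersionMs;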
--Jeff Moden
Change is inevitable... Change for the better is not.