October 31, 2009 at 11:36 am
Comments posted to this topic are about the item The Rich Rewards of Tuning SQL
November 1, 2009 at 10:50 am
Excellent points.
I think that you ought to try to tune your code, and you ought to go back and refactor it when you can. However, deadlines and time pressures can force you to compromise. At some point, it does make sense to throw hardware at it. In web development, where you can deploy another server and scale out, I think it's harder to justify spending a lot of time tuning when for $2-3k you can throw a lot of hardware into 1 or 2 more servers. If a developer costs $70k, you can eat up $2k of that time quickly.
The other issue with many web developers is talent. So many programmers simply don't know how to tune code tightly.
In SQL Server, you have a single server. To scale up, you replace the server, often with much more hardware, and that can easily run $10k. So it makes more sense to spend time tuning. There are still limits: you can spend too much time tuning and not get new development done, but tuning is more worthwhile because a new server costs more money. You will also run into scale limits. At some point you can't buy a bigger database server to handle the load from poor code.
In either case, developers (SQL or otherwise) need to learn better coding techniques: how to write code that performs better with fewer resources.
November 1, 2009 at 11:31 am
Hi Tony
You've taken the words right out of my mouth. In my opinion, writing faster code is not only a question of skill but also of mindset. Writing tuned code is something people should internalize.
The best tuning is proactive and done in the architecture and system design! Tuning once a system has become slow is reactive and often ends up with hacks and a "hope the performance is enough, for now" solution.
Greets
Flo
July 31, 2010 at 7:56 pm
I know this is an old post, but I had to say "Hear, hear!" to it. Great post, Tony. Phil is absolutely correct about how even the worst code on certain challenges simply blows away what a lot of folks buy hotter iron for... and the hotter iron frequently doesn't work at all.
Here's to all those folks that go the extra mile in code! Well done! {insert sound of someone clinking a beer and drinking it heartily}
--Jeff Moden
Change is inevitable... Change for the better is not.
December 25, 2014 at 9:13 pm
Wow, great insights into tuning and performance management.
December 26, 2014 at 8:48 am
No offense, but I think most of you are wearing rose-colored glasses here.
Specifically, you have the following points backwards:
1) Failing to deliver a working product is guaranteed to fail your project, even if the version you deliver on time has bottlenecks.
2) Foreseeing *ALL* the bottlenecks can really only be done by top people. Most of us will have to see the system working to identify all the bottlenecks, even if we knew where some of them would be.
3) Foreseeing the application's scale and the investment available for it is something most of us will not be able to do from the trenches. These decisions often get made by management and bean counters. You will have to work with them, not complain about them.
4) The implication of #3 is that while it might be cheaper in TCO terms to build a tuned app from the beginning, the value/funding may not exist until the app reaches the point where it hits the bottleneck. By then, iron may be 10x cheaper.
5) Throwing iron at a problem doesn't just make one thing faster; it usually makes most of the app faster. This means you're not comparing a single problem to the cost of iron; you're comparing the overall app benefit to the cost of new iron.
6) Before you buy iron, you obviously should identify what part of the iron to invest in. If your server is not CPU bound, buying faster CPUs will not help much.
It's my opinion that the best solutions are not the "slick" ones that take a crappy architecture and make one feature blazing fast. The best solutions are the ones that let a junior person come in, learn one standard for what goes where, learn one standard for coding (perhaps with a couple of reference implementations), and have monitoring already in place that yells at them when they get something wrong, before it makes it to production (examples: integration tests, continuous integration, continuous deployment).
In a properly delivered design, you would test at a scale appropriate to where you see the application going in the near future. I am *NOT* advocating premature optimization. I am advocating that you deliver something tested against the requirements you have (or should have).
Most of the time, you can get one of the top answers just by implementing obvious patterns over and over. Once that stops working, you should implement a new pattern, not just a one-off.
I hate to see crappy apps as much as the next guy, but if we're talking about investment vs rewards, we should really talk about this from the business point of view, because that's how we deliver the value other people care about.
Just my two cents.
December 26, 2014 at 10:02 am
Of course, buying new hardware can buy you some time. But how much time will you get if your app is full of crappy code? There's no piece of hardware that can handle massive nested RBAR. Some time ago, I managed to take a query that ran for over 30 hours without finishing and get it to run in under 5 minutes. I assure you, more power wouldn't have been a solution, as the time to procure it would have been longer than the time it took to tune the code.
Yes, you might be going one step at a time, but you know that the fix will last. Throwing hardware at a problem is a fast but temporary solution that will demand attention again in the near future. What will happen when management finds out that the brand new server that solved the problem ended up with the same problem 6 months later? Is that cheaper?
It's not about pre-optimizing; it's about doing things right from the beginning and correcting anything that hasn't been done right. Changing configuration might also help improve performance across the whole system.
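As a rough illustration of what nested RBAR means (the table, columns, and logic here are entirely made up), compare a cursor that touches one row at a time with a set-based statement that does the same work in one pass; no amount of hardware makes the first approach scale like the second.

-- Hypothetical RBAR version: a cursor processing one row at a time.
DECLARE @OrderID INT;
DECLARE order_cursor CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE ShippedDate IS NULL;
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders
    SET    Status = 'Late'
    WHERE  OrderID = @OrderID
      AND  OrderDate < DATEADD(DAY, -30, GETDATE());
    FETCH NEXT FROM order_cursor INTO @OrderID;
END;
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- Set-based equivalent: one statement, one pass over the data.
UPDATE dbo.Orders
SET    Status = 'Late'
WHERE  ShippedDate IS NULL
  AND  OrderDate < DATEADD(DAY, -30, GETDATE());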
December 26, 2014 at 2:22 pm
Luis Cazares (12/26/2014)
Of course, buying new hardware can buy you some time. But how much time will you get if your app is full of crappy code? There's no piece of hardware that can handle massive nested RBAR. Some time ago, I managed to take a query that ran for over 30 hours without finishing and get it to run in under 5 minutes. I assure you, more power wouldn't have been a solution, as the time to procure it would have been longer than the time it took to tune the code.
Yes, you might be going one step at a time, but you know that the fix will last. Throwing hardware at a problem is a fast but temporary solution that will demand attention again in the near future. What will happen when management finds out that the brand new server that solved the problem ended up with the same problem 6 months later? Is that cheaper?
It's not about pre-optimizing; it's about doing things right from the beginning and correcting anything that hasn't been done right. Changing configuration might also help improve performance across the whole system.
+1000.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 26, 2014 at 6:36 pm
My experience is that application tuning does not have to be expensive, and it is usually far more cost-effective and faster than upgrading hardware.
I have a methodology that I use for tuning applications that use stored procedures. I run a trace for an hour or so covering all activity during peak load, load the trace into a table, and then run queries to identify the stored procedures that are the biggest aggregate consumers of CPU, reads, or writes. My usual experience is that tuning the top 10 stored procedures increases application performance dramatically.
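To make that concrete, here is a minimal sketch of the aggregation step. The table name dbo.PeakTrace and the file path are made up; the columns used (ObjectName, CPU, Reads, Writes, Duration) are the standard trace columns returned by sys.fn_trace_gettable.

-- Load the trace file into a work table (path and table name are hypothetical).
SELECT * INTO dbo.PeakTrace
FROM sys.fn_trace_gettable(N'C:\Traces\peak_load.trc', DEFAULT);

-- Top consumers by aggregate CPU; re-sort by TotalReads or TotalWrites as needed.
SELECT TOP (10)
       ObjectName,
       COUNT(*)                  AS Executions,
       SUM(CPU)                  AS TotalCPU,
       SUM(Reads)                AS TotalReads,
       SUM(Writes)               AS TotalWrites,
       SUM(Duration) / 1000000.0 AS TotalSeconds  -- Duration is recorded in microseconds
FROM dbo.PeakTrace
WHERE ObjectName IS NOT NULL
GROUP BY ObjectName
ORDER BY SUM(CPU) DESC;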
I originally developed this technique one day when our CIO decided to initiate an emergency effort to split a poor performing application to run across multiple SQL Servers. This would have never worked, and I realized we needed to do something quickly. Using traces, I was able to identify four stored procedures that used 97% of the total CPU usage, and with about 3 hours of tuning effort with 2 DBAs working on each procedure, reduced the average CPU usage from a steady 100% down to about 10%. Further tuning of additional procedures reduced it to about 7%. Problem solved in a way that additional hardware could never have done, and the effort was completed in less than 2 days.
Now I do this proactively for critical applications to identify stored procedures that could benefit from tuning. I run a trace, look at the top resource usage by stored procedure, and tune the ones that could benefit. It is important to continue tuning on a regular basis, since new or changed procedures may become performance problems.
December 26, 2014 at 6:47 pm
Let's be clear here.
Any query that takes 30 hours to finish isn't just poorly written; it's broken. Yes, you fix that stuff.
My normal metric for a query is that, if it's written correctly, it will typically run in < 100ms if it's simple, < 1 second if it's a report, or < 1 minute if it's along the lines of a year-end report over transactional data.
The thing is, if you run those year-end reports every day, then 1 minute is, for me, broken. It should be redone to run in < 100ms.
In no way should what I said be construed as saying that a system with hundreds of gigs of randomly generated/stored transactional data and crappy RBAR code can suddenly become usable if you throw more iron at it.
Not sure if that helps, or just adds more sparks on this subject.
December 27, 2014 at 3:08 am
My greatest fear is that some tech-debt-ridden piece of crud gets delivered to live and becomes my headache to support forever. I get penalised for not spending time on taking the data estate forward by the very people who saddled me with the albatross that is the reason I don't have time to take the data estate forward. The technical challenges of tuning the data turd are not the issue; it's the business processes that hamstring attempts to do so. You can't work on something without it being an approved stream of work, etc.
There is a reason that tech debt is so hard to fix.
I know of one DBA who was confronted by a development team telling him he was obsolete in a world where you scale out. His response was, "Why would you need to scale out for this amount of data?"
An even better one was when he was accused of being a one-trick pony: "Yes, but it's a bloody good trick, isn't it? We serve data to the customer 24/7/365, reliably, with single-digit millisecond response times."
Buying more tin is a dead-end solution. With per-CPU licensing, large tin costs big money. Let us suppose you are not CPU bound but are memory bound instead. A low-end server can only take a certain amount of memory. Move to a higher-grade server that can take the memory and BOOM: more CPUs and more cost.
There ain't no silver bullet, but tuning stuff up comes the closest.
December 27, 2014 at 10:35 am
jhgoodwin (12/26/2014)
No offense, but I think most of you are wearing rose-colored glasses here.
Specifically, you have the following points backwards:
1) Failing to deliver a working product is guaranteed to fail your project, even if the version you deliver on time has bottlenecks.
2) Foreseeing *ALL* the bottlenecks can really only be done by top people. Most of us will have to see the system working to identify all the bottlenecks, even if we knew where some of them would be.
3) Foreseeing the application's scale and the investment available for it is something most of us will not be able to do from the trenches. These decisions often get made by management and bean counters. You will have to work with them, not complain about them.
4) The implication of #3 is that while it might be cheaper in TCO terms to build a tuned app from the beginning, the value/funding may not exist until the app reaches the point where it hits the bottleneck. By then, iron may be 10x cheaper.
5) Throwing iron at a problem doesn't just make one thing faster; it usually makes most of the app faster. This means you're not comparing a single problem to the cost of iron; you're comparing the overall app benefit to the cost of new iron.
6) Before you buy iron, you obviously should identify what part of the iron to invest in. If your server is not CPU bound, buying faster CPUs will not help much.
It's my opinion that the best solutions are not the "slick" ones that take a crappy architecture and make one feature blazing fast. The best solutions are the ones that let a junior person come in, learn one standard for what goes where, learn one standard for coding (perhaps with a couple of reference implementations), and have monitoring already in place that yells at them when they get something wrong, before it makes it to production (examples: integration tests, continuous integration, continuous deployment).
In a properly delivered design, you would test at a scale appropriate to where you see the application going in the near future. I am *NOT* advocating premature optimization. I am advocating that you deliver something tested against the requirements you have (or should have).
Most of the time, you can get one of the top answers just by implementing obvious patterns over and over. Once that stops working, you should implement a new pattern, not just a one-off.
I hate to see crappy apps as much as the next guy, but if we're talking about investment vs rewards, we should really talk about this from the business point of view, because that's how we deliver the value other people care about.
Just my two cents.
It's a funny thing... you've provided us a numbered list of scenarios where you think some of us are looking through rose-colored glasses, and then you follow it up with the very things that we normally insist on to help prevent the items in the list (and more) from happening. 😉
There's much more to it, though...
Contrary to what you've implied, I've found (having been through several such upgrades) that the shift to heavy iron rarely improves everything, and what is improved is nearly trivial compared to what can be done just by following some well-established knowledge, such as the rules affecting SARGability and the understanding that good, high-performance, set-based code doesn't mean doing everything in one big query. Of course, it also means reworking some of the existing code (the subject of the article this discussion is attached to), but the fixes are normally trivial compared to the hardware cost and the huge hidden costs of migrating to "better" hardware, especially since hardware-induced scalability (up or out) requires the purchase of new hardware every 3 or 4 years just to "keep up" if "improved" hardware is the only "fix" applied.
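For anyone unfamiliar with the SARGability point, here is a minimal sketch (the table and column names are made up): the same filter written two ways, where only the second lets the optimizer seek an index on OrderDate.

-- Non-SARGable: wrapping the column in a function forces a scan of every row.
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2014;

-- SARGable rewrite: compare the bare column to a range so an index on OrderDate can be used.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20140101' AND OrderDate < '20150101';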
Every company that I've had the pleasure of working for has had the same problems as in your numbered list (and much more) and most of the people have had that "in the trenches" feeling. If you're interested, I could give you an overview of the things I've done to help change all that for those companies. As you've said, it does require an understanding of "investment vs rewards" but I've found that most people (not just managers, either) totally miss what that truly entails and end up investing in the wrong things for little reward.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 27, 2014 at 10:40 am
jhgoodwin (12/26/2014)
Let's be clear here.
Any query that takes 30 hours to finish isn't just poorly written; it's broken. Yes, you fix that stuff.
My normal metric for a query is that, if it's written correctly, it will typically run in < 100ms if it's simple, < 1 second if it's a report, or < 1 minute if it's along the lines of a year-end report over transactional data.
The thing is, if you run those year-end reports every day, then 1 minute is, for me, broken. It should be redone to run in < 100ms.
In no way should what I said be construed as saying that a system with hundreds of gigs of randomly generated/stored transactional data and crappy RBAR code can suddenly become usable if you throw more iron at it.
Not sure if that helps, or just adds more sparks on this subject.
Actually, that helps a lot and is more in line with what I just posted. It DID previously sound like you were saying that big iron was the "investment vs rewards" fix, especially since you stated that some of us might be wearing rose-colored glasses. 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
December 29, 2014 at 7:26 am
There always seems to be that balance to keep. Still, I try to write code as efficiently as I can the first time so there doesn't need to be a rewrite.
December 30, 2014 at 4:29 am
During development, writing code is only part of the story.
What I usually do is spend significant extra time in the data modelling phase.
With respect to performance, that means primarily thinking about how the data will be accessed and what this implies for the queries (I even write some test queries to help my thought process).
It can be surprising how much this influences decision making and clarifies problems before actual code writing even begins.
Once a good model, with minimal indexing as part of it, is in place, writing efficient code is much easier.
Schemas developed this way sometimes differ quite a bit from the usual starting point and require fewer resources.
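As a hypothetical illustration of letting the access path drive the model (all names, columns, and values here are invented), a test query written during modelling can dictate the minimal supporting index before any application code exists:

-- Hypothetical table for a system whose dominant access path is
-- "all open orders for a customer, newest first".
CREATE TABLE dbo.CustomerOrder
(
    OrderID     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerID  INT           NOT NULL,
    OrderDate   DATETIME2(0)  NOT NULL,
    Status      TINYINT       NOT NULL,  -- 0 = open, 1 = shipped, 2 = cancelled
    TotalAmount DECIMAL(12,2) NOT NULL
);

-- The test query sketched during modelling:
--   SELECT OrderID, OrderDate, TotalAmount
--   FROM dbo.CustomerOrder
--   WHERE CustomerID = @CustomerID AND Status = 0
--   ORDER BY OrderDate DESC;

-- ...which dictates the minimal supporting index.
CREATE INDEX IX_CustomerOrder_CustomerID_Status_OrderDate
    ON dbo.CustomerOrder (CustomerID, Status, OrderDate DESC)
    INCLUDE (TotalAmount);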