August 16, 2007 at 4:57 pm
I recently wrote an editorial on challenging yourself and on the choices we make when tuning code. It got me thinking about a poll, so when I finished that editorial I kept writing and produced this one.
A long, long time ago, in a galaxy far, far away. Actually, it wasn't far away; it was Denver, and the time was the Internet dot-com era, pre-bust 🙂
In any case, we were building a web application on a quick release cycle. We dropped new functionality every week, so most things we built were in development for one to two weeks. We refactored or changed older code as part of the weekly cycle to shore it up, but we often ran into a debate about just how much tuning work we should do.
We had some junior web developers (insert shudder here), and I found them writing queries, then looping through the results and sending off another query for each row of the first result set. One particularly brilliant fellow even added a third level: a query for each row of a result set that was itself produced by a query run once per row of the grandparent query. Needless to say, I wasn't thrilled, since some simple joins could have eliminated the looping queries. However, I had only happened on the code while doing something else, and there were no visible performance issues on the web site at the time.
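To make the pattern concrete, here's a rough sketch (table and column names invented; in our case the loop actually lived in the web code, not in T-SQL):

-- The query-per-row anti-pattern: one round trip per parent row.
DECLARE @CustomerID int;
DECLARE cust CURSOR FOR SELECT CustomerID FROM dbo.Customers;
OPEN cust;
FETCH NEXT FROM cust INTO @CustomerID;
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;
    FETCH NEXT FROM cust INTO @CustomerID;
END;
CLOSE cust;
DEALLOCATE cust;

-- The same data in a single round trip with a join:
SELECT c.CustomerID, o.OrderID, o.OrderDate
FROM dbo.Customers AS c
INNER JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID;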
As an aside, I did demonstrate during a load test that this was a problem, so it was fixed.
However, we got into an argument: he felt the bandwidth of a joined result set, the cycles spent processing it, and so on were comparable to the cost of multiple queries, and that we should let him code as quickly as he could and buy more hardware. Despite my wanting to use a dense metal object to help him acquire some knowledge, he had a point. To the surprise of my boss, I had a conversation with him and some senior developers in which we dealt with this question:
How much time should be spent on performance tuning code?
We knew what a web server cost, about $4k, and we knew what a bigger database server cost, more like $10k, but these figures paled in comparison to what a senior, performance-knowledgeable web developer cost, about $90k a year. So we sat down and realistically tried to figure out how much time it was worth spending on performance tuning code. After all, if I have a scale-out application where $5k buys another server that handles 200 more users, is it worth my web developer's time to write better-performing code? If it's an extra week for better code, that's $2k a week.
EACH TIME I NEED THIS.
Getting more functionality in that week might be worth more, since the server is a one-time cost. Even if I buy a new server every month, I'm gaining. Granted, the code has to work as expected, we weren't in a mission-critical area, and we had limits because the database can only scale so far. Still, it was something we ended up quantifying: for various modules we'd allow a day, a half-day, and so on, to tune code.
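To spell out the back-of-the-envelope math (using the figures above; the weekly rate is approximate):

  $90k a year / 52 weeks  ≈ $1,730 a week; call it $2k loaded
  One more web server     ≈ $5k, bought once
  Tuning every release    ≈ 4+ weeks x $2k ≈ $8k+ a month in people
  A new server each month =  $5k a month in hardware

On those (admittedly rough) numbers, the hardware wins.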
So I'm trying to ask a serious question here. Time is always a limited resource, and people are more expensive than hardware, up to a point. How much tuning is it worth to you?
Steve Jones
August 16, 2007 at 11:21 pm
Buying new hardware to fix a performance problem in the code is like plugging a leaky hole with your finger. As the number of transactions increases, you are still dealing with poorly written code. I've seen too many cases where poorly written code that causes performance problems had to be rewritten properly anyway. The best thing to do is to rewrite the code before it becomes too much of a disaster later on.
August 17, 2007 at 12:18 am
The answer is probably to educate the programmers in the best way to create the queries rather than allowing them to do it inefficiently. Even if you agree that hardware can compensate for query speeds (and for smaller, non-scalable applications this may be an effective approach), swapping servers isn't generally a simple cost exercise - there's validation and testing of hardware, etc., which generate ancillary time and cost.
August 17, 2007 at 1:13 am
Speaking from a maintenance programmer's point of view - it may be the quick way to get your code out the door, but you end up with something that is going to be hard to maintain and expand (and will make people like me angry).
I'm a big fan of doing it right the first time around, because, almost invariably, you wind up saving yourself time, money and stress.
August 17, 2007 at 2:42 am
Trying to cost out how much poor coding will cost you in terms of hardware is pointless because you can't possibly say what extra resources a poor piece of code will end up consuming in the future.
For example, consider this scenario: you deploy your latest release and the web server's CPU is pegged at nearly 100%. How many extra servers do you buy? Let's say you buy one and add it to the farm, and they both end up at 100%. Do you go for a third server, or bite the bullet and performance tune the code? And what if someone uses the poorly written code as the basis for a new piece of functionality (cut 'n' paste inheritance), thus propagating the stupidity further?
After more than 20 years in this business (dealing with all aspects of systems delivery, including budgets) I have heard the phrase "just buy more hardware" too many times and, in *every single case*, it was from a programmer trying to defend a moronically or ignorantly written piece of code. I understand where you were trying to go with the cost/benefit analysis, but it is ultimately flawed, because I could write a two-line program that will consume ANY amount of hardware you can throw at it. An extreme example, but I'm sure you get my general point...
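(If you doubt the two-line claim, here's a sketch of such a batch - don't run it anywhere you care about:)

WHILE 1 = 1 -- loop forever
    SELECT COUNT(*) FROM sys.objects a CROSS JOIN sys.objects b CROSS JOIN sys.objects c CROSS JOIN sys.objects d;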
The real answer is twofold:
1. Make sure at the interview stage that you do a proper technical assessment of someone's skills. If you expect them to write SQL (and nowadays almost everyone does), have a couple of questions that will demonstrate whether the candidate can think "set based" instead of "procedural", i.e., whether they understand the pros and cons, strengths and weaknesses, of each layer of the system they will be developing (a sketch of one such question follows below).
2. Train your developers regularly and invest in improving their skills.
3. Have a good QA process that includes peer code reviews, load testing, etc.
Ok, that's three...what can I say, I'm a Monty Python fan....
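As an example of the kind of interview question I mean (a sketch, with invented names): ask the candidate to give every employee in department 42 a 10% raise. A procedural thinker reaches for a cursor and one UPDATE per row; a set-based thinker writes a single statement:

UPDATE dbo.Employees
SET Salary = Salary * 1.10
WHERE DeptID = 42;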
There is no substitute for efficiency, especially with the current green concerns about data centre power consumption - it's like leaving all the lights on in the house so you don't have to bother using the time-consuming switch thing.
Hang on - I've thought of a fourth - if you've done the above three and you still get anyone who says "throw hardware at it" - use the dense metal object when no one is looking and hide the body in the closet
Malcolm
DB Ghost - Build, compare and synchronize from source control = Database Change Management for SQL Server
www.dbghost.com
August 17, 2007 at 2:45 am
"If you think education is expensive, try ignorance". Educating staff so they can write efficient applications is the most cost-effective way to go, but what about the applications they wrote while they are learning...
Like all activities, tuning has to be cost-effective. If you can get past poorly performing code by buying an extra server, then that may be the best solution. If you predict that at current performance you will need so many servers that a new server room is required in six months' time, then fixing the code probably looks like a no-brainer. But if you never bothered to educate your developers, how will they know how to improve the performance...
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
August 17, 2007 at 5:35 am
This is very much a subjective time vs. benefit question.
I have spent half a day improving the speed of a query from 10 minutes to 30 seconds, and it was worth it because that query was going to be run often in a test environment, even though in the final deployment it would be run only once.
I have also spent half a day attempting to improve a query with no improvement in the end. However, if I learned some things to do (or not do) in the future, then it may still have been worth the effort.
However, if someone came to me and told me to buy more hardware so they could continue to produce (bad) code as quickly as possible, I would show them the door as quickly as possible and use the (hardware) money to buy a better programmer instead.
August 17, 2007 at 5:55 am
It's interesting that you assume that another server is a fixed, one-time cost. This is clearly not the case. Assuming a new server is $5k (where are you getting this deal?), you must have some ongoing software licensing and support on the server. This is usually about 20% of the initial cost. So, your extra server is $5k + $1k per year for three years.
Why do I say three years? That's when you probably should replace it. So, for each new server, you have a fixed cost of $8k every three years. That is still not much, but it is not a one-time cost.
A bit less simple is the cost of upgrading the software. I assume that even though this is an internally developed application, you understand that it will have a life cycle and will eventually need an upgrade and/or replacement. A well-developed application will usually be easier to upgrade to support newer operating systems and to benefit from newly available controls. However, even a well-developed application that has been scaled out over 20 servers becomes an increasingly difficult upgrade, and the cost of a test environment alone may become very high.
August 17, 2007 at 6:28 am
I don't want to hijack Steve's question, but it's related to one I've always wondered about: if you don't have enough time to do all the analysis, design, and testing that's necessary, which should you skimp on? Will you avoid more problems by spending more time on design, or are you better off testing and fixing?
Obviously, posing this question puts me squarely in the camp that says there's not enough hardware on the planet to make up for bad programming, just as there's no good programming that can overcome bad database design.
Mattie
August 17, 2007 at 6:31 am
I think we need to differentiate between performance tuning and development architecture/good coding practices.
In the case you mentioned, as people have already pointed out, we are dealing with poor programming practice. It is not really a question of programmer efficiency: given proper architecture and knowledge, it shouldn't take the developer longer to code a decent join than to nest fetches three levels deep. If a programmer is allowed to continue poor practices, the resulting load on the server will go up dramatically each time they release new functionality.
Now, in my opinion, performance tuning is what happens after a good-faith effort has been made to follow good practice. Do some load testing and performance tune if issues appear. This may turn up poor programming practice, but that is not its intention.
As an aside, the programmer may have felt that this model was more efficient for him because he had limited access to the development SQL Server. He may not have been allowed to, say, create a stored procedure on the server because "that is the DBA's job." That kind of thinking is just as much of an obstacle to proper performance tuning as poor programming practice.
August 17, 2007 at 6:33 am
If throwing more hardware at the problem is the solution, then let's just allow all the programming to be done by an infinite number of monkeys. I'm sure that among all the great novels they'll eventually type, there will be plenty of code that will execute. They are cheap to feed, don't require much in the way of family lives, and should breed prolifically enough to keep up a steady supply of 'programmers'. Best of all, this eliminates the issue of outsourcing!
------------
Buy the ticket, take the ride. -- Hunter S. Thompson
August 17, 2007 at 6:37 am
I guess I wasn't thinking of just bad code, but of decent code you write to get a job done. We could all stand to have our code tuned to some extent, especially as workloads change. But not every application even ends up pegging a server at 100%.
I was trying to figure out whether there's some balance point for how long you should try to tune code before spending a touch more on hardware: an extra GB of RAM or another CPU.
August 17, 2007 at 6:53 am
About four years ago we ran into a problem where a major piece of functionality had been developed with only minimal DBA involvement (we had a DBA on the project, but he was a total yes-man to everything the developers asked for). So the developers, in their infinite wisdom, chose to write all the database code using UDFs. That was UDFs calling UDFs calling UDFs. It was incredibly well written, elegant even. Of course, it ran like a rhino through the La Brea Tar Pits.
HUGE arguments ensued. The developers felt they had correctly used a tool made available by SQL Server, and that if it was running slowly it was because the DBAs just hadn't put the correct index in place. Unfortunately, at the time I didn't have the knowledge of SQL Server internals to know that UDFs carry no statistics and therefore work wonderfully with one row but stink as soon as you have to retrieve more. After much pain, screaming, gnashing of teeth, etc., we completely redesigned and recoded the database.
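The shape of the problem, roughly (a contrived sketch with invented names, not our actual code):

-- A scalar UDF the optimizer can't see into: it carries no statistics
-- and runs once per row of the outer query.
CREATE FUNCTION dbo.fn_OrderTotal (@OrderID int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(Quantity * Price)
            FROM dbo.OrderLines
            WHERE OrderID = @OrderID);
END;
GO
-- Fine for one row, painful for a million:
SELECT o.OrderID, dbo.fn_OrderTotal(o.OrderID) AS Total
FROM dbo.Orders AS o;

-- The set-based rewrite the optimizer can actually cost:
SELECT o.OrderID, SUM(l.Quantity * l.Price) AS Total
FROM dbo.Orders AS o
INNER JOIN dbo.OrderLines AS l ON l.OrderID = o.OrderID
GROUP BY o.OrderID;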
The thing is, performance depends on so many factors, and how much performance is enough is a constant dance. We've had Microsoft employees through here telling us not to worry about database performance because, after all, "you're not building Amazon.com." Then they give us code that runs so slowly we can't put more than 10 users on the system at a time, and no amount of hardware is going to fix the issue. Unfortunately, Amazon.com or not, we need to support a couple of thousand simultaneous users (which translates to about 20 simultaneously active database transactions at peak times).
We finally came up with a method of doing database-only testing so that we can assure ourselves that the database performs "good enough." We don't go for sub-second response time on all queries, but we make sure that we can support the anticipated user load and scale for a year or two on the current hardware.
Also, as to throwing hardware at a problem, the other side of the cost coin frequently isn't taken into account. The developer is not the only person being paid. There are also DBAs and server guys who have to install and configure the system, migrate the database(s) to the new machine, and test from QA to verify the new system. That's frequently a large amount of time spent by people who make more than your average mid-level developer.
Short answer... it depends.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 17, 2007 at 7:03 am
As noted by others, some of your assumptions about implementation and operating costs may need another look for real-world deployments. They aren't one-time costs. You will have regular recurring costs for every server you deploy. And don't forget what it costs in manpower to monitor and maintain those servers.
You are also looking at this as if one extra server will solve your problems. If you have the kind of site/application that people will seriously use, your user base is likely to ramp up over time, which means more and more servers. If you take the time up front, while the application and the code are still fresh in everyone's minds, to performance tune your application, you'll minimize your long-term costs: better responsiveness now, so users like it better; lower complexity in your environment (fewer servers); lower long-term software/hardware costs; lower facilities costs (space, network equipment, and power); and lower maintenance costs.
August 17, 2007 at 8:24 am
I'd enjoy work more these days if what Steve suggests were not true. Especially when folks write against databases that we designed and implemented 10 and 20 years ago, so the design is good, correct, and efficient - and then the boss assigns the guy writing Web Parts and Ajax, who applies his bizarre procedural way of doing things, all the while complaining that he wants an object mapping.
One day I'll look at what they are doing and say, "you should have sent him to me; in 10 minutes we could have done this right." But I need to keep the SQL Servers running - something I didn't have to spend a lot of time on with Sybase. They don't want me spending time showing junior and intermediate programmers (or the worst - senior programmers without relational database knowledge) how to do it well.
Because it works, and that's what counts. And in keeping my firm at the top of its field of business, it actually is FAR more important that I keep those servers RUNNING, replicated, and ready for DR, etc., than that the underlying queries be done "right". It is indeed faster, easier, and cheaper to pay the server group overtime to install 64-bit boxes with iSCSI arrays.
As a business matter, I can see that it would be a bad business decision to have me spend time showing C++ programmers how it's done, or to have me do what I'd like to do - provide USPs for all access.
It's just a lot less fun.
Roger L Reid