September 20, 2014 at 1:05 pm
Comments posted to this topic are about the item Premature Optimisation
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 22, 2014 at 1:42 am
Thank you Gail for this nice piece!
It in fact highlights a constant battle: a challenging pinhole view, missing metrics, personal opinions, etc., things which far too often, but wrongly, drive optimization efforts.
😎
September 22, 2014 at 6:42 am
We can all get our dose of common sense to start the week. Great article!
September 22, 2014 at 7:46 am
Great article again Gail. You have helped me and countless others so many times over the years.
September 22, 2014 at 8:26 am
Outstanding article, Gail. I especially appreciate the section of the article on what pre-optimization is not. Very well done and thanks for taking the time to write the article.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 22, 2014 at 9:32 am
Thanks for the great article Gail.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
September 22, 2014 at 11:40 am
Thank you Gail,
Wise words; it's always worth stopping to think before starting to code. Sometimes jumping in to optimize is just a desperate measure caused by pressure, but it takes serenity to work smarter.
Best regards
September 22, 2014 at 11:40 am
Good article - writing good code with a good design is never premature, and measurement is the key.
When I do optimizations that go beyond simply writing good code, I try to record the metrics (SQL Profiler reads, writes, CPU, and duration) as comments right in the code with the query I changed, listing the performance change I recorded as well as the SQL Server version it appeared on. That way future developers have a better idea of why non-obvious choices were made, and they can undo them and test whether future versions of SQL Server still need those changes.
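A minimal sketch of what such an in-code record might look like (the table names, numbers, and version below are illustrative, not taken from the post):

```sql
-- Perf note (2014-06-12, SQL Server 2008 R2 SP2):
-- Replaced a correlated subquery with a join to dbo.OrderTotals.
-- Before: 48,210 logical reads, 1,820 ms CPU, 2,400 ms duration
-- After:   1,105 logical reads,    95 ms CPU,   120 ms duration
-- Re-test after upgrades; later versions may handle the original form fine.
SELECT o.OrderID, t.Total
FROM dbo.Orders AS o
JOIN dbo.OrderTotals AS t
    ON t.OrderID = o.OrderID;
```

The point is that the measurements and the version they were taken on travel with the query, so a future reader can re-run the comparison instead of guessing.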
I personally put a more pointed and specific emphasis on aggregate runtime cost; aggregate query cost per day is usually what I use for commonly run queries, sometimes split between peak-time and off-peak cost. Considering any performance guarantees/SLAs is also always important, particularly when someone creates queries not restricted by an SLA that put load on the system, which in turn degrades the performance of other queries that are subject to SLAs.
And thus I digress.
September 22, 2014 at 12:15 pm
Does your SQL Server suffer from Premature Optimization? There is a drug, SQLialis, that helps with this!
Joking aside...excellent article Gail. Thank you for another excellent contribution of your knowledge to the SQL knowledge base.
September 22, 2014 at 12:23 pm
A personal major hurdle for me is knowing something could be better and leaving it alone. OCD? Maybe. It keeps me up at night if I let code make it into production knowing it could be more efficient... even by 5%.
Aigle de Guerre!
September 22, 2014 at 1:36 pm
I thought it was just decreasing the WAITFOR command by 2 or 3 seconds to gain performance. 😀 Just kidding.
Thanks Gail for the informative article.
September 22, 2014 at 3:39 pm
Environment is everything. Check the big picture before checking the small picture.
Developers will tell me procedure XYZ is timing out on a customer's site and needs to be optimized. The first thing I ask for is the big-picture statistics for the host machine: disk, memory usage, and CPU when the procedure is not running. It generally turns out to be something IT has done, such as implementing a crazy rule like "do full backups of all SIMPLE-mode databases during the day". Well, I have 50 GB read-only databases in SIMPLE mode that never need to be backed up.
SQL Server is a system of many background processes, agent jobs, and users, all affecting each other. Also, most servers are virtual and fighting for minimal physical resources, such as a single RAID 5 subsystem; this prevents you from knowing the actual available resources in many cases. Often it is much cheaper and faster to add a couple of RAID 1 disk channels than to optimize every procedure for equal gains.
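As a quick sketch of the kind of big-picture check described above, a query against the standard sys.databases catalog view can surface surprises like read-only SIMPLE-mode databases being backed up daily, before anyone starts tuning individual procedures:

```sql
-- List each database's recovery model and read-only flag,
-- to spot e.g. read-only SIMPLE-mode databases in a daily full-backup job.
SELECT name,
       recovery_model_desc,
       is_read_only
FROM sys.databases
ORDER BY name;
```

Cheap environment checks like this often explain a "slow procedure" faster than query tuning does.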
September 22, 2014 at 11:09 pm
Excellent article Gail and I rated it accordingly!
I particularly like your analogy to sunk costs. It's surprising how few people understand that basic economic principle.
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
September 23, 2014 at 1:53 am
Great article.
Speaking of the scientific method, the most frustrating part about profiling and measurement on databases is how inconsistent they are, because there are so many variables you can't control for. Reproducibility and reliability of experiments are often completely absent.
Testing on a DEV/TEST machine never matches PROD due to load, hardware differences, database settings, data volume differences, other queries running, you name it.
You can't just take three STATISTICS TIME/IO measurements, average them, and say that a faster query is necessarily better than a slower one. The slower one might, for example, use fewer threads and fewer resources overall in a production situation.
It's just so hard to reduce it to a simple set of variables. I'm not talking about the low-hanging-fruit like fixing non-SARGable queries. Sometimes it just seems like a dark art.
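For what it's worth, the low-hanging fruit mentioned above usually looks something like this (table and column names are made up for illustration):

```sql
-- Non-SARGable: the function wrapped around the column
-- prevents an index seek on OrderDate.
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2014;

-- SARGable rewrite: a range predicate on the bare column
-- lets the optimizer seek an index on OrderDate.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20140101'
  AND OrderDate <  '20150101';
```

Fixes like this are mechanical; it's everything beyond them that starts to feel like a dark art.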
September 23, 2014 at 6:33 am
Thanks everyone. I was expecting massive disagreements, lots of shouting and insults. Pleasant surprise.
Davos, I would argue that you can approach performance tuning scientifically. No, it doesn't boil down to a single variable and no, production is never going to be the same as dev. You can still, in your testing environment, fix as many of the variables as possible and run the tests, make the comparisons and draw conclusions.
It's pretty unusual for a slower query to use fewer resources. Unless it spent that extra time waiting for something, the extra time was time spent running, using resources. Plus, a slower query will hold locks longer, possibly interfering with other queries for longer, etc.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
Viewing 15 posts - 1 through 15 (of 20 total)
You must be logged in to reply to this topic. Login to reply