January 7, 2016 at 7:29 am
Hi everyone! I'd like to get some ideas from anyone who has experience creating performance baselines for their databases. Are there any standard methods out there, or standard metrics I should focus on?
I imagine there are third-party tools that do this, but I won't have access to anything like that and would need to do it with code or from within SQL Server.
Thanks in advance!
January 7, 2016 at 8:12 am
You can absolutely do it on your own. Performance Monitor (perfmon.exe) is the first thing to start with; you can capture both system and SQL Server specific metrics from it. Next, you'll want to capture wait statistics using the dynamic management view sys.dm_os_wait_stats. After that, you can capture query metrics. There are dynamic management views that can help, and I strongly support using extended events for detailed information capture. All of these are native tools built right into the product. They'll do what you need.
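For a minimal sketch of the wait statistics piece (the table name dbo.WaitStatsBaseline and the short filter list of benign wait types are just placeholders, not a complete list), something like this, run on a schedule through SQL Agent, builds a baseline over time:

IF OBJECT_ID(N'dbo.WaitStatsBaseline', N'U') IS NULL
    CREATE TABLE dbo.WaitStatsBaseline
    (
        capture_time        DATETIME2(0) NOT NULL,
        wait_type           NVARCHAR(60) NOT NULL,
        waiting_tasks_count BIGINT       NOT NULL,
        wait_time_ms        BIGINT       NOT NULL,
        signal_wait_time_ms BIGINT       NOT NULL
    );

-- Snapshot the cumulative wait stats; the DMV values reset when the instance restarts.
INSERT INTO dbo.WaitStatsBaseline
        (capture_time, wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT  SYSDATETIME(),
        wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_time_ms > 0
  AND   wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',            -- deliberately short,
                          N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP'); -- incomplete filter

Because the DMV is cumulative since the last restart, it's the delta between two snapshots (latest minus previous) that you actually trend.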
As to a list of counters, etc., that's why I wrote the book in my signature. I've got chapters outlining how to capture baselines and which metrics matter.
However, a free solution would be to get a copy of the book "Accidental DBAs Guide to Troubleshooting." It's free to download and it also has a good couple of chapters on capturing performance metrics. Another book that's free to download that will be helpful is "Performance Tuning with SQL Server Dynamic Management Views." It's a little aged right now, but still very useful, especially if you're working primarily in SQL Server 2008.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 7, 2016 at 8:47 am
Awesome answer, thank you very much! It's great to me that even after working with SQL Server for so many years, there's always something new for me to learn. 🙂
January 7, 2016 at 4:16 pm
Heh... so far and without exception, I've found that the hardware is just fine. I've seen only two cases where something actually went wrong (one was a router changed from a fixed speed to auto-negotiate, and the other was a new cable that was both too long and had too tight a bend radius), and I was thankful for the baseline then. Except for things like that, though, people look at the baseline, think everything is fine, and don't really do anything about performance because nothing looks broken. And, so far and without exception, nothing could be further from the truth.
Data grows and grows and then, one day, it happens even after monitoring baselines forever. The system goes nuts and the "floor" drops out. It's not because of anything you've done; it's because of what you haven't done. You accepted the "baseline" and did nothing for performance, and true performance lives in the code. When the "tipping point" is reached, it's nearly impossible to save it without doing something drastic. I'll also tell you that, with the occasional exception of adding more memory to the system, throwing even sophisticated hardware at the problem is, at best, a temporary fix, if it provides any relief at all.
Now, I agree that you DO need a way to measure the baseline, even on a daily basis, but not so much to see what's slowly degrading. Rather, it should be used as a way to verify that your performance improvements in code are actually working over time.
My point is that baselines are where many people stop. Don't stop there. There are many good scripts to find the procs, scripts, ad hoc queries, and managed code that take the most CPU, have the most reads, take the longest to run, etc., etc., etc. If you don't have a "Continuous Improvement Program" chartered and active, then the baselines are going to be just pretty squiggly lines that document your failure for you when things go bad.
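Here's a minimal sketch of the kind of script I mean (the TOP (20) and the sort column are just examples; swap in reads or duration as needed):

-- Top cached statements by total CPU; this only covers what's still in the plan cache.
SELECT TOP (20)
        qs.execution_count,
        qs.total_worker_time  / 1000 AS total_cpu_ms,
        qs.total_logical_reads,
        qs.total_elapsed_time / 1000 AS total_duration_ms,
        SUBSTRING(st.text,
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

Capture the output of something like that on a schedule, too, and the deltas will show whether your code changes are actually paying off.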
--Jeff Moden
Change is inevitable... Change for the better is not.