October 21, 2014 at 1:48 pm
Hello all,
I am looking for alternatives to packaged software to get a quick overview of a given SQL environment. Historically I've relied on 3rd-party software like Spotlight/Performance Analysis/SQL Monitor/etc. to do most of the easy initial overview work. While I'm familiar and comfortable with all elements of WMI/T-SQL, those traditionally take a bit more time to properly configure and retrieve results from.
My goal is to discover a new method, free software, or pre-written PowerShell scripts that give a DBA a very quick initial analysis of the health and configuration of a given SQL environment without a lot of TLC.
Thanks!
October 21, 2014 at 3:16 pm
Thanks for the response! Yeah I've used some of their material before, quite nice!
Ideally, though, I'm trying to also include the O/S. In a way, I'm kind of recreating what a product like Spotlight does: a very quick, easy-to-digest snapshot/dashboard of an environment's health and status. I know there's a multitude of different ways to skin this cat individually (from the O/S to SQL); however, I was curious whether anything out there existed as a unified solution to provide a point-in-time "here's everything you ever needed to know about the current health of the box".
Thanks!
October 22, 2014 at 1:11 am
Take a look at what Kendal Van Dyke has been working on; it might help.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 4, 2014 at 2:40 pm
Thank you, Grant. This is definitely good from a discovery & inventory perspective; however, my current goal is more to target performance.
I just dusted off the old SQL Nexus tool that I haven't touched in ages. It's not bad, but I would have expected it to evolve much more by this point; however, that's the type of thing I'm looking for: a quick way to look at a given instance/server's performance without spending too much time configuring, setting up, and generating reports.
November 5, 2014 at 8:09 am
Adam Bean (11/4/2014)
...however my current goal is more so to target performance.
Hi Adam,
Assuming that you're talking about the performance of SQL Server, I've found that most people look in the wrong spots. Yes, you can have a SAN or other hard disk setup that has a problem, improper physical connections to the box, a memory-starved server, etc., but that's not normally where the performance is found. You can even have some pretty terrible database designs and some horrible default TempDB settings.
All of that is certainly important but, when you jump onto a new server, I've found that you can spend a whole lot of time on those things only to find that, after weeks and sometimes months of work (a lot of it spent on convincing people of the problem and maybe justifying additional hardware expense), you might have made only a 2 or 3 times performance improvement (which is possibly still grossly unsatisfactory), or maybe no improvement at all. Or, like some of my clients, the machine is provided on some cloud somewhere (not Azure, but full SQL Server instances) and there's not much you can do about things like hard disk storage, and simple tweaks like those made to TempDB or some of the various Trace Flag settings really don't provide much improvement at all.
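(As a side note, when I say "simple tweaks" to TempDB, I mean things like the following sketch. The file names, sizes, and path are just placeholders, not a recommendation; you'd size them to your own system.)

-- Spread TempDB allocation contention across more than one data file.
-- NOTE: names, sizes, and the path below are placeholders only.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);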
So where's the place where the greatest improvements can be made?
It's as I've always said... "Performance is in the code". At a recent job, I was able to find the code that was the biggest problem and fix it, then find the next piece and fix it. Wash, rinse, repeat. Sometimes the code was ad hoc or prepared code and needed folks to change the managed code, but it got fixed.
Much of that code required only simple fixes, like making criteria sargable or simply adding an index. Sometimes the code needed a partial rewrite to take advantage of "Divide'n'Conquer" methods. A lot of the code didn't look like a problem to most (for example, it runs in less than 900ms, "only" uses 800ms of CPU time, and comes back very quickly) but turned out to be among the top problems because it was being executed 30,000 to 120,000 times in a normal 8-hour day. Multiply the "small" amount of CPU and duration by the number of times they ran and you can see why the very latest, but only 4-processor, box was having problems (one of the queries was consuming 6 hours of CPU time in just 8 hours; that's more than 1/8th of the total CPU available on a 4-processor box). Add to that that many folks don't even look at logical reads (memory is fast, right?) and didn't catch that each of those queries was doing an insane 140,000 reads per run. 140,000 reads * 30,000 executions * 8,192 bytes per page turns out to be a little over 34 TRILLION bytes of I/O that just didn't need to happen (and that was just ONE of the queries). It's no wonder that machine was slow.
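Just to make "sargable" concrete, here's the shape of the simplest of those fixes. The table and column names are made up for the example; they're not from the actual system.

-- Non-sargable: wrapping the column in a function hides it from any index,
-- so the optimizer scans the whole table (hypothetical table/columns).
SELECT OrderID, OrderTotal
FROM dbo.Orders
WHERE CONVERT(date, OrderDate) = '20141105';

-- Sargable rewrite: a half-open date range lets an index on OrderDate seek.
SELECT OrderID, OrderTotal
FROM dbo.Orders
WHERE OrderDate >= '20141105'
  AND OrderDate <  '20141106';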
To make a longer story shorter, we found 20 such queries, some of which mutually benefited from the addition of a single index (the largest such group was 8 queries, which greatly reduced the rework required) and some of which needed to be changed, but the performance and resource usage changes were remarkable. CPU usage decreased by 2,000X, logical reads decreased by 2,777X and, of course, duration dropped by similar amounts, and suddenly the system was working a whole lot better.
Like I said, "Performance is in the code".
How can you find such queries? I wrote my own ditty to find such things (I call it sp_ShowWorst) but you don't need my stuff. Adam Machanic wrote one (I can't remember what it's called, but someone on this thread will know) that does even more than mine, and I highly recommend folks use that. If you want something immediately, you can use some of the built-in standard reports ("Performance - Top Queries by Total IO" and "Performance - Top Queries by Total CPU") that are available by right-clicking on the instance in SSMS and then clicking on "Reports, Standard Reports". Those will show you the top 10 worst in each category AND will also show you the code for what SQL Server thinks is the worst part of multi-statement things like stored procedures.
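If you want to roll something yourself in the meantime, here's a minimal sketch of the same general idea (this is NOT sp_ShowWorst or Adam's tool, just a starting point against the plan cache DMVs):

-- Top 10 cached statements by logical reads; swap the ORDER BY to
-- qs.total_worker_time to rank by CPU instead. Times are in microseconds.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time  AS total_cpu_us,
       qs.total_logical_reads,
       qs.total_elapsed_time AS total_duration_us,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;

Keep in mind those numbers only cover what's still in the plan cache, so they reset whenever a plan is evicted or the instance restarts.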
Yes... it's important that hardware and SQL Server be set up correctly, but you're not going to get huge performance increases from all of that. Go for the jugular on performance problems: identify the problem code, bite the bullet, and fix it. Three orders of magnitude in performance gains and reduction in resource usage can only be accomplished by fixing the code. If you run into a hardware problem while doing that, then consider upgrading or fixing the hardware, but I wouldn't do that as a first effort.
"Performance is in the code".
--Jeff Moden
Change is inevitable... Change for the better is not.
November 5, 2014 at 10:01 am
Hello Jeff,
Thanks for the reply! We've chatted in the past 🙂
I couldn't agree more with your statements; however, I'm really looking for something relatively simple that doesn't require reinventing the wheel and/or spending a lot of time on setup and analysis. I've become a bit reliant on 3rd-party software to assist with reactive monitoring. While I have a fairly extensive suite of custom-built procs/views to help with on-demand troubleshooting and data gathering, I lack the "full picture" quick synopsis that a tool such as Spotlight provides.
We all know 9 times outta 10 a deep dive is inevitable no matter what. Yet my current goal is to be able to quickly and easily grab a synopsis of a given server + instance: all the core Windows and SQL stats & configurations, without having to spend a lot of time on setup (building traces, running X procs/views, exporting, building graphs and detailed reporting, etc.). Really just a quick, easy, one-push, pre-built, configurable tool such as PSSDiag in conjunction with SQL Nexus.
Such a tool may not exist at all, and I may have to continue to resort to the tried-and-true methods of Profiler & DMVs; however, I figured I'd ask as I haven't traversed this road in quite some time.
Thanks