January 3, 2014 at 10:43 am
QUESTION about SET STATISTICS IO ON:
1) Is the info provided when a SQL statement is run after SET STATISTICS IO ON the same as the data collected by a Profiler trace? (Reads, for example -- is it the same as the Logical Reads shown by STATISTICS IO?)
In other words, I am trying to figure out which metrics to show, after the optimization is applied, to demonstrate the improved performance.
For example, if I run a Profiler trace, should I just point out that Avg(Duration) and/or Avg(Reads) for the same SP is lower compared to the previous Profiler trace?
(See, this is a peculiar situation: I was given the general task of improving SP performance on the server, but was not told which user processes initiate them or which area of the application(s) is slow or under-performing in any way.)
So what I did was run a trace and simply identify the top 30 most frequently executed SPs, sorted by Count*Avg(Duration) DESC, to see which ones take the most time on the server. My improvements were targeted at those SPs and the tables/indexes that the SQL in those SPs accesses. I have all that trace data and analysis saved.
Now, upon applying the suggested changes in production, I want to show a definite improvement. I will then use the same metrics from a new Profiler trace.
2) Or should I still test-run each individual SP BEFORE and AFTER the modifications with SET STATISTICS IO ON?
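For reference, what I mean is something like this minimal sketch (dbo.usp_GetOrders and @CustomerId are made-up names):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.usp_GetOrders @CustomerId = 42;  -- hypothetical SP and parameter

-- The Messages tab then shows per-table output such as:
-- Table 'Orders'. Scan count 1, logical reads 118, physical reads 0, ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;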
Likes to play Chess
January 18, 2014 at 5:55 am
Hello Voldemar,
As far as I know, the data from Profiler and the STATISTICS IO output are the same -- Profiler's Reads column counts logical reads.
When tuning I sometimes use Profiler, but just for a single procedure. I then like to see each individual statement of the procedure and look at its reads and duration there. You can get the execution plan for each individual statement too.
But for general analysis, as well as for tuning, I prefer the dynamic management views.
1) sys.dm_exec_procedure_stats
Here you find various metrics for the procedures that are currently in the plan cache. Have a look at Books Online and check out all the columns.
Minimum, maximum and average reads, elapsed time, processor time... Those are quite good values to monitor, to order by, and to compare after having optimized procedures.
With a CROSS APPLY on sys.dm_exec_query_plan (passing in the plan_handle) you can even get the execution plan in the result set.
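A sketch of such a query -- the column selection and ordering are just one reasonable choice, roughly matching the Count*Avg(Duration) metric mentioned above:

-- Top procedures by total time spent on the server, with their cached plans.
SELECT TOP (30)
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    ps.total_logical_reads / ps.execution_count AS avg_logical_reads,
    ps.total_worker_time  / ps.execution_count AS avg_cpu_time_us,
    ps.total_elapsed_time / ps.execution_count AS avg_elapsed_time_us,
    qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
-- total_elapsed_time = execution_count * average duration
ORDER BY ps.total_elapsed_time DESC;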
2) sys.dm_exec_query_stats
Much the same values as mentioned above, but for each individual query in the SQL Server plan cache. By filtering this view you can get the individual statements of a stored procedure.
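A sketch of how to break one procedure down into its statements this way; dbo.usp_GetOrders is a made-up example name, and the offset arithmetic extracts each statement's text from the procedure body:

SELECT
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time_us
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid = OBJECT_ID('dbo.usp_GetOrders')  -- hypothetical SP
ORDER BY qs.total_logical_reads DESC;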
You should prepare a query over these views that outputs all the values you are interested in.
I personally have a look at these statistics, write them down, do my optimization, and compare the new values to the old ones.
January 18, 2014 at 2:21 pm
Thanks.
Regarding what you said about execution plans ("You can get the execution plan for each single statement too") -- won't this exec plan for each single statement in a stored proc be the same as what you would see when you run the entire SP with test parameters, etc.?
I mean, why would you want to get the plan for each individual statement in an SP?
Likes to play Chess
January 18, 2014 at 2:35 pm
In general, looking at the execution plan of the whole procedure is much the same as looking at the single statements. Sometimes I prefer the single statements because they are easier to read when the procedure is quite large.
As for the test parameters: it depends.
As you might know, the execution plan for the same query can change depending on the values you pass in for the parameters (this is often called parameter sniffing).
Many times you don't know exactly which values an application will use when calling procedures.
When you test a procedure by calling it in SSMS yourself, you might get the same execution plan as when it is called by an application, but that's not guaranteed (different parameter values or session SET options can lead to a different cached plan).
For this I like to watch the dynamic management views. There you can see which execution plan is actually being used.
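A sketch of that lookup; dbo.usp_GetOrders is again a made-up name:

SELECT cp.usecounts, cp.objtype, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.objectid = OBJECT_ID('dbo.usp_GetOrders');  -- hypothetical SP

The query_plan column shows the plan that was actually compiled and cached, no matter whether SSMS or the application triggered the compilation.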
January 18, 2014 at 2:59 pm
Would you say DMV data and exec plans would be enough to show to management
AFTER I apply the optimizations to a dozen stored procedures on the production server?
I mean to show as 'proof' that the tuning succeeded, as a comparison between 'before' and 'after' values demonstrating that the performance improvement has actually taken place.
Or would DMV and exec plan info not be sufficient to rely upon for showing definite positive results of the applied optimization?
Likes to play Chess
January 19, 2014 at 3:24 am
Execution plans and the key numbers from the management views are a very good and not too difficult start. Of course, things like RAM or CPU are other aspects that can influence the performance of your procedures.
But for the beginning, Profiler and the DMVs are a good way to go, as the key values are quite easy to read and compare:
A low number of reads afterwards is better than a high number of reads before.
Low CPU time afterwards is better than high CPU time before.
And yes, it's quite easy to log the values and build a kind of history.
So you can say: yes, I optimized the procedures.
Of course you need some time to understand and interpret all the values; just try and learn by doing 🙂
By the way: the mentioned management views only contain the procedures and statements currently in the plan cache. As you might know, the procedures will not stay there forever (cache entries are evicted under memory pressure and cleared on restart).
So you should log the values regularly.
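One simple way to build that history -- the table name and column choice here are just an example:

-- History table for regular snapshots of the procedure stats.
CREATE TABLE dbo.ProcStatsHistory (
    capture_time        datetime2     NOT NULL DEFAULT (SYSDATETIME()),
    procedure_name      nvarchar(256) NULL,
    execution_count     bigint        NOT NULL,
    total_logical_reads bigint        NOT NULL,
    total_worker_time   bigint        NOT NULL,
    total_elapsed_time  bigint        NOT NULL
);

-- Run this on a schedule (e.g. an Agent job) to snapshot the cache:
INSERT INTO dbo.ProcStatsHistory
    (procedure_name, execution_count, total_logical_reads,
     total_worker_time, total_elapsed_time)
SELECT OBJECT_NAME(object_id, database_id),
       execution_count, total_logical_reads,
       total_worker_time, total_elapsed_time
FROM sys.dm_exec_procedure_stats;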
Another good option - in addition to the Profiler GUI - is to script the trace definition and create a server-side trace that writes to a file. This way you won't miss any procedure calls and can analyze the values later.
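A sketch of such a server-side trace for RPC:Completed (event 10); the file path and maximum file size are assumptions you would adjust for your server:

DECLARE @traceid int, @maxsize bigint = 512;  -- max trace file size in MB
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\proc_perf', @maxsize;

DECLARE @on bit = 1;
EXEC sp_trace_setevent @traceid, 10, 1,  @on;  -- TextData
EXEC sp_trace_setevent @traceid, 10, 12, @on;  -- SPID
EXEC sp_trace_setevent @traceid, 10, 13, @on;  -- Duration
EXEC sp_trace_setevent @traceid, 10, 16, @on;  -- Reads
EXEC sp_trace_setevent @traceid, 10, 17, @on;  -- Writes
EXEC sp_trace_setevent @traceid, 10, 18, @on;  -- CPU

EXEC sp_trace_setstatus @traceid, 1;  -- status 1 = start the trace

-- Later, load the file into a table for analysis:
-- SELECT * FROM fn_trace_gettable(N'C:\Traces\proc_perf.trc', DEFAULT);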