February 3, 2009 at 12:55 pm
:w00t: I have a 3rd-party app/db running on a client's db server. The DB design is from hell (and can't be changed). The server is getting buried: PerfMon shows 40-80%, worse today than ever before. Running Profiler filtered on duration, I can return hundreds of rows in a minute, each taking more than 2,000 ms. Blocking is intermittent. Is there any fast way to make sure it is not just one statement/connection that is part of this issue?
February 5, 2009 at 2:08 pm
If you have the profiler data you certainly have an excellent start.
It is not really just a matter of finding the queries that run the longest (though that is a good starting point); you should look at the data as a whole and aggregate the trace, grouping by specific statements or procedures.
It is possible you will find something that ran 1,000,000 times in that time frame and took only 1 second per execution, but because of the massive number of executions it is the primary culprit. At the same time, you might have one item that runs for 50 seconds but touches every table in the system for that period and therefore generates the most IO, causes the most blocking, locks the most rows, etc.
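To aggregate the trace like that, you first need it in a queryable form. A minimal sketch, assuming the trace was saved to a .trc file (the file path and destination table name here are placeholders, not from the original post):

```sql
-- Load a saved Profiler/server-side trace file into a table for analysis.
-- sys.fn_trace_gettable reads .trc files; DEFAULT reads all rollover files.
-- Replace the path with the location of your own trace file.
SELECT *
INTO TraceData
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);
```

Once the rows are in a table, the groupings below become ordinary GROUP BY queries.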
Anything is possible, but if you have the data you should be able to determine:
Slowest query by total duration
Slowest query by average duration
Query with the most reads in a single execution
Query with the largest average reads per execution
Query executed the most times
Query with the most total writes
Query with the most average writes per execution
It is also quite possible that there is no "silver bullet" query and that you will have to tune a number of queries to get things running the way you think they should.
Any of the above metrics should point you in the right direction, but you may have to work through several of them to get the system back to where it needs to be.
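All of those metrics can come from one aggregation over the trace. A sketch, assuming the trace rows were loaded into a table named TraceData with the standard Duration/Reads/Writes/TextData columns (the table name is an assumption, not from the original post):

```sql
-- Aggregate the trace by statement text to surface the metrics above.
-- TextData is ntext in a trace table, so it must be cast before GROUP BY.
SELECT
    CAST(TextData AS NVARCHAR(4000)) AS QueryText,
    COUNT(*)                         AS Executions,
    MAX(Duration)                    AS MaxDuration,
    AVG(Duration)                    AS AvgDuration,
    MAX(Reads)                       AS MaxReads,
    AVG(Reads)                       AS AvgReads,
    SUM(Writes)                      AS TotalWrites,
    AVG(Writes)                      AS AvgWrites
FROM TraceData
GROUP BY CAST(TextData AS NVARCHAR(4000))
ORDER BY AvgDuration DESC;
```

Re-sort the same result by Executions, MaxReads, or TotalWrites to check each of the angles listed above. Note that casting to NVARCHAR(4000) groups only on the first 4,000 characters, and literal parameter values in ad hoc SQL will keep otherwise-identical statements in separate groups.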
Those are pretty easy to pull from the trace data you already have.