Identifying long-running queries without using a trace

  • I work in an Admin/Support team for internal apps at a mid-sized multinational.

    I want to start optimizing a number of systems, and one step is identifying long-running sprocs/queries.

    The problem is that management think all traces kill server performance and should not be run in production. Unfortunately, it's impossible to replicate the production load in a test environment.

    My two questions are:

    Is there a way to do this without using server-side traces/Profiler?

    My plan is to ask again to run a server-side trace, just for completed RPCs and statements that run longer than 7 seconds, for 1 hour, 4 times a day, moving the trace files to a different computer after each run so that space is not taken up in production (a sketch of such a trace is below). Is this worth a try? Is 4 hours per day enough, when spread out at even intervals?
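
    For reference, that kind of duration-filtered server-side trace can be defined directly in T-SQL rather than through the Profiler GUI. This is only a sketch, assuming SQL Server 2005/2008; the output path, rollover size, and the 7-second filter (expressed in microseconds) are placeholders to adjust for your environment:

        DECLARE @traceid INT, @maxfilesize BIGINT, @duration BIGINT, @on BIT;
        SET @maxfilesize = 512;        -- MB per rollover file
        SET @duration    = 7000000;    -- 7 seconds, in microseconds
        SET @on          = 1;

        -- Create the trace, writing to a fast local drive with file rollover (option 2)
        EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\LongQueries', @maxfilesize;

        -- Capture TextData (1), SPID (12), Duration (13), StartTime (14), Reads (16), CPU (18)
        -- for RPC:Completed (event 10) and SQL:BatchCompleted (event 12)
        EXEC sp_trace_setevent @traceid, 10, 1,  @on;
        EXEC sp_trace_setevent @traceid, 10, 12, @on;
        EXEC sp_trace_setevent @traceid, 10, 13, @on;
        EXEC sp_trace_setevent @traceid, 10, 14, @on;
        EXEC sp_trace_setevent @traceid, 10, 16, @on;
        EXEC sp_trace_setevent @traceid, 10, 18, @on;
        EXEC sp_trace_setevent @traceid, 12, 1,  @on;
        EXEC sp_trace_setevent @traceid, 12, 12, @on;
        EXEC sp_trace_setevent @traceid, 12, 13, @on;
        EXEC sp_trace_setevent @traceid, 12, 14, @on;
        EXEC sp_trace_setevent @traceid, 12, 16, @on;
        EXEC sp_trace_setevent @traceid, 12, 18, @on;

        -- Keep only events whose Duration (column 13) is >= 7 seconds
        EXEC sp_trace_setfilter @traceid, 13, 0, 4, @duration;

        -- Start the trace; stop it later with sp_trace_setstatus @traceid, 0
        -- and close/delete the definition with sp_trace_setstatus @traceid, 2
        EXEC sp_trace_setstatus @traceid, 1;
        SELECT @traceid AS traceid;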

  • I would run the Batch Execution Statistics, Object Execution Statistics, and Top Queries by Average IO/CPU server reports. These reports will not give you the duration, but they will at least give you a list of the procedures or queries that do a lot of IO. Normally, if a query takes a long time to run, it will have a lot of IO. Once you have identified the likely candidates, you can run them manually to see how they behave; a DMV sketch along the same lines is below.
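
    Much of what those reports show can also be pulled straight from the DMVs. This is only a sketch, assuming SQL Server 2008 or later (sys.dm_exec_procedure_stats is not available in 2005), with an arbitrary TOP (20) cut-off:

        -- Procedures ranked by total logical reads since their plans were cached
        SELECT TOP (20)
               DB_NAME(ps.database_id)                   AS database_name,
               OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
               ps.execution_count,
               ps.total_logical_reads,
               ps.total_worker_time / 1000               AS total_cpu_ms,
               ps.total_elapsed_time / ps.execution_count / 1000 AS avg_duration_ms
        FROM sys.dm_exec_procedure_stats AS ps
        ORDER BY ps.total_logical_reads DESC;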

  • Not with 100% accuracy. You can get close with the DMVs, but there will always be stuff that you miss. sys.dm_exec_query_stats is a reasonable place to start.
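
    As an illustration of that starting point, here is a sketch of a sys.dm_exec_query_stats query ordered by average elapsed time. The times are reported in microseconds, and only statements whose plans are still in cache show up, which is where the "stuff that you miss" comes from:

        -- Cached statements ranked by average elapsed time
        SELECT TOP (20)
               qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_duration_ms,
               qs.execution_count,
               qs.total_logical_reads,
               SUBSTRING(st.text,
                         (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                                WHEN -1 THEN DATALENGTH(st.text)
                                ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_elapsed_time / qs.execution_count DESC;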

    Maybe worth pointing out some articles on server-side traces to the people who are worrying? And noting that there's a trace permanently running on all SQL 2005/2008 instances by default? Profiler (the GUI) can get very nasty on busier systems (I've taken down a server with it. Twice. In one day.), but server-side traces are extremely lightweight, provided the events are limited and the destination is a fast local drive.
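
    If it helps that conversation, the always-on default trace is easy to show with a quick query (it does not capture query durations, but it does demonstrate that a lightweight trace has been running on the instance all along):

        -- The default trace that ships enabled on SQL 2005/2008
        SELECT id, path, max_size, is_rollover
        FROM sys.traces
        WHERE is_default = 1;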

    4 hours a day is more than you need. I do a lot of performance tuning, and I usually take 2-3 half-hour traces at different times of day across 2 or 3 days (corresponding to high usage) and work off those. 4 hours on a busy server is going to give you a LOT of data to analyse. Keep it small and keep the durations short; 30 minutes to 1 hour should be good to start with.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
