Server is sluggish

  • We have a server that slows down every once in a while, unexpectedly. I would like to set up Profiler in the background, but I do not want to add overhead to the server because it is already showing performance issues. Can someone help me set up Profiler to capture a steady trace without adding to the performance problems? Maybe the trace can show me what is happening when the server is sluggish for the users.

  • Explore "server side tracing".

  • You can generate a trace script from Profiler. All you'll need to do with the script is set an end time and a file name. Script to a fast drive, not one where the DB's data or log files are. You can then run that script from Query Analyser as a one-off, or create a job to run it. (A sketch of such a script follows this post.)

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
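
    A minimal sketch of the kind of script Profiler generates, assuming a hypothetical file path, stop time, and a single event; a real generated script lists every event and column chosen in the GUI:

    -- Hypothetical server-side trace script; the file path, stop time and size are assumptions
    DECLARE @TraceID int, @maxfilesize bigint, @stoptime datetime, @on bit
    SET @maxfilesize = 50                 -- MB
    SET @stoptime = '20081201 17:00'      -- the trace stops itself at this time
    SET @on = 1

    -- SQL Server appends .trc to the file name; point this at a fast local drive
    EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\SluggishServer', @maxfilesize, @stoptime

    -- One call per event/column pair: 12 = SQL:BatchCompleted;
    -- columns 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on
    EXEC sp_trace_setevent @TraceID, 12, 13, @on
    EXEC sp_trace_setevent @TraceID, 12, 16, @on
    EXEC sp_trace_setevent @TraceID, 12, 17, @on
    EXEC sp_trace_setevent @TraceID, 12, 18, @on

    EXEC sp_trace_setstatus @TraceID, 1   -- start the trace
    SELECT @TraceID AS TraceID            -- keep this ID
    -- To stop early: EXEC sp_trace_setstatus @TraceID, 0
    -- To close:      EXEC sp_trace_setstatus @TraceID, 2
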
  • Can you suggest what event selection items to use? Or which template? I have the trace file going to a mapped drive - not to the server.

  • barb.wendling (11/28/2008)


    Can you suggest what event selection items to use? Or which template? I have the trace file going to a mapped drive - not to the server.

    I would recommend against using a mapped drive because the server-side trace will be running under the SQL Server service account and it is unlikely the mapped drive will be available. You should use a UNC path or local drive for the trace file.

    As for events, you will want SP:Starting, SP:Completed, SQL:BatchStarting, SQL:BatchCompleted, the Lock events, and the Deadlock events.
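
    To wire those events into a server-side trace, sp_trace_setevent is called once per event/column pair. A hedged sketch, assuming @TraceID is the ID returned by sp_trace_create and that the event IDs (42 = SP:Starting, 43 = SP:Completed, 13 = SQL:BatchStarting, 12 = SQL:BatchCompleted, 25 = Lock:Deadlock, 59 = Lock:Deadlock Chain, 148 = Deadlock graph, the latter on SQL Server 2005 and later) are confirmed on your server:

    -- Sketch: add the suggested events to an existing trace
    -- Column IDs: 1 = TextData, 3 = DatabaseID, 8 = HostName, 11 = LoginName, 12 = SPID, 13 = Duration
    DECLARE @on bit, @TraceID int
    SET @on = 1
    SET @TraceID = 2    -- assumption: the ID returned by sp_trace_create

    -- SQL:BatchCompleted (12) with database, host and login columns
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on
    EXEC sp_trace_setevent @TraceID, 12, 3,  @on
    EXEC sp_trace_setevent @TraceID, 12, 8,  @on
    EXEC sp_trace_setevent @TraceID, 12, 11, @on
    EXEC sp_trace_setevent @TraceID, 12, 13, @on

    -- SP:Completed (43) with text and duration
    EXEC sp_trace_setevent @TraceID, 43, 1,  @on
    EXEC sp_trace_setevent @TraceID, 43, 13, @on

    -- Lock:Deadlock (25) and Deadlock graph (148)
    EXEC sp_trace_setevent @TraceID, 25, 12, @on    -- SPID
    EXEC sp_trace_setevent @TraceID, 148, 1, @on    -- TextData holds the deadlock XML
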

  • The SQL Service is running under a domain account and the mapped drive has been modified to allow the domain account permission to write the file there. This has been successfully completed.

    As to the actual events, don't I need to know which database, user, etc.? Or will just the event items you specified tell me what is slowing down performance? Can some calculations be evaluated if I save this to a table vs. a file? Thoughts?

    Today is a SLOW day, so I would like to get it set up and tested by Monday, which I expect to be a heavy-usage day - the 1st of the month and a Monday. Thanks for your response...

  • Even if the SQL Server service account has access to the mapped drive, if you are accessing it via drive:\folder you are not guaranteed that it will work. I believe drives are only mapped when the user is logged into the machine, while a UNC path is always available whether or not you are logged in.

    As to the actual events, don't I need to know which database, user, etc.? Or will just the event items you specified tell me what is slowing down performance? Can some calculations be evaluated if I save this to a table vs. a file? Thoughts?

    Database and user are columns that are included as part of the event, though not all events include all columns. The events I specified will show you what is happening within SQL Server. The :Completed events show the duration and resources used by each event, so you can see which events take the most time and/or resources.

    For server-side traces you need to save to a file or files. You can then use fn_trace_gettable() to import the files into a table and do aggregation, etc., with T-SQL.
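
    A hedged sketch of that import, assuming a hypothetical file path and target table (on SQL Server 2000 the function is invoked as ::fn_trace_gettable):

    -- Load the trace file(s) into a table; DEFAULT reads all rollover files in sequence
    SELECT *
    INTO DBARepository.dbo.trace_import          -- assumed target table
    FROM fn_trace_gettable(N'D:\Traces\SluggishServer.trc', DEFAULT)

    -- Then aggregate with ordinary T-SQL, e.g. total duration per statement text
    SELECT CONVERT(nvarchar(200), TextData) AS QueryText,
           COUNT(*)       AS Executions,
           SUM(Duration)  AS TotalDuration,
           SUM(CPU)       AS TotalCPU,
           SUM(Reads)     AS TotalReads
    FROM DBARepository.dbo.trace_import
    GROUP BY CONVERT(nvarchar(200), TextData)
    ORDER BY TotalDuration DESC
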

  • I have set up a remote server-side trace and selected to write data to a table on a development server. Below are the columns created. 60,000 rows were inserted in a few minutes... What would I be looking for in this dataset to indicate the issues?

    SELECT [RowNumber]
          ,[EventClass]
          ,[ApplicationName]
          ,[ClientProcessID]
          ,[DatabaseID]
          ,[EventSubClass]
          ,[HostName]
          ,[LoginName]
          ,[LoginSid]
          ,[NTDomainName]
          ,[NTUserName]
          ,[NestLevel]
          ,[ObjectID]
          ,[ObjectType]
          ,[SPID]
          ,[ServerName]
          ,[StartTime]
          ,[TextData]
          ,[Duration]
          ,[EndTime]
          ,[CPU]
          ,[Reads]
          ,[Writes]
          ,[BinaryData]
          ,[IndexID]
          ,[Mode]
          ,[TransactionID]
          ,[IntegerData]
    FROM [DBARepository].[dbo].[table_trace_12012008]

  • Well, if you included the deadlock events I'd look for any of those first. Then I'd look at duration, CPU, reads, and writes. I forgot to mention it before, but the table scan events would also be good to add.
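
    A hedged sketch of that triage against the table from the earlier post (the deadlock EventClass values shown, 25, 59 and 148, should be verified against the event list discussed further down):

    -- Any deadlock-related rows first
    SELECT *
    FROM DBARepository.dbo.table_trace_12012008
    WHERE EventClass IN (25, 59, 148)   -- Lock:Deadlock, Lock:Deadlock Chain, Deadlock graph

    -- Then the heaviest work by duration, CPU, reads and writes
    SELECT TOP 50 StartTime, HostName, LoginName, Duration, CPU, Reads, Writes,
           CONVERT(nvarchar(400), TextData) AS QueryText
    FROM DBARepository.dbo.table_trace_12012008
    WHERE Duration IS NOT NULL
    ORDER BY Duration DESC
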

  • The table has an ID for EventClass, whereas the Profiler display shows a name... how does T-SQL display the name, and what is the ID of the deadlock events?

    The query below yields 39 rows - all EventClass 12, all Writes are 0, and the CPU and Reads are currently minimal (do these numbers represent seconds?).

    SELECT *
    FROM [DBARepository].[dbo].[table_trace_12012008]
    WHERE HostName NOT IN ('A585') -- my workstation
      AND (cpu IS NOT NULL OR Reads IS NOT NULL OR Writes IS NOT NULL)
    ORDER BY RowNumber

  • barb.wendling (11/28/2008)


    Can you suggest what event selection items to use? Or which template? I have the trace file going to a mapped drive - not to the server.

    I would strongly recommend against the use of a mapped network drive. A trace should be written to a fast local drive; otherwise it can cause performance problems on the server that you're tracing. If all the trace events have to be written across the network you'll see TRACEWRITE waits, as other queries wait for the trace buffers to be written. (A query to check for those waits is sketched after this post.)

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
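
    To see whether trace buffers are actually slowing other queries down, the wait statistics can be checked; a small sketch, assuming SQL Server 2005 or later (sys.dm_os_wait_stats does not exist on SQL Server 2000):

    -- Sessions waiting on trace buffers show up as TRACEWRITE waits
    SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'TRACEWRITE'
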
  • barb.wendling (11/28/2008)


    I have set up a remote server side trace and selected to write data to a table on a development server.

    Since a server-side trace can't write to a table, only to a file, I'm guessing you're using the Profiler GUI. You should be aware that the Profiler GUI with the write-to-table option is the most intensive way to trace and will put the most load on your production server of any of the ways to trace. I know people who have brought their servers down by doing that.

    The table has an ID for EventClass, whereas the Profiler display shows a name... how does T-SQL display the name, and what is the ID of the deadlock events?

    There's a table somewhere in Books Online that lists all the events. (A query against the event catalog is sketched after this post.)

    The query below yields 39 rows - all EventClass 12, all Writes are 0, and the CPU and Reads are currently minimal (do these numbers represent seconds?).

    Reads and writes are in pages; duration and CPU are in milliseconds.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
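
    On SQL Server 2005 and later the event list from Books Online is also exposed as catalog views, which makes it easy to translate the EventClass numbers; a hedged sketch (the trace table name comes from the earlier post):

    -- Map every EventClass ID to its name (12 should come back as SQL:BatchCompleted)
    SELECT e.trace_event_id, c.name AS category_name, e.name AS event_name
    FROM sys.trace_events e
    JOIN sys.trace_categories c ON c.category_id = e.category_id
    ORDER BY e.trace_event_id

    -- Just the deadlock-related events
    SELECT trace_event_id, name
    FROM sys.trace_events
    WHERE name LIKE '%Deadlock%'

    -- Translate the captured rows in place
    SELECT e.name AS EventName, t.*
    FROM DBARepository.dbo.table_trace_12012008 t
    JOIN sys.trace_events e ON e.trace_event_id = t.EventClass
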
  • I am writing the data to a remote server, not the server having the sluggish performance. I hope to run the trace for only a few minutes, when I receive notice that the server is performing poorly.

    The traces I have set up today helped me understand the event selection and data results. I am now able to capture CPU intensive records, long running procedures and errors. Hopefully, this will point me in the direction of why the server is having issues. Then we can fix the problems.

    Thanks to all who helped me with this issue.

  • barb.wendling (11/28/2008)


    I am writing the data to a remote server, not the server having the sluggish performance.

    Don't! The trace should write to a fast local drive on the server that you are tracing. That is the best way to run a trace and will have the least impact on the server.

    If you force SQL to write the trace across the network (which will be slower than the disk), you will cause the trace events to back up inside SQL Server and you may cause worse performance problems than you already have.

    As I said, I know people who have done exactly what you're doing and have brought their servers down by doing so.

    A server-side trace (not the Profiler GUI) that's writing to a fast drive on that server has minimal (almost no) impact on that server.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • OK, so select the Save to File option and write it to a local share. Do I enable file rollover? Do you have recommendations on what the max file size (MB) should be?

    Can you help me with the load? Isn't there a proc to load the data?
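
    A hedged sketch of the rollover settings on the server-side call (the 2 passed to @options is TRACE_FILE_ROLLOVER; the path, size and file count are assumptions), with fn_trace_gettable, mentioned earlier, as the function that loads the data:

    -- Roll to a new file every @maxfilesize MB, keeping at most @filecount files
    DECLARE @TraceID int, @maxfilesize bigint, @filecount int
    SET @maxfilesize = 100      -- MB per file (assumption)
    SET @filecount = 5          -- rollover files to keep (assumption)

    EXEC sp_trace_create @TraceID OUTPUT, 2, N'E:\Traces\Monday', @maxfilesize, NULL, @filecount
    EXEC sp_trace_setstatus @TraceID, 1

    -- After stopping the trace, load all rollover files at once:
    -- SELECT * INTO dbo.MondayTrace FROM fn_trace_gettable(N'E:\Traces\Monday.trc', DEFAULT)
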
