July 13, 2010 at 4:30 am
I need to run a server-side trace on a production server, and I need to gather info on the resources used by the trace.
Since I'm not using Profiler, I can't just use Perfmon to monitor the Profiler executable. So how do I monitor the memory and CPU usage of the server-side trace from within SQL Server?
July 13, 2010 at 5:00 am
To tell the truth, I've never worried too much about resource usage with server-side traces. I'm more interested in how many GB per hour the trace is likely to generate, and this is a product of having a good filter. If your server is running at a level where a server-side trace is too much for it, then a very coarse filter will pick up plenty of performance issues, and once these are solved, you should have capacity to run a finer trace.
For a perfect answer, I'd recommend monitoring your server with the trace on, then monitoring it again with the trace off, and comparing the averages. Unless you are collecting 1 GB per minute, or the trace file is in contention with your data or log files, you may not even notice the impact.
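For the "from within SQL Server" part of the question, one place to start is the sys.traces catalog view (SQL Server 2005 and later), which exposes the buffers a trace has allocated and how many events it has dropped. A sketch:

```sql
-- Check which server-side traces are running and whether they are
-- under pressure (buffer memory, dropped events) from within SQL Server.
SELECT id,
       path,                -- NULL for rowset (Profiler) traces
       buffer_count,        -- in-memory buffers allocated to the trace
       buffer_size,         -- size of each buffer, in KB
       event_count,         -- events captured so far
       dropped_event_count  -- events lost because the buffers filled up
FROM sys.traces
WHERE status = 1;           -- 1 = running
```

A steadily climbing dropped_event_count is usually the first sign that a trace is costing more than the server can comfortably afford.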
July 13, 2010 at 5:40 am
Winston, I'm in the same camp of not worrying about the trace impact, after reading an article on the near-zero impact of a server-side trace, where someone actually compared throughput with no trace vs. a Profiler trace vs. a server-side trace.
Take a look at how the graph in that article shows the server-side trace hardly affects transactions per second: only when he starts emulating 300 users doing 1,200 transactions per second do you see any deviation from running with no trace at all.
Lowell
July 13, 2010 at 5:54 am
I have read that article, Lowell, but I completely disagree with Profiler having a negligible performance hit, as I have brought a production server (a very healthy one, too) to its knees with a Profiler trace before. I admit I didn't use the correct filters, but even so, I got burned, and so experienced first hand the issues that ignorance of the performance effects of a trace can cause.
July 13, 2010 at 6:01 am
Profiler = heavy impact, I agree, but I still believe a server-side trace has negligible/almost nil impact; that was what I was trying to say.
I've placed both a DML trace and a logon audit trace on all my servers, under the (possibly mistaken) belief that the information they provide is valuable; we get a lot of whodunnit finger-pointing in my shop, which is part of the excitement here.
Until I see an adverse impact, I'll keep creating those traces from a startup script when SQL Server starts.
Even better, the DML trace keeps track of slow-running queries, so I can use it to see anything that appears to be running slow, without querying the DMVs for the same info.
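For anyone who wants to try the same thing, here is a sketch of the logon-audit half of that setup as a startup procedure. The file path, size, and procedure name are assumptions, and the procedure must be created in master for sp_procoption to mark it as a startup procedure:

```sql
USE master;
GO
-- Creates a lightweight logon-audit trace when SQL Server starts.
CREATE PROCEDURE dbo.usp_StartAuditTrace
AS
BEGIN
    DECLARE @traceid INT;
    DECLARE @maxsize BIGINT;
    DECLARE @on BIT;
    SET @maxsize = 1024;  -- rollover files of 1 GB each
    SET @on = 1;

    -- 2 = TRACE_FILE_ROLLOVER; the path must NOT include the .trc extension
    EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\audit', @maxsize;

    -- Event 14 = Audit Login; columns 1 = TextData, 11 = LoginName, 14 = StartTime
    EXEC sp_trace_setevent @traceid, 14, 1,  @on;
    EXEC sp_trace_setevent @traceid, 14, 11, @on;
    EXEC sp_trace_setevent @traceid, 14, 14, @on;

    EXEC sp_trace_setstatus @traceid, 1;  -- 1 = start the trace
END;
GO
-- Run the procedure automatically every time the service starts
EXEC sp_procoption N'dbo.usp_StartAuditTrace', 'startup', 'on';
```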
Lowell
July 13, 2010 at 6:23 am
It completely depends on what events and columns the trace is collecting. Most people are running very benign traces capturing RPC & Batch Completes with only 8-12 columns of data. They may never see ANY evidence of the trace running on the server. It's when you start capturing lots and lots of information through a trace, execution plans and statement completes and others, that you'll see an impact on performance.
I would absolutely advocate running the trace without worrying about impact, assuming you're limiting the number and type of events, you're outputting to a file, and you're not using the GUI. We run traces 24/7 on some of our servers, and have done so for years, without ever having any issues, apart from having to deal with all that data.
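A trace along the lines described here, scripted rather than run through the GUI, might look like this sketch (the output path and the 5-second duration filter are assumptions; note that sp_trace_setfilter requires the filter value's type to match the column, so the Duration threshold must be a bigint):

```sql
DECLARE @traceid INT;
DECLARE @maxsize BIGINT;
DECLARE @mindur  BIGINT;
DECLARE @on      BIT;
DECLARE @e       INT;
SET @maxsize = 512;      -- rollover files of 512 MB
SET @mindur  = 5000000;  -- 5 seconds; Duration is in microseconds
SET @on      = 1;

-- 2 = TRACE_FILE_ROLLOVER; the path must NOT include the .trc extension
EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\perf', @maxsize;

-- Events: 10 = RPC:Completed, 12 = SQL:BatchCompleted
-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads, 18 = CPU
SET @e = 10;
WHILE @e <= 12
BEGIN
    EXEC sp_trace_setevent @traceid, @e, 1,  @on;
    EXEC sp_trace_setevent @traceid, @e, 12, @on;
    EXEC sp_trace_setevent @traceid, @e, 13, @on;
    EXEC sp_trace_setevent @traceid, @e, 16, @on;
    EXEC sp_trace_setevent @traceid, @e, 18, @on;
    SET @e = @e + 2;     -- skip event 11
END;

-- Keep only events with Duration >= 5 seconds (0 = AND, 4 = >=)
EXEC sp_trace_setfilter @traceid, 13, 0, 4, @mindur;

EXEC sp_trace_setstatus @traceid, 1;  -- start
```

The duration filter is what keeps "all that data" manageable; without it, a busy server will happily write every completed batch to disk.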
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning