October 14, 2009 at 10:58 am
Is it possible to save the results of a SQL trace directly into a database table (without saving it to a file first)? I know this is possible using Profiler, but I want to know if it is possible using the T-SQL (sp_trace_*) procedures?
October 14, 2009 at 11:07 am
You should be able to use the fn_trace_gettable function to load the trace data directly into a table.
I have an automated Profiler trace that kicks off every morning, and I use something similar to the following to load the data into a table for further analysis:
SELECT * INTO SOMETABLENAME
FROM fn_trace_gettable('F:\SQLServer\ProfilerTraces\trcStatements20091014_ProfilerTrace_0.trc', default)
ORDER BY [Duration] DESC;
You can use SELECT * FROM ::fn_trace_getinfo(0) to get information about which traces are currently running.
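For example, something like this grabs the file path of a trace and loads it straight into a table (just a sketch; TraceStaging is a made-up name, and you would normally stop the trace before importing the file):

DECLARE @path nvarchar(256);

-- property 2 is the trace file name for the given trace id (1 here)
SELECT @path = CONVERT(nvarchar(256), value)
FROM ::fn_trace_getinfo(1)
WHERE property = 2;

-- import the whole file, following any rollover files (the default)
SELECT * INTO TraceStaging
FROM fn_trace_gettable(@path, default);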
Let me know if that answers your question or if you need more detail.
October 14, 2009 at 11:08 am
No.
You can write the trace to a file and import it into a table easily, though. You could write an import script and have everything scheduled in jobs: Job 1 starts the trace, Job 2 stops the trace and starts a new one with a new file, and Job 3 imports the trace files into the database, or something similar.
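Here's a rough sketch of what Jobs 1 and 2 would run (the file path is made up; event 12 = SQL:BatchCompleted, event 10 = RPC:Completed, column 1 = TextData, column 13 = Duration, per Books Online):

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 100;  -- MB per rollover file
SET @on = 1;

-- Job 1: create the trace with file rollover (option 2) and start it
EXEC sp_trace_create @TraceID OUTPUT, 2, N'F:\Traces\DailyTrace', @maxfilesize;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;   -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @TraceID, 10, 1, @on;   -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 10, 13, @on;  -- RPC:Completed, Duration
EXEC sp_trace_setstatus @TraceID, 1;           -- start

-- Job 2: look the trace id up in fn_trace_getinfo, then stop and close it
-- EXEC sp_trace_setstatus @TraceID, 0;  -- stop
-- EXEC sp_trace_setstatus @TraceID, 2;  -- close and delete the definition

Job 3 is then just the fn_trace_gettable import shown earlier in the thread.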
October 14, 2009 at 12:26 pm
Thanks guys, I was thinking along the same lines but was wondering if there is another way to do it.
Actually, I want to monitor the performance of the server by capturing the following data on a daily basis:
1. How many total queries ran per day
2. The maximum duration, CPU, reads, and writes of any query per day
3. How many queries ran under the benchmark duration (1 second in our case) per day
4. How many queries ran over the benchmark duration (1 second in our case) per day
So I am planning to set up a trace for all executions (stored procedure and T-SQL batch) using the sp_trace_* procedures and to save the above four aggregated results in a table.
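Something like this is what I have in mind for the daily rollup (just a sketch against a made-up staging table; note that on SQL 2005 and later the Duration column from a trace is in microseconds, so the 1-second benchmark is 1,000,000):

SELECT
    COUNT(*) AS TotalQueries,
    MAX(Duration) AS MaxDuration,
    MAX(CPU) AS MaxCPU,
    MAX(Reads) AS MaxReads,
    MAX(Writes) AS MaxWrites,
    SUM(CASE WHEN Duration < 1000000 THEN 1 ELSE 0 END) AS UnderBenchmark,
    SUM(CASE WHEN Duration >= 1000000 THEN 1 ELSE 0 END) AS OverBenchmark
FROM TraceStaging
WHERE EventClass IN (10, 12);  -- RPC:Completed and SQL:BatchCompleted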
So now my other question is: does this approach seem right? I am concerned about the load this trace will add to the server. Also, is it good practice to run this kind of trace on a regular basis?
Appreciate your help.
October 14, 2009 at 6:45 pm
Just make sure you write the trace file to a drive that does not have data or log files on it; I typically use the backup drive. Then do a straight import into a staging table and populate your summary tables from there 🙂
October 15, 2009 at 9:14 am
A couple of things to add:
1. Here's a free tool you can use to aggregate the data for you: http://www.scalesql.com/cleartrace/. I use it to load my trace files into a table and then view the aggregated data. It works well for what I need, and the price is right. I believe there's also a way to run it from the command line; I just haven't looked into that yet.
2. When you set up an automated trace, make sure you limit it to only the events, columns, and data that you need. I typically filter out the system SPIDs (those below 50) and batch jobs that I'm not interested in. I also make sure to configure a max file size and enable file rollover. As long as you're careful about what you're profiling, the impact on production should be minimal.
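In raw sp_trace_* terms, the SPID filter looks something like this (just a sketch; the trace id here is a placeholder, and the max file size and rollover themselves are set when you call sp_trace_create, as in the script earlier in the thread):

DECLARE @TraceID int, @intfilter int;
SET @TraceID = 1;     -- the id returned by sp_trace_create, or from fn_trace_getinfo
SET @intfilter = 50;

-- column 12 = SPID, logical operator 0 = AND, comparison operator 4 = >=
-- i.e. keep only events where SPID >= 50 (user sessions)
EXEC sp_trace_setfilter @TraceID, 12, 0, 4, @intfilter;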
October 15, 2009 at 11:14 am
Thanks guys, appreciate your help.