February 4, 2010 at 1:20 pm
I want to save a count of SELECT statements (queries) that my SQL Server receives, along with the datetime each occurred and the login ID.
I will later be using this to create a report of the average # of queries each hour from 5 AM - 5 PM for a certain set of login IDs (I don't want sa or certain other logins to be included).
I looked at sys.dm_tran_database_transactions. It seems to show database_id = 2 (tempdb) in the results; I suppose that's because every SELECT actually runs in tempdb.
I also looked at sys.dm_exec_query_stats. With this one, I would sum the execution_count column.
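Something along these lines is what I had in mind for the query-stats approach (just a rough sketch; it only sees plans still in cache, and filtering on the statement text is just one way to pick out the SELECTs):

-- Rough sketch: sum execution counts for cached SELECT plans.
-- Plans that have already been flushed from cache are not counted.
SELECT SUM(qs.execution_count) AS select_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%SELECT%';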
Because I need to capture all the transactions, I will have to set up a job that runs often enough that a query doesn't execute and then get flushed from cache (due to high activity) before the job captures it.
Alternatively, I suppose I could create an event notification for SELECT statements and insert the results into a table.
Has anyone done something like this?
Steve
February 4, 2010 at 1:28 pm
Yes. I've never used an event notification for this, but I am thinking that would be the best method. The problem with a job is that it can fire at most once every 10 seconds. If you must catch all SELECT statements throughout the day, then an event notification or a trace would be best.
Does this need to occur for only one day or on a persistent basis?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
February 4, 2010 at 1:45 pm
Persistently. We create reports of average query counts per hour for the current month and for previous months.
If I used the job, I doubt that statements would be swapped out that fast, but I'm really not sure. I would need to monitor it for a few days and see how long the statements stay in the DMV.
Regarding the event notification, I am somewhat concerned about the queue filling up, but by running a job that calls a stored procedure often enough to read from the queue (maybe every hour), I doubt it would be a concern. I have done this for another task, logging successful logins to a table.
I don't know what the default queue size is for SQL Server 2005. Again, it's something I would monitor for a few days, and I'd adjust the frequency of the job execution accordingly.
Steve
February 9, 2010 at 11:53 am
It seems I cannot use an event notification. There is no trace event I can use to create the event notification with.
I tried using some of the SQL Trace events from Books Online, and even some other trace events I found when I opened SQL Profiler. They all come back with an invalid event type error.
I will pursue using the DMVs once I can figure out the right one(s) to use.
Steve
February 10, 2010 at 12:19 pm
Perhaps you could use a SQL Server Profiler trace that saves to a table, with minimal fields (perhaps TextData only, though I'd say add in Reads as well) and appropriate filters (TextData LIKE '%SELECT%', if you don't care about stored procedures) on the SQL:BatchCompleted event.
February 10, 2010 at 12:43 pm
Be very careful about having a Profiler trace write to a table... especially if it's a busy production server. Writing to a table or outputting to Profiler uses the SQL Trace rowset provider and incurs more overhead than a server-side trace with the file provider (which has almost zero overhead).
This is fairly well documented in the MS Press book 'Inside Microsoft SQL Server 2005: Query Tuning and Optimization', where a poorly defined Profiler trace caused a 50% drop in transactional throughput, whereas the same trace defined as a server-side, file-based trace showed a 0% drop.
Set up a server-side trace for the RPC:Completed and SQL:BatchCompleted events.
Have it save to a sufficiently large file to catch all the activity.
You may also choose to scope it to a specific user DB.
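Something like this is roughly what the trace definition would look like (a sketch only -- the file path, file size, and login filter are placeholders you would adjust for your environment):

-- Sketch of a server-side trace for RPC:Completed (event 10) and
-- SQL:BatchCompleted (event 12).
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 1024   -- MB per rollover file
SET @on = 1

-- 2 = TRACE_FILE_ROLLOVER; SQL Server appends the .trc extension itself
EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\QueryCounts', @maxfilesize, NULL

-- Capture just the columns needed: TextData (1), LoginName (11), StartTime (14)
EXEC sp_trace_setevent @TraceID, 10, 1,  @on
EXEC sp_trace_setevent @TraceID, 10, 11, @on
EXEC sp_trace_setevent @TraceID, 10, 14, @on
EXEC sp_trace_setevent @TraceID, 12, 1,  @on
EXEC sp_trace_setevent @TraceID, 12, 11, @on
EXEC sp_trace_setevent @TraceID, 12, 14, @on

-- Exclude logins you don't care about (column 11 = LoginName, 0 = AND, 1 = not equal)
EXEC sp_trace_setfilter @TraceID, 11, 0, 1, N'sa'

-- 1 = start the trace (0 = stop, 2 = close and delete the definition)
EXEC sp_trace_setstatus @TraceID, 1
SELECT @TraceID AS TraceID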
Use this to select the file results into a table when you turn off the trace:
SELECT * INTO [dbo].[TraceResults]
FROM fn_trace_gettable('\\path\to\trace\file.trc', 1);
Then you can select to your heart's content, although it's pretty hard to group by the TextData column.
Craig Outcalt
February 10, 2010 at 12:48 pm
An excellent comment, SQLBOT.
Note that grouping by the TextData column is very easy if you first convert it to a VARCHAR column and then spend the time to 'normalize' that column with UPDATE statements based on the following:
-- In the WHERE clause the % acts as a LIKE wildcard; in the SET it is a
-- literal, so every matching variant collapses to one canonical string.
UPDATE TraceTable
SET    TextData = 'SELECT A FROM B WHERE A = ''%'''
WHERE  TextData LIKE 'SELECT A FROM B WHERE A = ''%''';
Note that some characters that look like spaces aren't (tabs, non-breaking spaces and the like), so replacing spaces with underscores in the LIKE pattern can help too, since an underscore matches any single character.
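Once the data is normalized, something like this would roll the counts up by hour and login (just a sketch -- it assumes the trace captured LoginName and StartTime and was loaded into the TraceResults table from the earlier post; the DATEADD/DATEDIFF trick keeps the hourly bucket 2005-friendly):

-- Count statements per login per hour
SELECT LoginName,
       DATEADD(hour, DATEDIFF(hour, 0, StartTime), 0) AS query_hour,
       COUNT(*) AS query_count
FROM   dbo.TraceResults
GROUP BY LoginName, DATEADD(hour, DATEDIFF(hour, 0, StartTime), 0)
ORDER BY query_hour, LoginName;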
February 10, 2010 at 1:53 pm
Tracing doesn't do it because I need to constantly count the # of queries from certain users, not for one day, but always.
It's used to report on trends in the # of user queries submitted, by hour of the day, over several months.
February 10, 2010 at 2:31 pm
sgambale (2/10/2010)
Tracing doesn't do it because I need to constantly count the # of queries from certain users, not for one day, but always. It's used to report on trends in the # of user queries submitted, by hour of the day, over several months.
If you really want accurate results, I'm not sure there's a good way unless you either run a continuous trace in Profiler, or you run a packet sniffer at the network layer continuously. Anything based on the cache that's run repetitively can miss statements that get flushed from the cache quickly, for instance if your server is under very high utilization.
February 11, 2010 at 4:24 am
sgambale (2/10/2010)
Tracing doesn't do it because I need to constantly count the # of queries from certain users, not for one day, but always. It's used to report on trends in the # of user queries submitted, by hour of the day, over several months.
You could scope your trace to just those users and auto-start it. If the activity for those users isn't heavy, you could be very happy with that option.
Querying the cache is a pretty unstable solution. I would avoid it if accuracy is important.
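If you go the auto-start route, the usual trick (just a sketch here -- the procedure name is made up) is to wrap the trace-creation script above in a parameterless stored procedure in master and mark it as a startup procedure:

-- Assumes dbo.usp_StartQueryCountTrace (a hypothetical name) wraps the
-- sp_trace_create script; it must live in master and take no parameters.
USE master
GO
EXEC sp_procoption @ProcName    = N'dbo.usp_StartQueryCountTrace',
                   @OptionName  = 'startup',
                   @OptionValue = 'on'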
Craig Outcalt
February 11, 2010 at 8:09 am
I suppose I could auto-start a trace, but it's not worth the extra hit on the server for something that is of fairly minor importance.
Too much work and processing for this.
Thanks,
Steve
February 11, 2010 at 8:21 am
Just in case you missed it in the earlier post, a server-side trace incurs little to no performance impact.
Still, it's a big project with lots of moving parts no matter how it gets done.
It's always the little things, isn't it :hehe:
Good luck with it.
Craig Outcalt
February 11, 2010 at 8:39 am
Craig,
This is a very good solution on second read. I appreciate the comments. So if I set up the auto-start trace on the server itself and have it output to a file, then read the file as needed (maybe daily), it should have minimal overhead? About 8 years ago I ran a Profiler trace and it hammered the performance of the SQL Server. When we upgraded to a server with a faster CPU and more memory, I could run it without a problem.
At this point, I will revisit this in a month or so, while I write other reports, such as finding, executing, and reporting the run times of long-running queries so I can address them and make performance improvements.
We do this for an Oracle DW database that is over 1 TB, and it helps us see whether those queries are running well (we run the queries every 2 hours).
If I have any questions on the right way to setup the trace, I'll post here.
Thanks,
Steve