June 23, 2005 at 8:54 am
I have been asked to set up a trace for auditing one of our SQL Servers, and I have a few questions about how it's supposed to work.
I start the trace on the server using a script:
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 5 -- maximum file size in MB
-- option 2 = TRACE_FILE_ROLLOVER: start a new file when @maxfilesize is reached
exec @rc = sp_trace_create @TraceID output, 2, N'd:\mssql\TRACE\audit-changes', @maxfilesize, NULL
-- ... sp_trace_setevent calls for the audited events and columns ...
-- status 1 = start the trace
exec sp_trace_setstatus @TraceID, 1
Now I want to look at what it's collecting, but it seems I have to either close the trace or wait until it hits the max file size and rolls over to a new file. I've tried opening the file in Profiler and querying it with fn_trace_gettable. Is there a way to look at the data while the trace is still collecting?
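For reference, this is roughly what I tried (SQL Server 2000 syntax):

-- read the trace file back as a table; default = read through all rollover files
select *
from ::fn_trace_gettable(N'd:\mssql\TRACE\audit-changes.trc', default)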
Also, I found instructions on how to make this start when SQL Server starts, but it looks like it will overwrite the previous trace files unless I script a file name change. I'd like it to go to the next file as it does with the file roll-over option. I suspect that because I'm redefining the trace each time I'm just out of luck.
Thanks in advance for your help.
- Nancy
June 23, 2005 at 9:32 am
Just wait for the trace to roll over and you can get the data.
We created a new file name each time we started it, using the start date as part of the file name. We run a simple trace to collect transactional data all day, every day, and it doesn't hurt performance as long as you keep the set of events and columns reasonable. It's a great resource for troubleshooting, reporting the top-running queries, etc.
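Using the variables from your script, the idea is just something like this (untested as written):

declare @FileName nvarchar(256)
-- append yyyymmdd so each day's trace gets its own file
set @FileName = N'd:\mssql\TRACE\audit-changes-' + convert(nvarchar(8), getdate(), 112)
exec @rc = sp_trace_create @TraceID output, 2, @FileName, @maxfilesize, NULL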
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 23, 2005 at 9:51 am
The only problem with waiting is that we're not collecting tons of data. Our main goal is to audit the application vendor and maintenance work, which should generate only minimal traffic. We're also auditing critical data changes such as account and user tables, which should happen once a month per user/account (password/pin changes). The application monitors traffic through itself, so we see no need to duplicate that.
I'm thinking my best strategy is to stop and start the trace every day, which will probably require removing the definition and recreating it so I can change the file name.
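If I'm reading the docs right, that part would be something like:

-- status 0 = stop the trace, status 2 = close it and delete the definition
exec sp_trace_setstatus @TraceID, 0
exec sp_trace_setstatus @TraceID, 2
-- then sp_trace_create again with the new file name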
Thanks,
Nancy
June 23, 2005 at 9:57 am
We stop & start ours daily (not because we have so little data, but because we have so much). If you just set the file name using the date, you won't need to worry about moving yesterday's file out of the way each time you restart.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 23, 2005 at 10:00 am
Is there any way to change the file name once the trace has been created, or do I need to do an sp_trace_setstatus @id, 2 and then sp_trace_create?
Thanks,
Nancy
June 23, 2005 at 10:42 am
You can't change it on the fly once it's running. What we do is set a stop point when we start the trace, so that it runs for 24 hours without our intervention, and we name the file with the date of the start point. Then, when the next day arrives, the running trace stops itself and we start the next one with a new file name.
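The stop point is just the @stoptime parameter on sp_trace_create (the one you're passing as NULL). Simplified from what we actually run:

declare @StopTime datetime
-- stop at the next midnight
set @StopTime = dateadd(day, 1, convert(datetime, convert(nvarchar(8), getdate(), 112)))
exec @rc = sp_trace_create @TraceID output, 2, @FileName, @maxfilesize, @StopTime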
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 28, 2005 at 10:32 am
Are you doing it manually?
You can set up a stored procedure and schedule a job that runs every day with a new file name.
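Assuming you wrap the trace definition in a procedure, say dbo.StartAuditTrace (the name is just an example), the job setup would look roughly like this:

exec msdb.dbo.sp_add_job @job_name = N'Start audit trace'
exec msdb.dbo.sp_add_jobstep @job_name = N'Start audit trace',
    @step_name = N'run proc', @subsystem = N'TSQL',
    @command = N'exec master.dbo.StartAuditTrace'
-- freq_type 4 / freq_interval 1 = every day; active_start_time 0 = midnight
exec msdb.dbo.sp_add_jobschedule @job_name = N'Start audit trace',
    @name = N'daily at midnight', @freq_type = 4, @freq_interval = 1,
    @active_start_time = 0
exec msdb.dbo.sp_add_jobserver @job_name = N'Start audit trace'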
Ananth G
June 28, 2005 at 11:52 am
I'm currently setting up the stored procedures and scheduled jobs to automate it. My plan is to set up a stored procedure that defines and starts the trace to run until midnight (thanks for the idea), using a file name that incorporates the date and time, and then schedule it to run at midnight and on server startup.
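Here's the rough sketch I'm working from, pulling the pieces from this thread together (the procedure name is just a placeholder, and the sp_trace_setevent calls would be the ones from my original definition):

-- created in master so it can be marked as a startup procedure
create procedure dbo.StartAuditTrace
as
begin
    declare @rc int, @TraceID int, @maxfilesize bigint, @StopTime datetime
    declare @FileName nvarchar(256)

    set @maxfilesize = 5
    -- one file per start, named with the date and time (yyyymmdd-hhmmss)
    set @FileName = N'd:\mssql\TRACE\audit-changes-'
        + convert(nvarchar(8), getdate(), 112) + N'-'
        + replace(convert(nvarchar(8), getdate(), 108), ':', '')
    -- run until the next midnight
    set @StopTime = dateadd(day, 1, convert(datetime, convert(nvarchar(8), getdate(), 112)))

    exec @rc = sp_trace_create @TraceID output, 2, @FileName, @maxfilesize, @StopTime
    -- ... sp_trace_setevent calls ...
    exec sp_trace_setstatus @TraceID, 1
end

and then, for the server-startup piece:

exec sp_procoption N'StartAuditTrace', 'startup', 'true'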
Thanks,
Nancy
June 29, 2005 at 8:09 am
Make sure there's a few minutes' gap between stopping the current trace and starting the new one, so that all the data in the buffer gets written to the .trc file.
Otherwise, it will be carried over into the new trace file.
June 29, 2005 at 8:31 am
Forgive me for being thick, but this sounds counter-intuitive. I realize there's no guarantee that the stop and the start will happen at exactly the same time no matter how I schedule it, but if both traces happened to be running at the same time, wouldn't the data show up in both? And what's the issue if it only gets written to the new trace file, other than that we can't see it until the next day?
I'm actually more concerned that the data will be lost, since this is being done for a compliance issue, so I really ought to overlap the traces by a minute or two.
What do you think?
-Nancy
June 29, 2005 at 8:53 am
You can specify the total time the trace should run when you create it, which guarantees the stop happens at a predictable time; just use a value a little under 24 hours.
Regarding the data: depending on the gap and the resources available when the trace stops, it will show up in either both .trc files or just one.
Either way, no data is lost, so there's no need to overlap; you can run each trace for exactly 24 hours.
One job scheduled to run once daily is enough.
June 29, 2005 at 4:08 pm
As long as you know that you may have some number of transactions that cross over... No, it's not a big deal at all.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
July 1, 2005 at 9:20 am
No problem; you can proceed that way.