Trace file is 14 GB! (21 hours of data collected)

  • I ran a trace with only the RPC:Completed event (6 columns)

    for 21 hours, and the file size is 14 GB!

    Is that normal for a trace file?

    There is one .NET app that uses the database server.

    They also run some reports on that server.

    If I don't catch the long-running queries with this trace,

    I won't even be able to add other events.

    It's generating files that are too big.

  • Busy database? If you get a lot of transactions/sec, that's easily doable. I've had one that generated 500 MB in half an hour, and that was a data warehouse-type DB. If that's a heavily used OLTP system, I can believe it.

    Do you really need to trace for 21 hours? I normally run traces for an hour at most, otherwise there's just too much data to read through. Make sure you get a time of day that's representative, and do two 1-hour traces at different times of the day if necessary.

    What columns did you take?

    TextData, SPID, Duration, CPU, Reads, Writes?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • On the subject of trace files, I have a situation where the application running against a database periodically experiences severe systemic contention. We haven't had much luck figuring out what is going on by looking at system information once the contention is already occurring. What we would like to see is the 15 minutes or half an hour of SQL activity leading up to the contention event.

    In a perfect world, then, I would like to run SQL Profiler continuously with older statements automatically discarded, much as the Windows Event Viewer can be set to overwrite older events as needed. One thought I had was to configure the Profiler to write out 500 MB trace files with automatic rollover. Older files could then be periodically deleted so that the target directory does not fill up. What seems to be happening, I am told, is that when this is done, the Profiler keeps all of the cumulative information on the local drive even though the final trace files are written elsewhere.

    Any thoughts on how to run continuous monitoring such that older data is automatically discarded, leaving only the most recent one or two hours' worth of data?
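
    Along the same lines, it looks like a server-side trace can do this directly: sp_trace_create's file rollover option plus its @filecount argument caps how many rollover files are kept, and older ones are deleted automatically. A minimal sketch (SQL Server 2005; the path and sizes are placeholders, not tested here):

    DECLARE @TraceID int, @maxfilesize bigint
    SET @maxfilesize = 500                    -- MB per rollover file
    EXEC sp_trace_create
         @TraceID OUTPUT,
         2,                                   -- option 2 = TRACE_FILE_ROLLOVER
         N'E:\Traces\rolling_trace',          -- placeholder path (.trc is appended automatically)
         @maxfilesize,
         NULL,                                -- no stop time
         4                                    -- keep only the 4 most recent rollover files
    -- add events/columns with sp_trace_setevent, then start it:
    -- EXEC sp_trace_setstatus @TraceID, 1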

  • Is it possible to stop and start the trace using a script (BAT, VBScript, VB .NET) so that the cumulative data is dumped when the trace is restarted?
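
    Presumably something like this, run through sqlcmd from a BAT file (e.g. sqlcmd -S YourServer -E -i stop_trace.sql), would handle the stopping side; this is only a sketch, and the trace ID is an assumption:

    DECLARE @TraceID int
    SET @TraceID = 2                      -- the ID returned by sp_trace_create when the trace was started
    EXEC sp_trace_setstatus @TraceID, 0   -- 0 = stop the trace
    EXEC sp_trace_setstatus @TraceID, 2   -- 2 = close the trace and delete its definition from the server

    Rerunning the creation script (pointed at a new file name) would then start a fresh trace, so the data collected so far stays in the old files.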

  • It looks like it's a heavily used OLTP system.

    But we don't know what exactly is generating so much SQL.

    They suspect it's some custom services+.NET application.

    But I need to find out exactly.

    Their timeouts normally occur from 6:30 AM till 3:00 PM,

    but it's not like they occur every half an hour. You can have one error

    at 7:40 AM and another error at 11:10 AM,

    so it's hard to catch it if you run a trace for half an hour.

    The trace file has 7,500,000 records.

    I am loading it now into the SQL table TraceResults from Gail's example.

    The problem is that I am working for this client remotely.

    I can't sit and watch the system during the whole day.

    I can only spend maybe half an hour in the morning and

    half an hour in the afternoon.

    I can spend more time on this after 10 PM, but by then all the errors have already occurred and it's too late to start any traces.
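
    Once the load into TraceResults finishes, I suppose a query along these lines should surface the worst statements (a sketch, assuming the columns TextData, Duration, Reads, Writes, CPU, StartTime from Gail's example):

    SELECT TOP 50 TextData, Duration, Reads, Writes, CPU, StartTime
    FROM TraceResults
    ORDER BY Duration DESC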

  • Gail! Help!

    INSERT INTO TraceResults

    (TextData, Duration, Reads, Writes, CPU, StartTime)

    SELECT --top 100000

    TextData, Duration/1000, Reads, Writes, CPU, StartTime

    FROM fn_trace_gettable('E:\20090505-203736.trc',1)

    has been running for 8.5 min and

    set transaction isolation level

    read uncommitted

    begin transaction

    select count(*) from TraceResults

    commit transaction

    returns "0" records.

    What is it doing?

    Is it buffering?

    I suspect it's going to run indefinitely,

    but why doesn't it return any errors?

    I had to change the data type of the TextData column to varchar(max).

    Otherwise it wasn't writing anything.

    I tested the top 100,000 records with varchar(max)

    and it worked.

    I stopped that INSERT and did a test with "SELECT TOP 200000...".

    It worked. It took 1 min 52 sec.

    My COUNT started to grow only 39 seconds after the INSERT was started.

    So it is buffering?

    Maybe I just have to be patient and leave it running overnight?
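
    I guess I could also check from another connection what the INSERT session is actually doing, something like this (a sketch, assuming the SQL Server 2005 DMVs; 53 stands in for the SPID of the session running the INSERT):

    SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
           r.cpu_time, r.reads, r.writes, r.total_elapsed_time
    FROM sys.dm_exec_requests AS r
    WHERE r.session_id = 53   -- placeholder SPID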

  • I found this topic on MSDN:

    http://msdn.microsoft.com/en-us/library/ms189034.aspx

    (but it's about SQL Server 2008 so it might not work in 2005)

    It says:

    ...meets or exceeds the specified maximum file size, you are notified and given the opportunity either to increase the maximum file size or specify a new file.

    That would be very handy.

    The only problem is that I am running an unattended trace, so there is nobody to respond to

    that "would you like to increase or create another trace file?" option.

    It would be nice if you could configure the trace to automatically generate a new

    file if it exceeds 2GB.

    Actually the article says it can be done using File Rollover:

    Enabling File Rollover

    --------------------

    The file rollover option causes SQL Server to close the current file and create a new file when the maximum file size is reached. The new file has the same name as the previous file, but an integer is appended to the name to indicate its sequence. For example, if the original trace file is named filename_1.trc, the next trace file is filename_2.trc, and so on. If the name assigned to a new rollover file is already used by an existing file, the existing file is overwritten unless it is read only. The file rollover option is enabled by default when you are saving trace data to a file.

    I will test it, and if it works I will set the max file size to 2 GB

    and let it do file rollover.

  • Riga,

    The script below is set to create rollover files of 250 MB and to retain the five most recent iterations.

    /****************************************************/

    /* Created by: SQL Server Profiler 2005 */

    /* Date: 05/12/2009 07:01:46 PM */

    /****************************************************/

    /*

    This utility script runs a SQL trace for certain events (errors, and RPC/T-SQL batch starting and completed events).

    The trace is set to write 250 mb files with rollover. The five most recent rollover files are

    retained.*/

    /* RECORD THE TRACE ID WHEN THIS SCRIPT IS RUN. THIS TRACE ID IS NEEDED SO THAT THE

    TRACE CAN BE STOPPED. SEE BOTTOM OF SCRIPT FOR SQL TO STOP THE TRACE. */

    -- Create a Queue

    declare @rc int,@TraceID int,@maxfilesize bigint,@on bit,@intfilter int,@bigintfilter bigint,

    @OutputFileNameRoot nvarchar(1000),@DBName varchar(250),

    @OutputFileName nvarchar(1000)

    /* set to 250 (mb) - the script is set to retain the 5 most recent rollovers of the trace file. */

    set @maxfilesize = 250

    /* The person running this script must be able to create/update/delete files in the output location. */

    SET @OutputFileNameRoot='\\itds25\TraceFiles\traceoutput'

    /* Set this to the database of interest. */

    SET @DBName='your database name here'

    SET @OutputFileName=@OutputFileNameRoot

    exec @rc = sp_trace_create @TraceID output, 2, @OutputFileName, @maxfilesize, NULL, 5   -- option 2 = file rollover; keep the 5 most recent rollover files

    if (@rc != 0) goto error

    -- Set the events

    set @on = 1

    exec sp_trace_setevent @TraceID, 22, 31, @on   -- event 22 = ErrorLog

    exec sp_trace_setevent @TraceID, 22, 1, @on

    exec sp_trace_setevent @TraceID, 22, 9, @on

    exec sp_trace_setevent @TraceID, 22, 6, @on

    exec sp_trace_setevent @TraceID, 22, 10, @on

    exec sp_trace_setevent @TraceID, 22, 14, @on

    exec sp_trace_setevent @TraceID, 22, 3, @on

    exec sp_trace_setevent @TraceID, 22, 11, @on

    exec sp_trace_setevent @TraceID, 22, 35, @on

    exec sp_trace_setevent @TraceID, 22, 12, @on

    exec sp_trace_setevent @TraceID, 21, 31, @on   -- event 21 = EventLog

    exec sp_trace_setevent @TraceID, 21, 1, @on

    exec sp_trace_setevent @TraceID, 21, 9, @on

    exec sp_trace_setevent @TraceID, 21, 2, @on

    exec sp_trace_setevent @TraceID, 21, 6, @on

    exec sp_trace_setevent @TraceID, 21, 10, @on

    exec sp_trace_setevent @TraceID, 21, 14, @on

    exec sp_trace_setevent @TraceID, 21, 3, @on

    exec sp_trace_setevent @TraceID, 21, 11, @on

    exec sp_trace_setevent @TraceID, 21, 35, @on

    exec sp_trace_setevent @TraceID, 21, 12, @on

    exec sp_trace_setevent @TraceID, 33, 31, @on   -- event 33 = Exception

    exec sp_trace_setevent @TraceID, 33, 1, @on

    exec sp_trace_setevent @TraceID, 33, 9, @on

    exec sp_trace_setevent @TraceID, 33, 6, @on

    exec sp_trace_setevent @TraceID, 33, 10, @on

    exec sp_trace_setevent @TraceID, 33, 14, @on

    exec sp_trace_setevent @TraceID, 33, 3, @on

    exec sp_trace_setevent @TraceID, 33, 11, @on

    exec sp_trace_setevent @TraceID, 33, 35, @on

    exec sp_trace_setevent @TraceID, 33, 12, @on

    exec sp_trace_setevent @TraceID, 10, 15, @on   -- event 10 = RPC:Completed

    exec sp_trace_setevent @TraceID, 10, 31, @on

    exec sp_trace_setevent @TraceID, 10, 16, @on

    exec sp_trace_setevent @TraceID, 10, 1, @on

    exec sp_trace_setevent @TraceID, 10, 9, @on

    exec sp_trace_setevent @TraceID, 10, 17, @on

    exec sp_trace_setevent @TraceID, 10, 2, @on

    exec sp_trace_setevent @TraceID, 10, 10, @on

    exec sp_trace_setevent @TraceID, 10, 18, @on

    exec sp_trace_setevent @TraceID, 10, 3, @on

    exec sp_trace_setevent @TraceID, 10, 11, @on

    exec sp_trace_setevent @TraceID, 10, 35, @on

    exec sp_trace_setevent @TraceID, 10, 12, @on

    exec sp_trace_setevent @TraceID, 10, 13, @on

    exec sp_trace_setevent @TraceID, 10, 6, @on

    exec sp_trace_setevent @TraceID, 10, 14, @on

    exec sp_trace_setevent @TraceID, 11, 1, @on   -- event 11 = RPC:Starting

    exec sp_trace_setevent @TraceID, 11, 9, @on

    exec sp_trace_setevent @TraceID, 11, 2, @on

    exec sp_trace_setevent @TraceID, 11, 6, @on

    exec sp_trace_setevent @TraceID, 11, 10, @on

    exec sp_trace_setevent @TraceID, 11, 14, @on

    exec sp_trace_setevent @TraceID, 11, 3, @on

    exec sp_trace_setevent @TraceID, 11, 11, @on

    exec sp_trace_setevent @TraceID, 11, 35, @on

    exec sp_trace_setevent @TraceID, 11, 12, @on

    exec sp_trace_setevent @TraceID, 12, 15, @on   -- event 12 = SQL:BatchCompleted

    exec sp_trace_setevent @TraceID, 12, 31, @on

    exec sp_trace_setevent @TraceID, 12, 16, @on

    exec sp_trace_setevent @TraceID, 12, 1, @on

    exec sp_trace_setevent @TraceID, 12, 9, @on

    exec sp_trace_setevent @TraceID, 12, 17, @on

    exec sp_trace_setevent @TraceID, 12, 6, @on

    exec sp_trace_setevent @TraceID, 12, 10, @on

    exec sp_trace_setevent @TraceID, 12, 14, @on

    exec sp_trace_setevent @TraceID, 12, 18, @on

    exec sp_trace_setevent @TraceID, 12, 3, @on

    exec sp_trace_setevent @TraceID, 12, 11, @on

    exec sp_trace_setevent @TraceID, 12, 35, @on

    exec sp_trace_setevent @TraceID, 12, 12, @on

    exec sp_trace_setevent @TraceID, 12, 13, @on

    exec sp_trace_setevent @TraceID, 13, 12, @on   -- event 13 = SQL:BatchStarting

    exec sp_trace_setevent @TraceID, 13, 1, @on

    exec sp_trace_setevent @TraceID, 13, 9, @on

    exec sp_trace_setevent @TraceID, 13, 6, @on

    exec sp_trace_setevent @TraceID, 13, 10, @on

    exec sp_trace_setevent @TraceID, 13, 14, @on

    exec sp_trace_setevent @TraceID, 13, 3, @on

    exec sp_trace_setevent @TraceID, 13, 11, @on

    exec sp_trace_setevent @TraceID, 13, 35, @on

    -- Set the Filters

    /* limit trace to database of interest as specified at top of script. */

    SELECT @intfilter=dbid FROM sysdatabases WHERE [Name]=@DBName

    exec sp_trace_setfilter @TraceID, 3, 1, 0, @intfilter

    /* exclude SQL profiler application. */

    exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler'

    -- Set the trace status to start

    exec sp_trace_setstatus @TraceID, 1

    -- display trace id for future references

    select TraceID=@TraceID

    goto finish

    error:

    select ErrorCode=@rc

    finish:

    GO

    RETURN

    /* The SQL statements below are for stopping the trace. It is very important that when the trace is

    started that the trace ID is recorded. To stop the trace, highlight all of the below

    statements and execute them.

    */

    DECLARE @TraceID INT

    /* set this to the trace ID that was noted when the trace was started (2 here is just an example) */

    SET @TraceID=2

    /* stop trace. */

    EXEC sp_trace_setstatus @TraceID, 0

    /* clear trace from memory. */

    EXEC sp_trace_setstatus @TraceID, 2

  • I would simply set a few filters to restrict what is logged, like duration > 0 sec (or even higher, like > 10 ms). Maybe limit it to the database ID of concern, reads > some amount, etc.
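
    For example, something along these lines added to the trace definition before it is started (a sketch; the thresholds are illustrative, @TraceID comes from sp_trace_create, and note that Duration in a SQL 2005 server-side trace is recorded in microseconds):

    DECLARE @duration bigint, @reads bigint
    SET @duration = 10000    -- 10 ms, since trace Duration is stored in microseconds
    SET @reads = 1000
    EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration   -- column 13 = Duration, operator 4 = >=
    EXEC sp_trace_setfilter @TraceID, 16, 0, 4, @reads      -- column 16 = Reads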

    If you are looking for blocking issues there is a blocked process report that can be configured. See here in BOL: ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/e8acb408-938d-4b36-81dd-04f087410cc5.htm

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • I found a simpler solution.

    I just loaded the trace file into multiple tables:

    TraceResults_1

    TraceResults_2

    TraceResults_3

    TraceResults_4

    TraceResults_5

    etc.

    split by StartTime ranges.

    INSERT INTO TraceResults_1

    (TextData, Duration, Reads, Writes, CPU, StartTime)

    SELECT TextData, Duration/1000, Reads, Writes, CPU, StartTime

    FROM fn_trace_gettable('E:\20090505-203736.trc',1)

    WHERE StartTime >= '2009-05-05 20:37:36'

    AND

    StartTime < '2009-05-05 22:00:00'
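
    (Side note: the second argument to fn_trace_gettable is the number of rollover files to read; passing default instead of 1 should pull in the initial file plus any rollover files, e.g. FROM fn_trace_gettable('E:\20090505-203736.trc', default).)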
