November 5, 2007 at 9:52 pm
Comments posted to this topic are about the item Tip for determining I/O Heavy Queries
November 6, 2007 at 3:27 pm
An interesting take on an old problem.
I personally turn Profiler on for a while, capturing "statement completed" events for the database in question. I filter on Reads >= 20 and Duration > 1000 to cut down on the amount of data captured.
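If you would rather script that than click through Profiler, here is a rough server-side sketch (mine, for SQL Server 2005; event 41 is SQL:StmtCompleted, and the column and operator codes come from the sp_trace_setevent and sp_trace_setfilter topics in Books Online -- adjust the path and thresholds to taste):

DECLARE @TraceID INT, @On BIT, @MaxFileSize BIGINT, @Reads BIGINT, @Duration BIGINT
SET @On = 1
SET @MaxFileSize = 50 -- MB per rollover file

-- Create a stopped trace that writes to a file
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\IOHeavy', @MaxFileSize, NULL

-- Event 41 = SQL:StmtCompleted; columns: 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads
EXEC sp_trace_setevent @TraceID, 41, 1, @On
EXEC sp_trace_setevent @TraceID, 41, 12, @On
EXEC sp_trace_setevent @TraceID, 41, 13, @On
EXEC sp_trace_setevent @TraceID, 41, 16, @On

-- Filters: logical operator 0 = AND; comparison operator 4 = >=, 2 = >
SET @Reads = 20
EXEC sp_trace_setfilter @TraceID, 16, 0, 4, @Reads
SET @Duration = 1000000 -- 2005 server-side traces record Duration in microseconds
EXEC sp_trace_setfilter @TraceID, 13, 0, 2, @Duration

EXEC sp_trace_setstatus @TraceID, 1 -- start the trace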
I find it irritating that you cannot (at least in 2000) specify NOT NULL in the filters.
Anyway, I almost always find some evil code that takes over a minute to run and causes hundreds of thousands of disk reads.
It is quick and dirty, but it has not failed me yet.
And for those who are hyper-paranoid about Profiler causing performance problems of its own: today's servers are powerful enough to handle Profiler in short bursts. I would rather solve the code-related problem quickly than worry too much about Profiler-induced overhead.
November 8, 2007 at 8:11 am
We have applied the suggestions that Ryan highlighted from Michael's and Solomon's articles to our Siebel database.
When we look at the output from Profiler (with the trigger activated) we do not see any SQL statements.
Has anyone else tried this on a Siebel database?
What could be causing this lack of content in the TextData column?
thanks
Gary
November 8, 2007 at 8:27 am
Gary G (11/8/2007)
What could be causing this lack of content in the TextData column?
Hello Gary. I don't think I mentioned this in my article, but for the trigger to provide information, the trace table (and hence the trigger) needs to be on the same database server (or instance) as the sessions being traced, since DBCC INPUTBUFFER can only see local SPIDs; it does not matter which actual database the table resides in. I am not sure if this is what you already tried, but it's my initial guess without knowing exactly what you tried. If you are still having the problem, did you use the same events that the article suggested? Certain events will never be able to show DBCC info.
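A quick way to sanity-check that locality rule (52 is just an example session id; substitute any active SPID from sp_who2):

-- Returns one row (EventType, Parameters, EventInfo) for a LOCAL session only
DBCC INPUTBUFFER(52)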
Take care,
Solomon...
P.S. Thank you, Ryan, for the kudos on my article 😀
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 8, 2007 at 4:05 pm
I have implemented Michael's article successfully. I have a SQL trace running and capturing data into the table IndexCapture.
But when I try to implement the trigger provided by Solomon on table IndexCapture (I changed the table name in the trigger), the trigger is created successfully, but immediately afterward an error pops up in SQL Profiler, 'Failed to save trace data to table', and the trace stops.
Any ideas why this might be happening? I am working on SQL Server 2005 SP2, not SQL Server 2000.
The table and trigger are in the same database and the same SQL Server instance.
Thanks!
November 8, 2007 at 5:58 pm
KB (11/8/2007)
But when I try to implement the trigger provided by Solomon on table IndexCapture ( I changed the table name in the trigger), the trigger gets created successfully but immediately after that an error pops up in SQL Profiler 'Failed to save trace data to table' and the trace is stopped.
Hello KB. I think I found the problem. In SQL Server 2005, the "EventInfo" field returned by the DBCC INPUTBUFFER command has increased in size from 255 to 4000. So, the CREATE TABLE #DBCCInfo line should look as follows:
CREATE TABLE #DBCCInfo (EventType NVARCHAR(30), Parameters INT, EventInfo NVARCHAR(4000)) -- EventInfo was 255 in SQL Server 2000
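For anyone who does not have the article handy, that line sits inside the trigger, which in simplified form looks something like this (the article has the full version):

CREATE TRIGGER tr_TraceTable_GetInputBuffer
ON dbo.TraceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON

    -- Trace-to-table rows arrive one at a time, so a single-row grab is safe
    DECLARE @SPID INT, @SQL NVARCHAR(100)
    SELECT @SPID = SPID FROM inserted
    SET @SQL = N'DBCC INPUTBUFFER(' + CAST(@SPID AS NVARCHAR(10)) + N') WITH NO_INFOMSGS'

    CREATE TABLE #DBCCInfo (EventType NVARCHAR(30), Parameters INT, EventInfo NVARCHAR(4000))

    -- DBCC output can only be captured via INSERT ... EXEC
    INSERT INTO #DBCCInfo EXEC(@SQL)

    UPDATE tt
    SET tt.TextData = info.EventInfo
    FROM dbo.TraceTable tt
    INNER JOIN inserted ins
            ON ins.RowNumber = tt.RowNumber
    CROSS JOIN #DBCCInfo info
    WHERE tt.TextData IS NULL
END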
After adding the trigger, if the trace stops for any reason, here is a way to test the trigger:
INSERT INTO dbo.TraceTable (SPID) VALUES (@@SPID) -- fires the trigger for your own session so any error surfaces in your query window
I tried this and got an error stating that data would be truncated. That led me to look at the temp table definition and then to Books Online to see the values that DBCC INPUTBUFFER was reporting.
ALSO, it appears that I was a bit hasty in posting this the first time, so I am editing now to add this paragraph. Even after increasing the EventInfo field to 4000, the trace still failed with the 'Failed to save trace data to table' error and stopped. After more investigating I was able to work around the problem: for some reason, adding the trigger to the table while the trace is running causes it. I am not exactly sure why, but that is definitely the problem. So, the trick is to follow these steps:
1) Start the trace (this creates the table)
2) Pause the trace
3) Run the script to create the trigger
4) Un-pause the trace
I hope this helps.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 9, 2007 at 2:29 am
Hi Solomon,
Thanks for that. Once we ran Profiler and the trigger on the same box as the database, it worked and started to populate the TextData column.
We are monitoring the Scan:Started event and trying to capture the SQL code that is using the index. The trigger populates the TextData column, but only with cursor operations, e.g. sp_cursor, sp_cursorprepexec, sp_cursorunprepare, etc.
Do you know any way to actually capture the SQL being executed?
The Profiler details I'm using are:
Columns
EventClass
EventSubClass
ObjectID
IndexID
ApplicationName
DatabaseID
TextData
SPID
StartTime
Filters
Application not like
SQL Profiler
MS SQLEM
SQLAgent - Alert Engine
SQLAgent - Job Manager
SQL Query Analyzer
Spotlight on SQL Server
SQLDMO_1
MS SQLEM - Data Tools
SQLAgent - TSQL
DatabaseID = 7 (our Siebel db)
LoginName not like sa
ObjectID >= 100
This trace is capturing about 120 events per second, with only one user on the Siebel application accessing the database.
thanks
Gary
November 9, 2007 at 9:34 am
Gary G (11/9/2007)
Do you know anyway to actually capture the SQL being executed?
Hey again, Gary. Some events simply do not offer that much info, unfortunately. However, I was able to modify the UPDATE statement to include the only other insight into the executed SQL that I am aware of: the sys.dm_exec_sql_text(sql_handle) dynamic management function. Just replace the UPDATE statement you have with this one (the differences are the SET clause and the three lines between CROSS JOIN and WHERE):
UPDATE tt
SET tt.TextData = COALESCE(info.EventInfo,'') + ' --SQL-> ' + COALESCE(sqltext.text, '')
FROM dbo.TraceTable tt
INNER JOIN inserted ins
ON ins.RowNumber = tt.RowNumber
CROSS JOIN #DBCCInfo info
INNER JOIN sys.sysprocesses sp -- this table has the SPID and sql_handle
ON sp.spid = tt.SPID -- match SPID between processes and TraceTable
CROSS APPLY sys.dm_exec_sql_text(sp.sql_handle) sqltext -- this view gets the SQL
WHERE tt.TextData IS NULL
Now, keep in mind that the output from DBCC INPUTBUFFER is often the same as the "text" field returned by dm_exec_sql_text(). But if they are ever different, this will certainly show it, since it always displays both.
Also, keep in mind that the above modification to the original UPDATE only works in SQL Server 2005. For SQL Server 2000 it would take a little more work to re-engineer it around the ::fn_get_sql() function, since there is no CROSS APPLY in SQL Server 2000; a rough sketch follows.
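Something along these lines should work for 2000 (an untested sketch: ::fn_get_sql() requires SP3 or later, and since trace-to-table rows arrive one at a time, grabbing a single handle per firing is safe):

DECLARE @SPID INT, @Handle BINARY(20), @SqlText NVARCHAR(4000)

SELECT @SPID = SPID FROM inserted

SELECT @Handle = sql_handle -- sysprocesses has both the SPID and the sql_handle
FROM master.dbo.sysprocesses
WHERE spid = @SPID

IF @Handle IS NOT NULL AND @Handle <> 0x00
    SELECT @SqlText = CAST([text] AS NVARCHAR(4000)) FROM ::fn_get_sql(@Handle)

UPDATE tt
SET tt.TextData = COALESCE(info.EventInfo, '') + ' --SQL-> ' + COALESCE(@SqlText, '')
FROM dbo.TraceTable tt
INNER JOIN inserted ins
        ON ins.RowNumber = tt.RowNumber
CROSS JOIN #DBCCInfo info
WHERE tt.TextData IS NULL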
I hope this helps.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 9, 2007 at 11:41 am
Hi Solomon,
I had already changed the EventInfo field to 4000, sorry I didn't mention that in my reply.
The workaround you gave is working great, thanks a lot!!
March 27, 2009 at 11:21 am
Good article, Solomon. I will try to implement it on our Siebel server. Siebel with SQL Server is a deadly combination; performance has been an issue of late. (By the way, this is a new project assigned to me following a company acquisition.)
SQL DBA.
March 27, 2009 at 11:40 am
SanjayAttray (3/27/2009)
Good article, Solomon. Will try to implement it on our siebel server.
Hello Sanjay and thanks. However, the author of this article is Ryan Cooper so he should be getting your compliment :-).
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
April 7, 2009 at 10:34 am
Hi all,
How do the results gleaned from this setup compare to what's available in the SQL2005 "missing index" dynamic management views?
April 7, 2009 at 1:27 pm
They would not directly relate at all. This setup uses the size of each existing index to estimate the impact of reading it from disk, so if there is no index, it obviously will not show up here.
However, if you add an index recommended by those DMVs (after reviewing it yourself), you should see a reduction in the I/O cost this method reports for the other indexes on that table, and hopefully a reduction overall for the table. A sketch of the usual missing-index query follows.
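For reference, the usual way to peek at those DMVs is something like this (my sketch for SQL Server 2005; the DMVs reset on instance restart, so let the workload run a while first, and treat the output as candidates to review, not gospel):

SELECT TOP 10
       mid.statement AS [table],
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_total_user_cost,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats migs
INNER JOIN sys.dm_db_missing_index_groups mig
        ON mig.index_group_handle = migs.group_handle
INNER JOIN sys.dm_db_missing_index_details mid
        ON mid.index_handle = mig.index_handle
-- Rough priority: frequency of use * cost * estimated improvement
ORDER BY migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) DESC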