June 4, 2015 at 3:48 pm
I use this code in a utility procedure (for performance testing) but it is really slow.
For example, a session with three events is taking 5 seconds to complete this query:
DECLARE @xml xml=
(
SELECT CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = @name
);
with data as
(
select
convert(varchar(128),convert(varbinary(128),'0x'+n.value('(action[@name="context_info"]/value/text())[1]','varchar(128)'),1)) context
, n.value('(data[@name="duration"]/value/text())[1]','int')/1000.0 duration
, n.value('(data[@name="cpu_time"]/value/text())[1]','bigint')/1000.0 cpu_time
, n.value('(data[@name="physical_reads"]/value/text())[1]','bigint') physical_reads
, n.value('(data[@name="logical_reads"]/value/text())[1]','bigint') logical_reads
, n.value('(data[@name="writes"]/value/text())[1]','bigint') writes
, n.value('(data[@name="row_count"]/value/text())[1]','bigint') row_count
, n.value('(data[@name="statement"]/value/text())[1]','nvarchar(max)') statement
from @xml.nodes('RingBufferTarget/event[@name="sql_statement_completed"]') x(n)
)
select
context
, statement
, avg(duration) as [avg_duration (ms)]
, min(duration) as [min_duration (ms)]
, max(duration) as [max_duration (ms)]
, avg(cpu_time) as [avg_cpu_time (ms)]
, min(cpu_time) as [min_cpu_time (ms)]
, max(cpu_time) as [max_cpu_time (ms)]
, avg(physical_reads) as avg_physical_reads
, min(physical_reads) as min_physical_reads
, max(physical_reads) as max_physical_reads
, avg(logical_reads) as avg_logical_reads
, min(logical_reads) as min_logical_reads
, max(logical_reads) as max_logical_reads
, avg(writes) as avg_writes
, min(writes) as min_writes
, max(writes) as max_writes
, avg(row_count) as avg_row_count
, min(row_count) as min_row_count
, max(row_count) as max_row_count
, count(*) as sample_count
from data
group by context, statement
order by [avg_duration (ms)];
So, I was wondering (considering the buffer is usually only holding a few hundred events) why it is so slow, and whether there is a faster way to query it?
Thanks
MM
June 4, 2015 at 5:34 pm
I generally suggest outputting a file and querying that. It's still not fast, but you're going to generally be better off, especially in terms of the amount of information stored. Yeah, the buffer is just that, recent events, and it gets cleared regularly.
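If you want to keep your current session rather than recreate it, you can usually just bolt a file target onto it and read that back with sys.fn_xe_file_target_read_file. Something like this (a sketch only - I'm assuming a session called [queryperf] and a placeholder path you'd swap for your own):
-- Add a file target to the existing session (sketch; stop/restart the session first if the server complains)
ALTER EVENT SESSION [queryperf] ON SERVER
ADD TARGET package0.event_file (SET filename = N'C:\xe\queryperf.xel', max_file_size = (20), max_rollover_files = (10));
-- Read the file(s) back; the function accepts a wildcard path
SELECT CAST(event_data AS xml) AS event_data_xml
FROM sys.fn_xe_file_target_read_file(N'C:\xe\queryperf*.xel', NULL, NULL, NULL);
The function returns one row per event, so you end up shredding a small XML document per event instead of one big buffer document.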
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 4, 2015 at 5:51 pm
Grant beat me to it.
I only just started using extended events and have a lot to learn, but I have found that querying the file directly has been pretty quick.
Below is an example of how I do it. If I fill this with a couple of hundred events, it still runs in under a second if I comment out the event_data_XML column (though, in this example, I'm not calculating averages, min, max, etc.). Perhaps this example will give you something to monkey around with.
USE tempdb
GO
SET NOCOUNT ON;
-- (1) create a getnums function
IF OBJECT_ID('dbo.GetNums') IS NOT NULL DROP FUNCTION dbo.GetNums;
GO
CREATE FUNCTION dbo.GetNums(@rows int)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH
L1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1)) t(N)),
L3(N) AS (SELECT 1 FROM L1 a, L1 b, L1 c),
Nums AS (SELECT N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM L3 a, L3 b, L3 c)
SELECT TOP(@rows) N
FROM Nums
GO
-- (2) Create and start an EE session
CREATE EVENT SESSION [queryperf] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file(SET filename=N'S:\queryperf.xel',max_file_size=(20),max_rollover_files=(10))
WITH
(
MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=120 SECONDS, MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=ON
);
ALTER EVENT SESSION [queryperf] ON SERVER STATE = START;
GO
-- (3) Fill it up with events
IF OBJECT_ID('tempdb..#x100K') IS NOT NULL DROP TABLE #x100K;
IF OBJECT_ID('tempdb..#x1M') IS NOT NULL DROP TABLE #x1M;
IF OBJECT_ID('tempdb..#x10M') IS NOT NULL DROP TABLE #x10M;
SELECT x = newid() INTO #x100K FROM dbo.GetNums(100000);
SELECT x = newid() INTO #x1M FROM dbo.GetNums(1000000);
SELECT x = newid() INTO #x10M FROM dbo.GetNums(10000000);
GO 20
-- (4) Drop the session
DROP EVENT SESSION [queryperf] ON SERVER;
-- (5) Get results
SELECT q.duration,q.cpu_time,q.physical_reads,q.logical_reads,q.writes,/*event_data_XML,*/statement,timestamp
FROM
(
SELECT
duration=e.event_data_XML.value('(//data[@name="duration"]/value/text())[1]','int'),
cpu_time=e.event_data_XML.value('(//data[@name="cpu_time"]/value/text())[1]','int'),
physical_reads=e.event_data_XML.value('(//data[@name="physical_reads"]/value/text())[1]','int'),
logical_reads=e.event_data_XML.value('(//data[@name="logical_reads"]/value/text())[1]','int'),
writes=e.event_data_XML.value('(//data[@name="writes"]/value/text())[1]','int'),
statement=e.event_data_XML.value('(//data[@name="statement"]/value/text())[1]','nvarchar(max)'),
TIMESTAMP=e.event_data_XML.value('(//@timestamp)[1]','datetime2(7)'), *
FROM
(
SELECT CAST(event_data AS XML) AS event_data_XML
FROM sys.fn_xe_file_target_read_file('S:\queryperf*.xel', NULL, NULL, NULL)
)e
)q
WHERE q.[statement] LIKE 'select x%' --Filters out all the detritus that we're not interested in!
ORDER BY q.[timestamp] ASC;
-- Itzik Ben-Gan 2001
June 4, 2015 at 6:02 pm
The more I see about XE, the less I like it.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 5, 2015 at 2:51 am
Jeff Moden (6/4/2015)
The more I see about XE, the less I like it.
Yes, that's why I'm trying to improve my experience 😀
Grant Fritchey (6/4/2015)
I generally suggest outputting a file and querying that. It's still not fast, but you're going to generally be better off, especially in terms of the amount of information stored. Yeah, the buffer is just that, recent events, and it gets cleared regularly.
Thanks Grant, I was using the ring buffer simply because I am using a highly targeted session and I size the ring buffer to match the test I am running, so lost events aren't an issue - they shouldn't happen, unless I am mistaken. That made me think, if I target the Ring Buffer, is it "shared" so I could still lose events or do I get "my own"?
I will try using a file target to see if it is better.
Alan, thanks - really similar to the code I am using, except for the file target, so I'll definitely try yours and compare performance.
Does anyone have any good links comparing the benefits of Extended Events and Trace?
MM
June 5, 2015 at 1:47 pm
Jeff Moden (6/4/2015)
The more I see about XE, the less I like it.
Ha! The more I use it, the more in love with it I get. What it can do so far outstrips what you can do with trace that it's insane. But, like trace, you have to hop through a few flaming hoops.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 5, 2015 at 1:51 pm
mister.magoo (6/5/2015)
Jeff Moden (6/4/2015)
The more I see about XE, the less I like it.
Yes, that's why I'm trying to improve my experience 😀
Grant Fritchey (6/4/2015)
I generally suggest outputting a file and querying that. It's still not fast, but you're going to generally be better off, especially in terms of the amount of information stored. Yeah, the buffer is just that, recent events, and it gets cleared regularly.
Thanks Grant, I was using the ring buffer simply because I am using a highly targeted session and I size the ring buffer to match the test I am running, so lost events aren't an issue - they shouldn't happen, unless I am mistaken. That made me think, if I target the Ring Buffer, is it "shared" so I could still lose events or do I get "my own"?
I will try using a file target to see if it is better.
Alan, thanks - really similar to the code I am using, except for the file target, so I'll definitely try yours and compare performance.
Does anyone have any good links comparing the benefits of Extended Events and Trace?
Unless you specifically tell it to keep all events (bad idea, by the way), it limits the amount of memory it consumes and will absolutely dump events. That's good; it's by design. However, that's why you output to a file. You'll capture all (or at least most) events that way.
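To put some syntax to that, here's roughly where those knobs live - a sketch only, with a made-up session name and sizes:
-- EVENT_RETENTION_MODE is the "how willing are you to drop events" setting:
--   ALLOW_SINGLE_EVENT_LOSS (default), ALLOW_MULTIPLE_EVENT_LOSS, NO_EVENT_LOSS (keep everything - the bad idea)
-- The ring buffer belongs to your session (it isn't shared) and is sized on the target itself.
CREATE EVENT SESSION [queryperf_rb] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer (SET max_memory = 4096, max_events_limit = 1000) -- KB cap and event-count cap
WITH (EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 5 SECONDS);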
For comparisons... I don't have a specific article in mind, but read Jonathan Kehayias's stuff. He outlines all the cool things that are available with extended events: improved filtering, better memory management, radically enhanced and extended events, causality tracking (one of my favorites), and more.
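For a taste of what the filtering and causality tracking look like, the session DDL is along these lines (a rough sketch; the event, predicate values, and names are just examples):
-- Predicates are evaluated at capture time, so filtered-out events never reach the target;
-- TRACK_CAUSALITY = ON stamps each event with an activity id so related events can be correlated later.
CREATE EVENT SESSION [filtered_demo] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE ([duration] > (1000000) AND [sqlserver].[database_name] = N'MyTestDb') -- duration is in microseconds
)
ADD TARGET package0.event_file (SET filename = N'C:\xe\filtered_demo.xel')
WITH (TRACK_CAUSALITY = ON);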
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning