February 24, 2020 at 8:41 pm
I'm working on gathering performance monitoring statistics for all my servers and generating a report based on the data, so we can go back to any point in time and see how a server was performing.
For CPU I'm mainly interested in CPU queue length, % user processor time, and % total processor time, but I'm not able to find any queries for pulling that kind of data.
I found the query below, which is good for telling me which database is using the most CPU, but it doesn't really tell me how the server overall is faring. I was wondering if any of you run similar queries, or if you have a suggestion for how to pull the data I want and store it in a table. Thanks!
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
GO
;WITH DB_CPU_Stats
AS (
SELECT
F_DB.DatabaseID
, ISNULL( DB_NAME(F_DB.DatabaseID), CASE F_DB.DatabaseID
WHEN 32767
THEN 'Internal ResourceDB'
ELSE CONVERT(VARCHAR(255), F_DB.DatabaseID)
END
) AS [DatabaseName]
, SUM(qs.total_worker_time) / 1000 AS [CPU_Time_Ms] -- total_worker_time is reported in microseconds; convert to milliseconds
, SUM(qs.total_logical_reads) AS [Logical_Reads]
, SUM(qs.total_logical_writes) AS [Logical_Writes]
, SUM(qs.total_logical_reads + qs.total_logical_writes) AS [Logical_IO]
, SUM(qs.total_physical_reads) AS [Physical_Reads]
, SUM(qs.total_elapsed_time) AS [Duration_MicroSec]
, SUM(qs.total_clr_time) AS [CLR_Time_MicroSec]
, SUM(qs.total_rows) AS [Rows_Returned]
, SUM(qs.execution_count) AS [Execution_Count]
, COUNT(*) AS [Plan_Count]
FROM
sys.dm_exec_query_stats AS qs
CROSS APPLY (SELECT CONVERT(INT, value) AS [DatabaseID] FROM sys.dm_exec_plan_attributes(qs.plan_handle) WHERE attribute = N'dbid') AS F_DB
GROUP BY
F_DB.DatabaseID
)
SELECT
ROW_NUMBER() OVER (ORDER BY [CPU_Time_Ms] DESC) AS [Rank_CPU]
, DatabaseName
, [CPU_Time_Hr] = CONVERT(DECIMAL(15, 2), ([CPU_Time_Ms] / 1000.0) / 3600 )
, CAST([CPU_Time_Ms] * 1.0 / SUM(CASE [CPU_Time_Ms] WHEN 0 THEN 1 ELSE [CPU_Time_Ms] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [CPU_Percent]
, [Duration_Hr] = CONVERT(DECIMAL(15, 2), ([Duration_MicroSec] / 1000000.0) / 3600)
, CAST([Duration_MicroSec] * 1.0 / SUM(CASE [Duration_MicroSec] WHEN 0 THEN 1 ELSE [Duration_MicroSec] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Duration_Percent]
, [Logical_Reads]
, CAST([Logical_Reads] * 1.0 / SUM(CASE [Logical_Reads] WHEN 0 THEN 1 ELSE [Logical_Reads] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Logical_Reads_Percent]
, [Rows_Returned]
, CAST([Rows_Returned] * 1.0 / SUM(CASE [Rows_Returned] WHEN 0 THEN 1 ELSE [Rows_Returned] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Rows_Returned_Percent]
, [Reads_Per_Row_Returned] = [Logical_Reads] / (CASE [Rows_Returned] WHEN 0 THEN 1 ELSE [Rows_Returned] END)
, [Execution_Count]
, CAST([Execution_Count] * 1.0 / SUM(CASE [Execution_Count] WHEN 0 THEN 1 ELSE [Execution_Count] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Execution_Count_Percent]
, [Physical_Reads]
, CAST([Physical_Reads] * 1.0 / SUM(CASE [Physical_Reads] WHEN 0 THEN 1 ELSE [Physical_Reads] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Physical_Reads_Percent]
, [Logical_Writes]
, CAST([Logical_Writes] * 1.0 / SUM(CASE [Logical_Writes] WHEN 0 THEN 1 ELSE [Logical_Writes] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Logical_Writes_Percent]
, [Logical_IO]
, CAST([Logical_IO] * 1.0 / SUM(CASE [Logical_IO] WHEN 0 THEN 1 ELSE [Logical_IO] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Logical_IO_Percent]
, [CLR_Time_MicroSec]
, CAST([CLR_Time_MicroSec] * 1.0 / SUM(CASE [CLR_Time_MicroSec] WHEN 0 THEN 1 ELSE [CLR_Time_MicroSec] END) OVER () * 100.0 AS DECIMAL(5, 2)) AS [CLR_Time_Percent]
, [CPU_Time_Ms] AS [CPU_Time_MS]
, [CPU_Time_Ms] / 1000 AS [CPU_Time_Sec]
, [Duration_MicroSec] AS [Duration_MicroSec]
, [Duration_MicroSec] / 1000000 AS [Duration_Sec]
INTO #CPUTempTable
FROM
DB_CPU_Stats
WHERE
DatabaseID > 4 -- exclude system databases (master, tempdb, model, msdb)
AND DatabaseID <> 32767 -- ResourceDB
ORDER BY
[Rank_CPU]
OPTION (RECOMPILE)
--EXEC tempdb.dbo.sp_help @objname = N'#CPUTempTable';
SELECT * FROM #CPUTempTable
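If the goal is to be able to go back to any point in time, one option is to land that temp-table output in a permanent history table on each collection run. A rough sketch, where dbo.DatabaseCpuHistory and CaptureTime are just placeholder names I've made up:
-- One-time setup: create an empty history table shaped like the query output, plus a capture timestamp
IF OBJECT_ID('dbo.DatabaseCpuHistory') IS NULL
    SELECT GETDATE() AS CaptureTime, *
    INTO dbo.DatabaseCpuHistory
    FROM #CPUTempTable
    WHERE 1 = 0;
-- Each collection run: append the current snapshot
INSERT INTO dbo.DatabaseCpuHistory
SELECT GETDATE() AS CaptureTime, *
FROM #CPUTempTable;
-- Reporting later: pull the ranking for any captured window, e.g. a single day
SELECT *
FROM dbo.DatabaseCpuHistory
WHERE CaptureTime >= '20200224' AND CaptureTime < '20200225'
ORDER BY CaptureTime, Rank_CPU;
Scheduling the whole batch as a SQL Agent job would give a regular series of snapshots to report against.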
February 24, 2020 at 10:39 pm
You can get SQL Server-specific information from sys.dm_os_performance_counters, but that doesn't include general OS information like the CPU queue length. With some poking around, I've found you can get SQL Server's CPU usage from the ring buffers, though. For example, this query shows the last five minutes:
DECLARE @ms_ticks bigint
SELECT @ms_ticks = ms_ticks
FROM sys.dm_os_sys_info
SELECT DATEADD(ms, ([timestamp] - @ms_ticks), GETDATE()) AS notification_time,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS CPUIdle,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS CPUSQL,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/PageFaults)[1]', 'int') AS PageFaults,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]', 'int') AS MemoryUsed
FROM (SELECT timestamp, convert(xml, record) AS record FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR') rb
WHERE [timestamp] > @ms_ticks - 300000 -- last 5 minutes (timestamp and ms_ticks are milliseconds since the server started)
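Since sys.dm_os_performance_counters came up but no example was shown, here is a minimal sketch of reading a few counters from it; the counter names are just examples, and on a named instance the object_name is prefixed with MSSQL$InstanceName: rather than SQLServer::
SELECT RTRIM(object_name) AS object_name,   -- columns are padded nchar, hence RTRIM
       RTRIM(counter_name) AS counter_name,
       RTRIM(instance_name) AS instance_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy', N'User Connections', N'Batch Requests/sec');
-- The "/sec" counters are cumulative since startup, so take two samples and diff them to get a rate.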
February 25, 2020 at 12:42 pm
You simply can't get all of that through a T-SQL query. This is where WMI queries come into play, along with some kind of external language to pull the data together. PowerShell is probably the way to go: you can pull the information you want and then turn around and write it to tables, if that's what you're after.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 25, 2020 at 3:35 pm
That ring buffer query is pretty interesting and looks like something I might be able to work with.
Grant, I've found some PowerShell that gives me interesting results; unfortunately, my department only wants to use PowerShell as a last resort.
I liked this one in particular. I wish I could figure out how to break out the individual processes so I could see which ones are using the CPU while still getting all the performance counters, but it would likely need to be split into more than one query to do that.
$ComputerName = 'computername'
for ($i = 0; $i -lt 5; $i++) {
    # Take one sample of the CPU and queue-length counters
    $counters = Get-Counter -Counter '\Processor(_Total)\% Processor Time',
        '\Processor(_Total)\% Privileged Time',
        '\Processor(_Total)\% User Time',
        '\System\Processor Queue Length' -ComputerName $ComputerName
    # Shape each sample into Server / Counter / Value columns
    $properties = @(
        @{n='Server';e={$_.Path.Split('\')[2]}},
        @{n='Counter';e={$_.Path.Split('\')[-1]}},
        @{n='Value';e={[math]::Round($_.CookedValue)}}
    )
    $counters.CounterSamples | Select-Object -Property $properties
    Start-Sleep -Seconds 1
}
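That counter set won't break out individual OS processes, but if part of the question is which SQL Server sessions are driving the CPU at a given moment, a simple sketch against the session and request DMVs (not Perfmon) could sit alongside it:
-- Cumulative CPU per user session, heaviest first
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       s.cpu_time AS cpu_time_ms, -- cumulative for the life of the session
       s.status,
       r.command                  -- NULL when the session is idle
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
       ON r.session_id = s.session_id
WHERE s.is_user_process = 1
ORDER BY s.cpu_time DESC;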
February 26, 2020 at 4:42 pm
I've kind of settled on this... any thoughts?
DECLARE @ms_ticks bigint
SELECT @ms_ticks = ms_ticks
FROM sys.dm_os_sys_info
DECLARE @TotalCpuTime FLOAT   -- total non-idle (user + kernel) time over the window; denominator for the splits below
DECLARE @UserModeTime FLOAT   -- user-mode share of the non-idle time
DECLARE @KernelModeTime FLOAT -- kernel-mode share of the non-idle time
SELECT @TotalCpuTime = SUM(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint'))
                     + SUM(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint'))
FROM (SELECT timestamp, CONVERT(xml, record) AS record FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR') rb
WHERE [timestamp] > @ms_ticks - 86400000
SELECT @UserModeTime = SUM(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint')) / @TotalCpuTime
FROM (SELECT timestamp, CONVERT(xml, record) AS record FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR') rb
WHERE [timestamp] > @ms_ticks - 86400000
SELECT @KernelModeTime = SUM(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint')) / @TotalCpuTime
FROM (SELECT timestamp, CONVERT(xml, record) AS record FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR') rb
WHERE [timestamp] > @ms_ticks - 86400000
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS SystemName, GETDATE() AS Date,
CAST(AVG(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')) AS VARCHAR(5)) + '%' AS Average_CPU_Idle_Time,
CAST(MAX(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')) AS VARCHAR(5)) + '%' AS Peak_SQLInstance_CPU_Usage,
CAST(AVG(record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')) AS VARCHAR(5)) + '%' AS Avg_SQLInstance_CPU_Usage,
CAST(CAST(@UserModeTime * 100 AS DECIMAL(18, 2)) AS VARCHAR(6)) + '%' AS [NonIdle_UserModeTimePercent],
CAST(CAST(@KernelModeTime * 100 AS DECIMAL(18, 2)) AS VARCHAR(6)) + '%' AS [NonIdle_KernelModeTimePercent]
FROM (SELECT timestamp, CONVERT(xml, record) AS record FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR') rb
WHERE [timestamp] > @ms_ticks - 86400000 -- data for the last 24 hours
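One caveat and a possible tweak: the scheduler monitor ring buffer only retains a limited number of one-minute samples, so depending on the build it may not actually hold a full 24 hours of history unless the samples are persisted somewhere. And if a single daily roll-up turns out to be too coarse, the same data can be bucketed by hour instead; a rough sketch:
DECLARE @ms_ticks bigint, @now datetime
SELECT @ms_ticks = ms_ticks FROM sys.dm_os_sys_info
SET @now = GETDATE()
SELECT DATEADD(hour, DATEDIFF(hour, 0, rb.notification_time), 0) AS HourBucket, -- truncated to the hour
       AVG(rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')) AS Avg_CPU_Idle,
       AVG(rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')) AS Avg_SQLInstance_CPU,
       MAX(rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')) AS Peak_SQLInstance_CPU
FROM (SELECT DATEADD(ms, [timestamp] - @ms_ticks, @now) AS notification_time,
             CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND [timestamp] > @ms_ticks - 86400000) rb -- last 24 hours of whatever the buffer still holds
GROUP BY DATEADD(hour, DATEDIFF(hour, 0, rb.notification_time), 0)
ORDER BY HourBucket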
February 27, 2020 at 2:54 pm
The only comment I have offhand is that I'm not sure if trying to summarize an entire day would be very useful.
For the kinds of OS-level things you're trying to get, I have to admit I use a third-party tool called PRTG to collect, store, and analyze that data, and only get the database-related details through queries:
February 27, 2020 at 3:05 pm
Thanks for the response, Chris! I believe my infrastructure team uses that same program for some monitoring; I'm not sure exactly. The idea behind getting a day's worth is to get a sense of whether there is too much or too little CPU. By capturing the average and the peak CPU, I feel like that should be an okay measure? Unless I have a fundamental misunderstanding here, which is totally possible.
Thank you
February 27, 2020 at 3:58 pm
That's fine if that meets your needs. With the database level information that I collect, I have a job that gathers it once an hour and stores it for analysis so I can compare the peak usage with what types of things were running at that time. I then have a small dashboard that displays the "simple stuff" like:
Page Life
Connections
Requests
Blocked Requests
Lock Waits
Network Waits
Page Waits
Active Trans
Log Flushes
Page Splits
Compiles
Recompiles
SQL CPU
which gets refreshed once a minute so I can see what's happening right now.
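For what it's worth, most of those items map to sys.dm_os_performance_counters, so a "right now" row for a dashboard like that can be pivoted out of one query. A rough sketch; the counter names below are my guesses at a few of the items above, not Chris's actual setup:
SELECT GETDATE() AS SampledAt,
       MAX(CASE WHEN counter_name = N'Page life expectancy' AND object_name LIKE N'%Buffer Manager%' THEN cntr_value END) AS PageLife,
       MAX(CASE WHEN counter_name = N'User Connections'        THEN cntr_value END) AS Connections,
       MAX(CASE WHEN counter_name = N'Processes blocked'       THEN cntr_value END) AS BlockedRequests,
       MAX(CASE WHEN counter_name = N'SQL Compilations/sec'    THEN cntr_value END) AS Compilations_Cumulative,
       MAX(CASE WHEN counter_name = N'SQL Re-Compilations/sec' THEN cntr_value END) AS Recompilations_Cumulative
FROM sys.dm_os_performance_counters;
-- The "/sec" values are cumulative totals, so a dashboard refreshing every minute would diff consecutive samples to show a per-minute rate.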
February 27, 2020 at 4:14 pm
I do have separate queries I use for things that are more memory related; I don't currently have anything built for monitoring networking, though. I plan to use the last query I posted in combination with the first query I posted in a report, to provide both instance-level CPU data and database-level CPU data.
March 20, 2020 at 2:56 am
Any reason you're spending your precious time building a monitoring tool when there are so many good off-the-shelf options with tried-and-true methods? You're going to spend a very large amount of time reinventing something that a company has already built. If you factor your time into the cost to the company of having you build something, you could have just purchased a product.
Having said that, if your company simply won't buy monitoring software, another option would be to capture perfmon data and load it into a database using relog. Here is an example:
https://nebraskasql.blogspot.com/2014/05/handling-perfmon-logs-with-relog.html