February 7, 2013 at 11:20 pm
Hi Guys,
I am looking for a DMV or script that can give me database usage details (e.g. number of reads and writes over a period), similar to what we have for index usage stats. We aren't really sure which databases are queried by users the most, and which ones are just sitting there for no reason at all.
Please do respond 🙂
Thanks & Regards,
Faisal
February 8, 2013 at 12:49 am
I'll have a try. How about this:
SELECT TOP 10
    [Total Reads] = SUM(qs.total_logical_reads),
    [Execution count] = SUM(qs.execution_count),
    DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
GROUP BY DB_NAME(qt.dbid)
ORDER BY [Total Reads] DESC;

SELECT TOP 10
    [Total Writes] = SUM(qs.total_logical_writes),
    [Execution count] = SUM(qs.execution_count),
    DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
GROUP BY DB_NAME(qt.dbid)
ORDER BY [Total Writes] DESC;
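Bear in mind that sys.dm_exec_query_stats only reflects plans still in the plan cache, so anything whose plan has been evicted or recompiled won't be counted. If you'd rather see reads and writes side by side, a combined variant of the same query (same DMVs, just both totals in one result set) could look like this:

-- One row per database with both logical reads and logical writes.
-- Only counts statements whose plans are currently in the plan cache.
SELECT DatabaseName      = DB_NAME(qt.dbid),
       [Total Reads]     = SUM(qs.total_logical_reads),
       [Total Writes]    = SUM(qs.total_logical_writes),
       [Execution count] = SUM(qs.execution_count)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
GROUP BY DB_NAME(qt.dbid)
ORDER BY SUM(qs.total_logical_reads) + SUM(qs.total_logical_writes) DESC;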
Could anybody explain to me why I get a scroll bar at the bottom of the SQL code?
Greetz
Query Shepherd
February 8, 2013 at 1:55 am
I want page reads as well as I/O reads and writes, i.e. logical and physical reads/writes. We show these results to our sysadmin guys, as they want usage details per database.
Regards,
Faisal
February 8, 2013 at 7:01 am
faisalfarouqi (2/8/2013)
I want page reads as well as I/O reads and writes, i.e. logical and physical reads/writes. We show these results to our sysadmin guys, as they want usage details per database. Regards,
Faisal
You'll have to use perfmon to get disk I/O; it's not in the DMVs. I just ran into this yesterday.
@Pizza, because that's how the site works. If you want to unleash the full beauty of your code on all of us, uninhibited by scroll bars, then use the prettifier with the IFCodes style and post those results 😉
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 8, 2013 at 11:43 am
You can get something close to what you want from the virtual file stats (sys.dm_io_virtual_file_stats), if it's just relative IOPS and throughput for each database that you're after.
This script takes a sample over a ten-second period and calculates the average reads and writes per second, and the average bytes read and written per second.
You could easily modify this to capture over a longer period; a sketch of one way to do that follows the script.
use master;
set nocount on;

-- Snapshot of the cumulative file stats at the start of the sample window
declare @before table (
    capture_time datetime,
    database_name nvarchar(255),
    logical_file_name nvarchar(255),
    bytes_read bigint,
    bytes_written bigint,
    num_reads bigint,
    num_writes bigint
);

-- Snapshot at the end of the sample window
declare @after table (
    capture_time datetime,
    database_name nvarchar(255),
    logical_file_name nvarchar(255),
    bytes_read bigint,
    bytes_written bigint,
    num_reads bigint,
    num_writes bigint
);

-- First capture
insert @before
select getdate(), d.name, mf.name, num_of_bytes_read, num_of_bytes_written, num_of_reads, num_of_writes
from sys.dm_io_virtual_file_stats(null, null) vfs
join sys.master_files mf on vfs.database_id = mf.database_id and vfs.file_id = mf.file_id
join sys.databases d on mf.database_id = d.database_id;

-- Wait ten seconds, then capture again
waitfor delay '00:00:10';

insert @after
select getdate(), d.name, mf.name, num_of_bytes_read, num_of_bytes_written, num_of_reads, num_of_writes
from sys.dm_io_virtual_file_stats(null, null) vfs
join sys.master_files mf on vfs.database_id = mf.database_id and vfs.file_id = mf.file_id
join sys.databases d on mf.database_id = d.database_id;

-- Difference between the two snapshots, scaled to per-second rates
select b.database_name, b.logical_file_name,
    1000 * (a.bytes_read - b.bytes_read) / datediff(ms, b.capture_time, a.capture_time) bytes_read_per_sec,
    1000 * (a.bytes_written - b.bytes_written) / datediff(ms, b.capture_time, a.capture_time) bytes_written_per_sec,
    1000 * (a.num_reads - b.num_reads) / datediff(ms, b.capture_time, a.capture_time) num_reads_per_sec,
    1000 * (a.num_writes - b.num_writes) / datediff(ms, b.capture_time, a.capture_time) num_writes_per_sec
from @before b
join @after a on b.database_name = a.database_name and b.logical_file_name = a.logical_file_name
order by 1, 2;
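If you want to track this over a longer period than a single ad hoc sample, one option is to persist the cumulative counters on a schedule and diff any two capture times. This is only a minimal sketch: the dbo.io_stats_history table, the DBA/utility database it would live in, and the SQL Agent job that runs the insert are all assumptions, not anything from the posts above. Also note the counters in sys.dm_io_virtual_file_stats are cumulative since the instance last started, so they reset on a restart.

-- Minimal sketch: persist periodic snapshots of the cumulative counters.
-- dbo.io_stats_history (in a DBA/utility database) is an assumption; adjust to taste.
if object_id('dbo.io_stats_history') is null
    create table dbo.io_stats_history (
        capture_time datetime not null,
        database_name nvarchar(255) not null,
        logical_file_name nvarchar(255) not null,
        bytes_read bigint,
        bytes_written bigint,
        num_reads bigint,
        num_writes bigint
    );

-- Run this on a schedule (e.g. every 15 minutes via a SQL Agent job);
-- the delta between any two capture_time values gives usage for that window.
insert dbo.io_stats_history
select getdate(), d.name, mf.name,
       vfs.num_of_bytes_read, vfs.num_of_bytes_written,
       vfs.num_of_reads, vfs.num_of_writes
from sys.dm_io_virtual_file_stats(null, null) vfs
join sys.master_files mf on vfs.database_id = mf.database_id and vfs.file_id = mf.file_id
join sys.databases d on mf.database_id = d.database_id;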