May 11, 2011 at 8:58 pm
Hi there,
We're using SQL Server 2008 SP1 and I think we're having I/O related issues. So I searched around a bit and found the DMV sys.dm_io_virtual_file_stats. Great, sounds like what I'm looking for, so I ran the following query:
SELECT a.io_stall
, a.io_stall_read_ms
, a.io_stall_write_ms
, a.num_of_reads
, a.num_of_writes
, a.sample_ms
, a.num_of_bytes_read
, a.num_of_bytes_written
, ((a.size_on_disk_bytes / 1024) / 1024.0) AS size_on_disk_mb
, DB_NAME(a.database_id) AS dbname
, b.name
, a.file_id
, db_file_type = CASE WHEN a.file_id = 2
THEN 'Log'
ELSE 'Data'
END
, UPPER(SUBSTRING(b.physical_name, 1, 2)) AS disk_location
FROM sys.dm_io_virtual_file_stats (NULL, NULL) AS a
INNER JOIN sys.master_files b ON
a.file_id = b.file_id AND
a.database_id = b.database_id
ORDER BY a.io_stall DESC
I get a lot of stuff back, but interestingly enough the io_stall figure for the tempdb data file is 51220497961 ms, i.e. 14227 hours, which sounds pretty big to me. Does that mean that's the total time users have been waiting for I/O to complete on the tempdb data file since the server was last restarted? Okay, so from the following link:
http://msdn.microsoft.com/en-us/library/ms190326.aspx
IO_Stall is "Total time, in milliseconds, that users waited for I/O to be completed on the file."
Now apparently these stats are reset when SQL Server is restarted, so I went and found when tempdb was last created (I've heard it's recreated whenever SQL Server restarts, so it should give me the last restart time). That gives me a date of 2011-03-13 21:00:45, but the problem is that there are only 1433 hours between that date and today. The earlier io_stall figure is far more than the hours between the last restart and now.
I've been reading that the stats in that DMV are cumulative, but that can't be the case with the figures I'm getting. Can anyone point me to somewhere I can find out how best to use this DMV, or offer any insight of your own? All searches on this turn up just the MSDN link I provided above, which tells me what but not how.
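One way to see how a cumulative stall figure can exceed wall-clock uptime: io_stall sums the wait time of every I/O request, and many requests can be outstanding at once, so the total grows faster than real time. A quick arithmetic sketch in Python, using the figures from the post (the concurrency estimate at the end is my own back-of-envelope, not something the DMV reports):

```python
MS_PER_HOUR = 60 * 60 * 1000

io_stall_ms = 51_220_497_961   # cumulative io_stall reported for tempdb
uptime_hours = 1433            # hours since tempdb was last created

stall_hours = io_stall_ms / MS_PER_HOUR
print(f"io_stall = {stall_hours:.0f} hours of cumulative wait")        # ~14228

# io_stall sums the wait of every request, and requests overlap, so the
# average number of simultaneously outstanding I/Os implied by the total is:
avg_outstanding = stall_hours / uptime_hours
print(f"average concurrent outstanding I/Os = {avg_outstanding:.1f}")  # ~9.9
```

So roughly ten I/O requests waiting at any given moment would account for the whole figure, which is entirely plausible on a busy file.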
Kind regards,
Chris
May 12, 2011 at 3:35 am
DMVs get cleared when the SQL service restarts. Some events will also clear DMVs, and some DMVs manage themselves to minimise contention; some will defer updates if resources are low.
With DMVs it's usually best to snapshot their contents at regular intervals into a user table and then do the manipulations there; this is always how I do it when I'm after consistent data or I'm troubleshooting.
Storage issues can be a pain. I'd suggest you monitor the read and write latency with Perfmon for each drive/LUN; watching this at 1- or 2-second granularity will tell you instantly if you have a problem. These counters measure how long an I/O takes to complete: ideally you'd see almost 0 for reads and under 10 ms for writes, and watch out for peaks versus the average. You'll probably find the Perfmon counters better than the DMVs here.
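The snapshot-and-diff idea above can be sketched in Python. The dicts below stand in for two rows of sys.dm_io_virtual_file_stats for the same file, captured some minutes apart; the numbers are made up, and `interval_latency` is a name of my own:

```python
# Two snapshots of the same file's cumulative counters, taken some time apart.
snap1 = {"num_of_reads": 10_000, "io_stall_read_ms": 85_000,
         "num_of_writes": 4_000, "io_stall_write_ms": 30_000}
snap2 = {"num_of_reads": 16_000, "io_stall_read_ms": 205_000,
         "num_of_writes": 9_000, "io_stall_write_ms": 70_000}

def interval_latency(before, after):
    """Average ms per I/O during the interval between two snapshots."""
    d = {k: after[k] - before[k] for k in before}
    read_ms = d["io_stall_read_ms"] / d["num_of_reads"] if d["num_of_reads"] else 0.0
    write_ms = d["io_stall_write_ms"] / d["num_of_writes"] if d["num_of_writes"] else 0.0
    return read_ms, write_ms

read_ms, write_ms = interval_latency(snap1, snap2)
print(f"read latency  {read_ms:.1f} ms/IO")   # (205000-85000)/6000 = 20.0
print(f"write latency {write_ms:.1f} ms/IO")  # (70000-30000)/5000 = 8.0
```

The point is that only the deltas between snapshots are meaningful; the cumulative values on their own average in every idle period since the last restart.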
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
May 12, 2011 at 3:43 am
You don't say what sort of storage you have or why you think you have a problem; what appear to be I/O problems can actually manifest in many different ways. You should monitor page lookups and page life expectancy, which give an indication of how busy your buffer cache is. If your storage is fibre then you should investigate bandwidth on the HBAs. I suffered saturation on twin 2 Gb HBAs; the effect on a busy application isn't pretty and is difficult to figure out initially.
For DAS and internal disks you should monitor disk idle time: if idle time is constantly almost 0 then you don't have enough spindles (this type of counter output is less useful on a SAN). If you're using Ethernet-connected storage and your connection is 1 Gb, be aware that's really slow (narrow bandwidth); hopefully any such storage would be using 10 GbE.
If your SAN is using shared storage/spindles then welcome to a whole world of pain!
May 12, 2011 at 2:39 pm
Hi there,
Thanks for the comments, appreciate your time. We're on a SAN but that's as far as I know. All the server stuff is housed centrally so as SQL Server administrators we don't see that side of the equation. I've asked for some specs from the hardware guys so we'll see what they say.
I'll set up a job to snapshot the DMV in question for comparison. It's an OLAP system, so there are some fairly regular data moves happening that chew up a lot of resource. CPU-wise we hardly ever spike at 100%, and memory use is fairly constant at 80% to 90%, so I'm looking at I/O as a potential bottleneck.
Thanks again for your comments, I'll see what comes up!
Kind regards,
Chris
May 12, 2011 at 3:21 pm
DBAs really need to work with storage guys; it'll save so much time and money. I'm lucky that I've been able to work with storage, and as I got a couple of SNIA certifications in Fibre Channel architecture, at least storage teams will talk to me, but it's still a battle at times. If you look on my blog and website I've got a number of posts about performance testing and benchmarking storage (SANs). In one instance I made my point by managing to swamp the SAN backplane with I/O and take out a mass of applications; that proved that shared spindles with SQL Server are not the best idea! I'd tried the nice way without success. Nearly four years on, after a couple of months of working with the SAN vendor to establish benchmarks and performance, we have a dedicated SAN for our main application to ensure consistent performance (we are truly 24x7, being a worldwide company).
May 15, 2011 at 5:19 pm
Hey there,
Thanks for your comments. Yes, it would be magic if the hardware folks and DBAs worked in the same spot; it would make information gathering so much quicker! I checked out your blog and had a read of "Storage the Final Frontier". I'm not a storage guru and I was able to understand it, so thanks =)
I ran Perfmon over the weekend, and on the SAN drive that stores our MDF data files, Avg. Disk sec/Write and Avg. Disk sec/Read were averaging around 3 seconds during busy times (118 spikes over 1 second out of 15915 Perfmon polls). Not so flash, so something to chat with the hardware people about. Looking at those io_stall metrics again, I took a snapshot on the 13th and another on the 16th; this period covers the weekend, when our weekly load process runs. As you suggested, I did this so I could view the change.
Just looking at one of the data files (tempdb), the io_stall value grew by 2874916765 ms (798 hours) between those two dates. Now, we've got 8 CPUs, so I figure if that's actual time SQL Server was waiting on I/O, perhaps it's all adding up in parallel, i.e. thread one is racking up I/O stall while thread two is as well, etc.
Is my thinking right, or is there something else I need to take into account? 798 hours for that io_stall metric over a 68-hour period just seems rather suspect.
Kind regards,
Chris
May 16, 2011 at 9:03 am
What I suggest is that you download and install the Performance Dashboard. This was produced for SQL 2005, but with a couple of fixes it will run in SQL 2008. I must put the modified code on my website for download; anyway, I hunted for someone else who'd done the same! So if you download the dashboard, put the RDLs somewhere locally and get the SQL code, which puts some procs and functions into msdb; then you can run the whole thing from SSMS through custom reports. (Depending upon your install you may have to run reportviewer.exe, but if you have BIDS installed all will be fine.)
This is the original MS code:
http://sql-articles.com/blogs/utilizing-sql-2005-performance-dashboard-in-sql-server-2008/
and this is a modified SQL script to make the procs work.
I haven't tested this... I had to make a number of changes to the original 2005 code to get it to run in 2005, let alone 2008, but it is very, very useful if you don't have any other monitoring and you're not 100% sure how to write your own DMV queries.
This will help you drill into your server performance; check the waits. I run my tempdb with multiple data files and trace flag 1118. (There's some debate about whether trace flag 1118 does anything, but I run with it.)
May 16, 2011 at 3:31 pm
I ignore io_stall and only look at the separate read and write stalls. The combined io_stall number is rather out of whack with those two and gives me no useful data. There are different expectations for read and write service times ('service time' being a common term for the stall values): 20 ms for reads and 2 ms for writes, so it's far more useful to view the two metrics separately.
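Eddie's rule of thumb can be expressed as a tiny check. The thresholds come from the post; the helper name `flag_latency` is my own:

```python
READ_MS_TARGET = 20.0   # expected read service time, per the post
WRITE_MS_TARGET = 2.0   # expected write service time, per the post

def flag_latency(read_ms, write_ms):
    """Return human-readable warnings for out-of-range service times."""
    warnings = []
    if read_ms > READ_MS_TARGET:
        warnings.append(f"reads slow: {read_ms} ms > {READ_MS_TARGET} ms")
    if write_ms > WRITE_MS_TARGET:
        warnings.append(f"writes slow: {write_ms} ms > {WRITE_MS_TARGET} ms")
    return warnings

print(flag_latency(35.0, 1.5))  # only reads are out of range here
```

Writes are held to the tighter target because they normally land in the array's write cache rather than on spindles.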
-Eddie
Eddie Wuerch
MCM: SQL
May 16, 2011 at 3:37 pm
What's also important is not to look at this DMF over the whole time the server has been up; idle periods flatten the spikes out. For this data to be useful, store the results into a table on a regular schedule (I use 30 minutes). Then you can compare any two time slices to see how things were behaving when you thought you were having an I/O problem.
Table:
SET ANSI_NULLS, QUOTED_IDENTIFIER ON
GO
CREATE TABLE dbo.FileStats(
SampleTime datetime NOT NULL,
database_id smallint NOT NULL,
file_id smallint NOT NULL,
sample_ms int NOT NULL,
num_of_reads bigint NOT NULL,
num_of_bytes_read bigint NOT NULL,
io_stall_read_ms bigint NOT NULL,
num_of_writes bigint NOT NULL,
num_of_bytes_written bigint NOT NULL,
io_stall_write_ms bigint NOT NULL,
io_stall bigint NOT NULL,
size_on_disk_bytes bigint NOT NULL,
CONSTRAINT pkFileStats_SampleTime_DBID_FileID PRIMARY KEY CLUSTERED(SampleTime, database_id, file_id)
)
Job step:
-- Job Step 2 - other info: I/O statistics
INSERT dbo.FileStats(SampleTime, database_id, file_id, sample_ms, num_of_reads, num_of_bytes_read,
io_stall_read_ms, num_of_writes, num_of_bytes_written, io_stall_write_ms, io_stall, size_on_disk_bytes)
SELECT getdate(), database_id, file_id, sample_ms, num_of_reads, num_of_bytes_read,
io_stall_read_ms, num_of_writes, num_of_bytes_written, io_stall_write_ms, io_stall, size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(null, null)
Comparison query:
WITH agg
AS
(
SELECT database_id AS dbid, file_id AS fileid, SampleTime, num_of_reads AS NumberReads, num_of_writes AS NumberWrites,
num_of_bytes_read AS BytesRead, num_of_bytes_written AS BytesWritten, io_stall AS IOStallMS, io_stall_read_ms AS IOStallReadMS,
io_stall_write_ms AS IOStallWriteMS, datediff(minute, '1/1/2007 00:00:00', SampleTime) AS DateRef,
DENSE_RANK() OVER (PARTITION BY database_id, file_id ORDER BY database_id, file_id, SampleTime) AS rn
FROM dbo.FileStats fs
WHERE
-- You can choose between looking at a specific time range with the first SampleTime clause, or use the second to
-- view the last X minutes
--SampleTime BETWEEN '11/26/2007 08:00:00' AND '11/26/2007 14:00:00' --between '11/16/2007 00:00:00' AND '11/16/2007 00:45:00'
SampleTime > dateadd(minute, -90, getdate())
AND NOT EXISTS(SELECT * FROM sys.databases d WHERE fs.database_id = d.database_id AND (d.state_desc != 'ONLINE' OR d.name IN ('master', 'model', 'msdb')))
)
SELECT CONVERT(char(18), SampleTime, 100), db_name(dbid), type_desc, Reads, SvcTime, BytesRead, BytesWritten, BytesPerRead, Writes, BytesPerWrite,
ReadSvcTime, WriteSvcTime, ReadsPerSec,WritesPerSec, WriteBytesPerSec
FROM (
SELECT t1.dbid, --t1.fileid,
mf.type_desc,
CONVERT(smalldatetime, t2.SampleTime) AS SampleTime,
SUM(t2.NumberReads - t1.NumberReads) AS Reads, --t2.DriveLetter,
SUM(t2.NumberReads - t1.NumberReads) / avg(DATEDIFF(s, t1.SampleTime, t2.SampleTime)) AS ReadsPerSec,
SUM(t2.NumberWrites - t1.NumberWrites) / avg(DATEDIFF(s, t1.SampleTime, t2.SampleTime)) AS WritesPerSec,
SUM(t2.IOStallMS - t1.IOStallMS) AS WaitTime,
SUM(t2.IOStallReadMS - t1.IOStallReadMS) AS WaitTimeReads,
SUM(t2.IOStallWriteMS - t1.IOStallWriteMS) AS WaitTimeWrites,
CASE WHEN SUM(t2.NumberReads - t1.NumberReads) < 1000 THEN 0
ELSE CONVERT(decimal(6,1), SUM(t2.IOStallMS - t1.IOStallMS)/ CONVERT(decimal(12,4), SUM(t2.NumberReads - t1.NumberReads))) END AS SvcTime,
SUM(t2.BytesRead - t1.BytesRead) AS BytesRead, SUM(t2.BytesWritten - t1.BytesWritten) AS BytesWritten,
CASE WHEN SUM(t2.NumberReads - t1.NumberReads) < 1000 THEN 0
ELSE CONVERT(decimal(6,1), SUM(t2.IOStallReadMS - t1.IOStallReadMS)/ CONVERT(decimal(12,4), SUM(t2.NumberReads - t1.NumberReads))) END AS ReadSvcTime,
CASE WHEN SUM(t2.NumberWrites - t1.NumberWrites) < 1000 THEN 0
ELSE CONVERT(decimal(6,1), SUM(t2.IOStallWriteMS - t1.IOStallWriteMS)/ CONVERT(decimal(12,4), SUM(t2.NumberWrites - t1.NumberWrites))) END AS WriteSvcTime,
CASE WHEN SUM(t2.NumberReads - t1.NumberReads) = 0 THEN 0
ELSE SUM(t2.BytesRead - t1.BytesRead) / SUM(t2.NumberReads - t1.NumberReads) END AS BytesPerRead,
SUM(t2.NumberWrites - t1.NumberWrites) AS Writes,
CASE WHEN SUM(t2.NumberWrites - t1.NumberWrites) = 0 THEN 0
ELSE SUM(t2.BytesWritten - t1.BytesWritten) / SUM(t2.NumberWrites - t1.NumberWrites) END AS BytesPerWrite,
SUM(t2.BytesWritten - t1.BytesWritten) / avg(DATEDIFF(s, t1.SampleTime, t2.SampleTime)) AS WriteBytesPerSec
FROM agg t1 INNER JOIN
agg t2 ON t1.dbid = t2.dbid AND t1.fileid = t2.fileid AND t1.rn = t2.rn - 1 AND t1.numberReads <= t2.NumberReads INNER JOIN
sys.master_files mf ON t1.dbid = mf.database_id AND t1.fileid = mf.file_id
GROUP BY CONVERT(smalldatetime, t2.SampleTime), t1.dbid, type_desc
) a
ORDER BY type_desc, dbid, SampleTime
May 16, 2011 at 5:53 pm
Hi guys,
Thanks for the replies. I've successfully installed the 2005 performance reports; lots of numbers in that lot, so I'll need to have a good look into them. Thanks for the links etc 🙂
Also thanks for the comments about separating io_stall into io_stall_read_ms and io_stall_write_ms. I've got a job pulling data from that DMV every hour, but I wanted to see what the total was over a weekend when our weekly loads run. Using the example I provided above, where over the weekend tempdb had a total of 798 hours of io_stall, 797 hours of that was made up of io_stall_write_ms over a 68-hour period.
Okay, so that tells me (correct me if I'm wrong) that for some reason tempdb is getting hammered during the weekend and it's performing poorly on writes? I guess the next logical step is to examine the weekend hour by hour to find the time slot, then figure out what is running in that slot.
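The hour-by-hour hunt described here reduces to diffing consecutive cumulative samples and picking the interval with the biggest jump. A minimal sketch, with made-up hourly io_stall_write_ms values:

```python
# Hypothetical hourly tempdb samples: (hour, cumulative io_stall_write_ms).
samples = [(0, 0), (1, 40_000), (2, 95_000), (3, 980_000), (4, 1_010_000)]

# Delta between consecutive samples gives the per-interval write stall.
deltas = [(h2, s2 - s1) for (h1, s1), (h2, s2) in zip(samples, samples[1:])]
worst_hour, worst_stall = max(deltas, key=lambda d: d[1])
print(f"worst interval ends at hour {worst_hour}: {worst_stall} ms of write stall")
```

Once the worst interval is identified, cross-referencing it against the job schedule (or a trace from that window) should name the culprit.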
Kind regards,
Chris