May 22, 2009 at 3:39 pm
I have a period of time during which there were over 5,000,000 IOs. Roughly how much is that in GB of data?
Thanks
May 23, 2009 at 4:00 am
Where are you measuring those IOs from?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 23, 2009 at 9:09 am
The SQL tool shows this value, which seems very high to me.
May 23, 2009 at 9:46 am
Which SQL tool?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 23, 2009 at 10:11 am
TRACEY (5/22/2009)
I have a period of time during which there were over 5,000,000 IOs. Roughly how much is that in GB of data?
Thanks
I am not sure it will be 100% accurate, but if you use the perfmon counter PhysicalDisk: Avg. Disk Bytes/Transfer for the disks involved and multiply that by 5,000,000, it should give you an idea (readings taken during the same time period).
Alternatively, you can multiply PhysicalDisk: Avg. Disk Bytes/Transfer by PhysicalDisk: Avg. Disk Transfers/sec to see how much data is being pumped through per second.
There is also sys.dm_io_virtual_file_stats, or good old fn_virtualfilestats, which will give you figures on I/O activity (including data quantities) on a per-database-file basis.
That's where I would look, anyway, and it would be interesting to see how the figures compare with your 'tool'.
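As a rough sketch of the arithmetic: at an average of 8 KB per transfer, 5,000,000 IOs would be about 38 GB; at 64 KB per transfer, about 305 GB. The DMV route can be sketched like this (SQL Server 2005 and later; columns as documented in Books Online, the MB conversions are just for readability):

```sql
-- Cumulative I/O per database file since SQL Server last started.
-- NULL, NULL = all databases, all files.
SELECT DB_NAME(vfs.database_id)              AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_bytes_read / 1048576.0    AS mb_read,
       vfs.num_of_writes,
       vfs.num_of_bytes_written / 1048576.0 AS mb_written,
       vfs.io_stall                          AS total_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;
```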
---------------------------------------------------------------------
May 23, 2009 at 10:13 am
Are you using a third-party SQL tool, perf counters, a query from Management Studio, or something else to get that output? What kind of drives do you have in your system (SCSI or IDE)? What kind of RAID environment do you have?
May 23, 2009 at 8:01 pm
The Idera SQL tool gives these measurements. There are only two disks on the SAN being used, Data and Logs, so I need to find out from the SAN guys what RAID level is in use. I need to see how many LUNs and spindles there are to determine the number of IOPS produced and whether the SAN can handle this amount, as the queue length is also very high.
I will multiply PhysicalDisk: Avg. Disk Bytes/Transfer by PhysicalDisk: Avg. Disk Transfers/sec to see how much data is being pumped through, and then compare that with the I/O reported by the tool; it will be very interesting to see.
Good idea.
May 24, 2009 at 2:00 am
TRACEY (5/23/2009)
The Idera SQL tool gives these measurements.
Check its documentation to see what unit that value is measured in. It may be bytes, disk sectors, SQL pages, or a few other things.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2009 at 4:30 am
A great way to measure I/O statistics is to use the dynamic management view sys.dm_io_virtual_file_stats, which will give you detailed statistics server-wide. (Our SAN people use data collected from this view regularly.)
One great advantage of collecting statistics this way is that everyone here has access to the documentation for it, and can therefore help more easily with any questions you might have.
See http://msdn.microsoft.com/en-us/library/ms190326(SQL.90).aspx for further details.
(SQL 2000 users can use the fn_virtualfilestats function instead; it is still available in 2005, but the DMV is preferred.)
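For reference, a minimal sketch of the older function mentioned above (passing -1, -1 returns all databases and files on SQL 2000; DEFAULT also works on 2005):

```sql
-- Cumulative file I/O via the SQL 2000-era function.
SELECT DbId, FileId, NumberReads, NumberWrites,
       BytesRead, BytesWritten, IoStallMS
FROM ::fn_virtualfilestats(-1, -1);
```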
Cheers,
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 24, 2009 at 8:20 am
Is this measured over the entire time since the server was last started?
dynamic view sys.dm_io_virtual_file_stats
So if it hasn't been shut down for 60 days, would I have to divide this amount by 60 or so?
May 24, 2009 at 11:57 am
It's cumulative since the server started. I wouldn't just divide by the number of days; that assumes the IO load is identical every day (which it may not be). What I would suggest is that you take 'snapshots' of that view each day, inserting the values that are of interest into a table, along with the date. Then you can do the required maths to see what the IO stats were each day.
If you want to see at a more granular level, then run the inserts more often.
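A minimal sketch of that snapshot approach (the table and column names here are illustrative, not from the thread; the INSERT would be run on a schedule, e.g. via a SQL Agent job):

```sql
-- Illustrative snapshot table; run the INSERT daily (or more often
-- for finer granularity).
CREATE TABLE dbo.IoStatsSnapshot
(
    capture_time         datetime NOT NULL DEFAULT GETDATE(),
    database_id          smallint NOT NULL,
    file_id              smallint NOT NULL,
    num_of_bytes_read    bigint   NOT NULL,
    num_of_bytes_written bigint   NOT NULL
);

INSERT dbo.IoStatsSnapshot
    (database_id, file_id, num_of_bytes_read, num_of_bytes_written)
SELECT database_id, file_id, num_of_bytes_read, num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

-- Per-interval figures are then the current cumulative values minus
-- those in the previous snapshot.
```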
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2009 at 12:59 pm
Thanks, that's a good idea. Is there a way to clear them out without restarting SQL Server?
May 24, 2009 at 1:28 pm
Not that I'm aware of.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2009 at 5:17 pm
Tracey,
We take snapshots every five minutes as a matter of routine. When collecting data for a SAN upgrade for example, we capture snapshots every 10s for 24 hours on our peak days of the week.
Sadly there isn't a reset button for these stats via DBCC SQLPERF or anything else, which can be a pain when the numbers 'roll over'.
Cheers
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi