May 18, 2011 at 12:33 pm
I'm measuring Disk Reads/sec and Disk Writes/sec to get a basic picture of my drive performance.
Where I'm having problems is understanding the scale in Perfmon.
I know I can set the counter to be multiplied by a number (e.g. multiply by 0.001 if the numbers are way too big). So Perfmon will show the measurement on any scale you want, but what I want to see is numbers that are realistic for my IOPS.
So I guess I need to know: what is the standard scale that IOPS for a drive are measured in (thousands per sec, tens of thousands per sec)?
May 18, 2011 at 1:04 pm
Best case for physical disk drives: 100-150 IOPS per drive.
SSDs put up much bigger numbers, with individual drives hitting 100,000 IOPS or better on the right benchmarketing loads.
What's more important are the service times:
ms/read - should be less than 20ms/read
ms/write - should be less than 2ms/write
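If you want to see those service times from inside SQL Server rather than Perfmon, here's a rough sketch using sys.dm_io_virtual_file_stats (SQL 2005 and later). The stall counters are cumulative since the instance started, so treat the averages as long-run figures:
[code]
-- Approximate ms/read and ms/write per database file (cumulative since instance start)
SELECT DB_NAME(vfs.database_id)                                   AS database_name,
       mf.physical_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_ms_per_read,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_ms_per_write
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_ms_per_read DESC;
[/code]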
-Eddie
Eddie Wuerch
MCM: SQL
May 18, 2011 at 1:46 pm
Ha! Welcome to a world of pain, with figures so largely meaningless you can almost prove anything!!
An actual IOPS figure is a calculation based on the rotational speed (and seek time) of a disk, or a figure claimed by the manufacturer for SSDs.
The figures vary for sequential and random operations. The size of the data block being read or written is also very important. Most SSDs quote 4KB IO sizes, but SQL Server doesn't use 4KB; in read-ahead it reads up to 1024KB (maybe more now), and the larger the block, the slower SSDs perform.
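As a back-of-envelope example, here's the classic calculation for a rotating disk (the rpm and seek figures below are assumed, typical spec-sheet numbers, not measured ones):
[code]
-- Rough theoretical random IOPS for a single rotating disk (assumed figures)
DECLARE @rpm int, @avg_seek_ms decimal(5,2);
SET @rpm = 15000;        -- 15k rpm drive
SET @avg_seek_ms = 3.5;  -- typical quoted average seek time

DECLARE @rot_latency_ms decimal(5,2);
SET @rot_latency_ms = 60000.0 / (2 * @rpm);  -- half a revolution = 2 ms

SELECT 1000.0 / (@avg_seek_ms + @rot_latency_ms) AS approx_random_iops;  -- roughly 180
[/code]
Run the same sums with 7200 rpm and an 8-9ms seek and you land somewhere around 80, which is why the 100-150 IOPS per spindle rule of thumb above is about right.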
Then you might have the IOPS capacity, but you still have to get the data to the disks/drives - bandwidth - so more calculations. Then your server may never be able to physically process the data to achieve those sorts of throughput figures; you get into all sorts of stuff with files, processors and cache which muddies the water even more.
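For example (assumed numbers again, just to show the kind of calculation involved):
[code]
-- Bandwidth needed to sustain a given IOPS figure at a given block size (assumed numbers)
DECLARE @iops int, @block_kb int;
SET @iops = 10000;
SET @block_kb = 64;
SELECT (@iops * @block_kb) / 1024.0 AS mb_per_sec;  -- ~625 MB/sec
[/code]
That ~625 MB/sec is already more than a single 4Gbit fibre channel path (very roughly 400 MB/sec) will carry, so the plumbing can become the ceiling long before the spindles do.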
Then there are lots of tools that will show you getting all sorts of performance - until you try a backup or restore!
There are too many to catalogue, but I have a number of posts about storage on my blog and some benchmarking tests you can use on my website.
I have the honour of being able to swamp the backplane of an expensive SAN with IO and bring a number of applications to their knees! (e.g. shared storage isn't always good <grin>)
The SQL 2000 performance tuning handbook had a couple of good chapters about IOPS.
I personally expect latency for writes to a t-log to be almost zero, and I look for data writes to be under 5ms and reads to be very quick. I would complain at 20ms latency (but it does depend upon your storage and application).
I did some scaling tests with a major SAN vendor and I was hard pushed to hit 60,000 IOPS from a single SQL Server (using just SQL, no tools). I was trying to swamp 8GB of fibre bandwidth - didn't make it!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/