December 23, 2009 at 2:03 pm
What is the block size for SQL Server reads and writes? SQL Server data is stored in 8KB pages, with 8 pages per extent. Does SQL Server read in different block sizes than it writes? We are trying to determine whether it is necessary to run SQLIO with block sizes of 8KB, 16KB, 32KB, 64KB, 128KB and 256KB, or whether we should only run with 8KB and/or 64KB.
When reviewing the IOs/sec results, should those results be divided by the total number of spindles in the array or only by the usable spindles? For example, with a RAID 10 consisting of 18 + 18 drives, would we divide the IOs/sec for a write test by 18 or by 36? For a read test, would we divide by 36, since both sides of the mirror are read in a RAID 10?
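The thread does not settle this question, but the arithmetic behind it can be sketched. A common rule of thumb (an assumption here, not a conclusion from this thread) is that each logical write to RAID 10 produces two physical writes, one per mirror side, while reads can be serviced from either side. The drive counts and IOPS figures below are hypothetical:

```python
# Hypothetical numbers: a RAID 10 array of 18 + 18 drives (36 spindles
# total, 18 usable for capacity) and a measured SQLIO write result.
total_spindles = 36
measured_write_iops = 5400

# Rule-of-thumb assumption: RAID 10 write penalty of 2, because every
# logical write lands on both sides of the mirror.
physical_write_iops = measured_write_iops * 2

# Per-spindle load for the write test, counting all 36 spindles,
# since every spindle performs physical work during writes:
writes_per_spindle = physical_write_iops / total_spindles
print(writes_per_spindle)  # 300.0

# For a read test, all 36 spindles can serve reads independently:
measured_read_iops = 10800
reads_per_spindle = measured_read_iops / total_spindles
print(reads_per_spindle)   # 300.0
```

Under this assumption, dividing measured write IOs/sec by 18 (usable spindles) and dividing doubled physical write IOs/sec by 36 give the same per-spindle figure, which is why both conventions appear in practice.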
Thanks, Dave
December 23, 2009 at 3:21 pm
To cover the first part of your question: SQL Server retrieves data in extents, which are 64KB. This is why it is best to format your data drives with a 64KB block size.
If you are trying to mimic an existing system with SQLIO, run Perfmon with the Avg. Disk Bytes/Read and Avg. Disk Bytes/Write counters to get an idea of transfer sizes. In my experience, 4KB, 8KB and 64KB cover it.
---------------------------------------------------------------------
December 23, 2009 at 3:32 pm
Thanks. I'll do that. Does this mean there is no specific block size for reads vs. writes? It can be anywhere from 8, 16, 32, 64, 128, etc. I wasn't sure if it would follow the extent size (64KB). Our drives are formatted at 64KB.
Dave
December 23, 2009 at 4:52 pm
You should see mostly 64KB reads and writes on your data drive. Activity such as database backups can come in larger chunks.
Activity on log drives would be mostly 4KB sequential writes; on tempdb, 8KB random reads and writes.
The above are not hard and fast rules, just what I have observed.
When I used SQLIO I didn't test above 64K.
---------------------------------------------------------------------
December 23, 2009 at 6:31 pm
I test all drives with 8KB, 16KB, 32KB, 64KB, 128KB and 256KB block sizes for sequential read, sequential write, random read, and random write for a total of 24 tests on each drive.
I use at least an 8 GB test file to make sure it overcomes the effect of disk cache on the tests. I run it with 16 IO threads that allow 8 pending IOs each to produce a disk queue length of 128. I use a standard command script to run each test suite and save the output to a text file.
Many people suggest that you run each test for several minutes, but I've found that 20 seconds per test gives results that are not much different from those of longer runs. With a 20 second run time for each test, the full suite for a disk completes in about 9 minutes, including a dummy test at the beginning to create the test file.
December 24, 2009 at 7:43 am
We're running our tests for 120 seconds per test as follows.
Random Write - 1 thread, 1 outstanding IO, 8KB, -BH -LS
Random Write - 8 threads, 1 outstanding IO, 8KB, -BH -LS
Random Write - 1 thread, 8 outstanding IOs, 8KB, -BH -LS
Random Write - 8 threads, 8 outstanding IOs, 8KB, -BH -LS
We then repeat the same set of tests for 16KB, 32KB, 64KB, 128KB and 256KB.
We repeat all of the above for sequential writes, random reads and sequential reads.
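Multiplying out the matrix described above gives the size of this test suite for a single data drive (the 22-hour figure mentioned below covers all data, log and tempdb drives together; the numbers here are just the per-drive arithmetic):

```python
# Multiply out the per-drive test matrix described above.
thread_io_combos = 4   # (1,1), (8,1), (1,8), (8,8) threads x outstanding IOs
block_sizes = 6        # 8KB through 256KB
patterns = 4           # random/sequential x read/write
seconds_per_test = 120

tests_per_drive = thread_io_combos * block_sizes * patterns
print(tests_per_drive)  # 96

runtime_hours = tests_per_drive * seconds_per_test / 3600
print(runtime_hours)    # 3.2
```

So each data drive takes roughly 3.2 hours of pure test time at 120 seconds per test, before file-creation and setup overhead.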
This is done for the data drives. For the log drives we concentrate only on sequential reads and writes. Our first set of tests for all data, log and tempdb drives took 22 hours. We also made sure that each test file was at least the size of the database files.
We've been working with someone from the Microsoft Performance Team, but they don't have much documentation or many recommendations on how to use SQLIO or interpret its results. They are now trying to determine what block sizes pertain to reads and writes, and whether you factor in all spindles in a RAID 10 for write activity or just those on one side of the mirror.
Dave