May 2, 2014 at 1:06 pm
Hello,
If SQL Server writes mostly in 64K blocks, why would one want to test with any parameter other than -b64?
Thanks
Scott
May 2, 2014 at 2:13 pm
I should know better by now than to post a question on a Friday afternoon... :-)
May 2, 2014 at 11:10 pm
Depending on the operation (backups, transaction log writes, etc.), SQL Server issues several different IO sizes, so it is worth testing more than one block size.
I test with block sizes of 8K, 16K, 32K, 64K, 128K, and 256K for sequential read, sequential write, random read, and random write.
Depending on the disk size, I use a test file size of at least 50GB to make sure I am overcoming the impact of disk controller or SAN caching.
I am usually most interested in the 64K random read and random write performance, because many IO systems struggle to get good performance with these. I set up a new server recently, and saw a huge 64K random read and random write performance improvement when I reformatted a volume from the default 4K cluster size to 64K.
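That test matrix (block sizes of 8K through 256K, each run as sequential read, sequential write, random read, and random write) can be scripted rather than run by hand. Below is a minimal sketch using SQLIO-style parameters, matching the `-b64` flag from the original question; the file name `Testfile.dat` and the duration, thread, and queue-depth values are illustrative assumptions, not values from the thread. The loop only prints each command (dry run) so you can review the sweep before running it.

```shell
# Dry-run sweep of an SQLIO-style block-size test matrix.
# Assumptions (not from the thread): Testfile.dat is a pre-created
# large test file, 120-second runs, 2 threads, 32 outstanding IOs.
# Remove the leading "echo" to actually execute each test.
for bs in 8 16 32 64 128 256; do          # block size in KB
  for pattern in sequential random; do    # access pattern
    for op in R W; do                     # read or write
      echo sqlio -k$op -s120 -t2 -o32 -f$pattern -b$bs -BH -LS Testfile.dat
    done
  done
done
```

This produces 24 commands (6 block sizes x 2 patterns x 2 operations), covering the full matrix described above in one pass.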
May 3, 2014 at 5:32 pm
Ok, thanks much.
Scott