November 15, 2012 at 3:32 pm
Running some tests using IOMeter (preferred its GUI over SQLIOsim) and using some of the predefined options; here are the results:
On a 12-disk RAID 5 array with 1 hot-swappable spare; this uses 1 worker, 40,000,000 sectors (20GB file)
MAX IOPS - uses 4KB transfer request size, 100% reads, 100% sequential dist.:
TOT IO/sec: 121,432
Total MB/sec: 479
Average IO Response Time: 0.13
MAX IO Response 1.85
CPU: 26.5
MAX IOPS - uses 4KB transfer request size, 100% reads, 100% sequential dist.:
TOT IO/sec: 58,294
Total MB/sec: 227
Average IO Response Time: 0.27
MAX IO Response 86.9
CPU: 12.1
MAX IOPS with 90% read, 10% write ratio - uses 4KB transfer request size, 90% reads/10% writes, sequential dist.:
TOT IO/sec: 7000
Total MB/sec: 90
Average IO Response Time: 2.28
MAX IO Response 37.2
CPU: 27.6
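As a quick sanity check on figures like these (my own arithmetic, not from the thread), throughput and IOPS should agree for a fixed transfer size:

```python
# Sanity check: MB/s should roughly equal IOPS * transfer size.
# Figures below are from the 4KB, 100% sequential read run above.
def mb_per_sec(iops, block_kb):
    """Throughput implied by an IOPS figure at a given block size (MB/s)."""
    return iops * block_kb / 1024.0  # KB/s -> MB/s

implied = mb_per_sec(121432, 4)
print(round(implied, 1))  # ~474 MB/s, close to the reported 479 MB/s
```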
I don't really know if this is good or not; when I ran it against one of the other RAID 10 arrays, the numbers were much slower...so I assume this is good. Is this considered OK?
______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience
November 15, 2012 at 3:42 pm
MyDoggieJessie (11/15/2012)
Running some tests using IOMeter (preferred the GUI to this one over SQLIOsim)
Errm, for benchmarking your I/O capacity (which is what you're doing here) you need to be using SQLIO or IOMeter.
SQLIOsim is specifically for imitating SQL Server I/O patterns to stress test your storage solution 😉
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 15, 2012 at 7:59 pm
Sorry, I did mean SQLIO - but for these tests I am only using IOMeter.
Results below for RAID 5
MAX IOPS - uses 4KB transfer request size, 100% reads, 100% sequential dist.:
TOT IO/sec: 121,432
Total MB/sec: 479
Average IO Response Time: 0.13
MAX IO Response 1.85
CPU: 26.5
MAX IOPS with 90% read, 10% write ratio - uses 4KB transfer request size, 90% reads/10% writes, sequential dist.:
TOT IO/sec: 7000
Total MB/sec: 90
Average IO Response Time: 2.28
MAX IO Response 37.2
CPU: 27.6
I am very surprised by the outcome on the RAID 10, as I thought the performance would be slower:
MAX IOPS - uses 4KB transfer request size, 100% reads, 100% sequential dist.:
TOT IO/sec: 69,445
Total MB/sec: 271
Average IO Response Time: 0.23
MAX IO Response 10.77
CPU: 13.08
MAX IOPS with 90% read, 10% write ratio - uses 4KB transfer request size, 90% reads/10% writes, sequential dist.:
TOT IO/sec: 23,565
Total MB/sec: 92
Average IO Response Time: 0.68
MAX IO Response 49.9
CPU: 3.02
So it would seem to me that with RAID 5, the more writes you have, the more IOPS you lose due to the performance hit of maintaining parity across the disks; the main benefit of RAID 5 would come if and only if you could be sure writes were kept to an absolute minimum, or for read-only files altogether.
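That intuition matches the classic small-write penalty model (a textbook sketch, not from the thread; the per-spindle IOPS figure is an assumption): a RAID 5 random write costs four backend I/Os (read data, read parity, write data, write parity) while a RAID 10 write costs two mirrored writes.

```python
# Classic write-penalty model: front-end IOPS a raw backend budget supports.
# RAID 5 small random write = 4 backend I/Os; RAID 10 write = 2.
def effective_iops(raw_iops, read_frac, write_penalty):
    """Front-end IOPS for a given read/write mix and per-write penalty."""
    write_frac = 1.0 - read_frac
    return raw_iops / (read_frac + write_frac * write_penalty)

raw = 12 * 180  # assumed: 12 spindles at ~180 random IOPS each
print(round(effective_iops(raw, 0.9, 4)))  # RAID 5, 90/10 mix  -> 1662
print(round(effective_iops(raw, 0.9, 2)))  # RAID 10, 90/10 mix -> 1964
```

Even at only 10% writes the penalty shows; at 50/50 the gap widens considerably.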
At our company, we will have replicated data being written to this new storage array (along with other minor writes from custom tables used for reporting, etc.), so I would think we would see the best performance by keeping this new array a RAID 10.
Would you agree with this?
______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience
November 16, 2012 at 5:38 am
You need to benchmark a little more than 4KB sequential, even for the T-log. Test the following:
8KB random read and write
4-60KB sequential write
64KB sequential read and write
up to 128KB sequential write
up to 256KB sequential read
Despite the advances in RAID 5, I would expect a well-defined RAID 10 array to have superior write performance. I'm sure others will jump in here.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 16, 2012 at 5:45 am
Again, as mentioned above, this storage array will only be used for data and index files (NO log or tempdb files)
______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience
November 16, 2012 at 6:08 am
MyDoggieJessie (11/16/2012)
Again as mentioned above this storage array will only be used for data and index files (NO log or tempdb files)
In that case, for data and index files you'll definitely need to test these:
8KB random read and write
64KB sequential read and write
up to 128KB sequential write
up to 256KB sequential read
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 16, 2012 at 6:14 am
Perry Whittle (11/16/2012)
Despite the advances in RAID5 i would expect a well defined RAID10 array to have superior write performance. I'm sure others will jump in here.
You've mentioned this before: "well defined". How can there be a well- vs. poorly-defined RAID array? What factors would need to be considered to ensure our Tech Services people have configured it properly?
Regarding write performance: RAID 10 will always beat out RAID 5 due to the way parity needs to be maintained across all the drives; the more spindles, the more overhead needed to write to each drive, whereas with RAID 10 there are fewer drive heads that need writing to. Isn't this correct?
______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience
November 16, 2012 at 6:26 am
8KB random read and write - 90% read, 10% write ratio
TOT IO/sec: 2245.5
Total MB/sec: 17.6
Average IO Response Time: 7.22
MAX IO Response 153.3
CPU: 30.9
64KB sequential read and write - 90% read, 10% write ratio
TOT IO/sec: 6337.5
Total MB/sec: 396.1
Average IO Response Time: 2.52
MAX IO Response 177.4
CPU: 36.8
up to 128KB sequential write - 90% read, 10% write ratio
TOT IO/sec: 3598
Total MB/sec: 440.7
Average IO Response Time: 4.75
MAX IO Response 206.7
CPU: 14.3
up to 256KB sequential read - 100% read ratio
TOT IO/sec: 4479.3
Total MB/sec: 1118.4
Average IO Response Time: 3.59
MAX IO Response 12.7
CPU: 22.81
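One more way to read these numbers (my own sketch, assuming the response times above are in milliseconds, which is IOMeter's unit): Little's Law says the average number of I/Os in flight equals IOPS times average response time.

```python
# Little's Law: average outstanding I/Os = IOPS * average response time.
# Assumes response times are reported in milliseconds.
def outstanding_ios(iops, avg_resp_ms):
    """Average queue depth implied by an IOPS/latency pair."""
    return iops * (avg_resp_ms / 1000.0)

# 8KB random 90/10 run above: 2245.5 IOPS at 7.22 ms average latency.
print(round(outstanding_ios(2245.5, 7.22), 1))  # ~16 I/Os in flight
```

A high queue depth with rising latency usually means the spindles, not the controller, are the bottleneck.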
______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience
November 16, 2012 at 6:28 am
The stripe size, specifically; the optimal stripe sizes for SQL Server are 64KB and 256KB.
As I mentioned above, you need to vary your tests, as data files are typically 8KB random-heavy.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 16, 2012 at 7:51 am
Verified that the stripe size is 64KB.
I think for the next few days I am going to try running it as RAID 5, then move things around, redo the array as RAID 10, and compare the results; this way I can more effectively gauge performance.
I'll post back then with some results
______________________________________________________________________________Never argue with an idiot; They'll drag you down to their level and beat you with experience
November 16, 2012 at 9:32 am
Perry Whittle (11/16/2012)
The stripe size, specifically; the optimal stripe sizes for SQL Server are 64KB and 256KB. As I mentioned above, you need to vary your tests, as data files are typically 8KB random-heavy.
Considering a situation where the log, data, and tempdb files are on separate drives, what stripe sizes are suggested for each?
November 16, 2012 at 3:25 pm
I usually use 256KB for data and backup drives and 64KB for log files
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉