September 17, 2009 at 8:03 am
If money were no object, would you go RAID 1+0 (i.e. RAID 10) across all drives: OS, system, data, log, backups?
September 17, 2009 at 8:29 am
Not necessarily. Firstly, I've never really seen a case where RAID 10 is necessary for the OS and system drives, so RAID 1 should be sufficient. And if a RAID 1 array for an OS or system drive can't cope with the IO demands on those disks then something's not right.
Secondly, just because I've got x amount of money to spend doesn't mean I should spend it for the sake of it. I still think it's essential that you spend accordingly, which means identifying the current IO demands of the system and extrapolating them forward to allow for future capacity.
If your IO demands are such that a RAID 1 array or a 3-disk RAID 5 array will meet them, then why spend extra money on a 6-disk RAID 10 configuration? It won't make things any faster.
So if we assume that a decent disk today can handle 200 IOs per second and your system isn't incurring more than 50 IOs per second, then a RAID 1 array should suffice.
You also have to look at the types of IOs. If most of your IOs are reads then a RAID 5 array may be just as efficient as, and cheaper than, a RAID 10.
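To put some rough numbers on that, here's a small back-of-the-envelope sketch (Python, purely illustrative - the 200 IOs/sec per-disk rating and the 90% read mix are assumptions, and the write penalties are the usual 2 for RAID 1/10 and 4 for RAID 5 that come up again further down):

    # Rough front-end IOPS an array can sustain, given a read/write mix.
    # Assumptions: 200 IOs/sec per disk and a 90% read workload (illustrative only).
    def front_end_iops(disks, disk_iops, read_fraction, write_penalty):
        backend_capacity = disks * disk_iops
        # each logical IO costs 1 back-end IO if it's a read,
        # or write_penalty back-end IOs if it's a write
        cost_per_logical_io = read_fraction + (1 - read_fraction) * write_penalty
        return backend_capacity / cost_per_logical_io

    for label, disks, penalty in [("RAID 1, 2 disks", 2, 2),
                                  ("RAID 5, 3 disks", 3, 4),
                                  ("RAID 10, 6 disks", 6, 2)]:
        print(label, "~", round(front_end_iops(disks, 200, 0.9, penalty)), "IOs/sec")

With a mostly-read workload the 3-disk RAID 5 comes out ahead of a plain mirror, so if you only need a few hundred IOs per second the 6-disk RAID 10 buys you nothing.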
Oh, and if money is really no object and you really need all the IO bandwidth you can get your hands on, take a look at solid state disks :cool:
September 17, 2009 at 8:53 am
SQLZ (9/17/2009)
Secondly, just because I've got x amount of money to spend doesn't mean I should spend it for the sake of it. I still think it's essential that you spend accordingly, which means identifying the current IO demands of the system and extrapolating them forward to allow for future capacity. If your IO demands are such that a RAID 1 array or a 3-disk RAID 5 array will meet them, then why spend extra money on a 6-disk RAID 10 configuration? It won't make things any faster.
I understand; I am looking to build a system that will last well into the future. But unfortunately we are in the situation where the money is there now but may not be there in the future. It's the reality of the situation. We may be able to go with RAID 50 as well - is that likely to offer any benefit over RAID 10?
September 17, 2009 at 9:50 am
I see your dilemma - you want to spend all the money you've been given now.
Let's assume for a minute that you are going to go with RAID 10. Even in this situation you're going to have to do some benchmark analysis in order to determine how many disks you're going to put into each RAID 10 array (will you have a 4-disk RAID 10 array for your data drive, or a 30-disk array?). The same goes for your log files and your tempdb files.
So for starters, using perfmon, start capturing data on your reads/sec and writes/sec for the physical disk counters. Hopefully you've already got your log, tempdb and data files on separate disks so that you can easily determine the demands for each of these. If your tempdb database is on the same disk as the other system databases you can't guarantee that any disk activity on that disk is entirely down to tempdb.
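As an aside, if you log those counters out to a CSV file (perfmon and typeperf can both do that), averaging them afterwards is trivial. A minimal sketch - the file name "disk_io.csv" is just a placeholder for whatever you actually capture:

    import csv

    # Rough sketch: average the Disk Reads/sec and Disk Writes/sec columns from a
    # perfmon/typeperf CSV log. "disk_io.csv" and the counter names are placeholders
    # for whatever you captured on your data, log and tempdb disks.
    def average_counters(path, substrings):
        totals, counts = {}, {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for col, value in row.items():
                    if col and any(s in col for s in substrings):
                        try:
                            totals[col] = totals.get(col, 0.0) + float(value)
                            counts[col] = counts.get(col, 0) + 1
                        except (TypeError, ValueError):
                            pass  # skip blank or non-numeric samples
        return {col: totals[col] / counts[col] for col in totals}

    for counter, avg in average_counters("disk_io.csv",
                                         ["Disk Reads/sec", "Disk Writes/sec"]).items():
        print(counter, round(avg, 1))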
Now once you've got your reads/sec and writes/sec, multiply the writes/sec by 2 for RAID 10 and RAID 1 (because each write incurs a second write to the mirror) and add the result to the reads/sec. For RAID 5 you have to multiply the writes by 4.
So, to give you an example. Let's assume you've captured reads/sec and writes/sec for your data drive and you get 300 reads/sec and 400 writes/sec.
Total IOs = 300 + (400 x 2) = 1100 IOs per second.
If one of your disks is rated to provide 200 IOs per second you therefore need: 1100 / 200 = 5.5 disks. Obviously you have to round this up to 6 disks (in a RAID 10 array) in order to handle 1100 IOs per second.
Repeat this for your tempdb file placement, log file, backup files, etc....
Note that whatever your disks are rated at is probably for sequential IOs. Take away a bit to allow for the fact that most IOs (apart from things like log files) are random, and take away 20% to allow for the fact that the ratings are theoretical. So if your disk is rated at 250 IOs per second then the practical rating is around 200 IOs per second, and to err on the side of caution I would use a practical rating of 180 IOs per second. The fuller a disk gets, the worse it performs, and so on.
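For what it's worth, here's the same arithmetic as a small sketch (Python, illustrative only - the 300 reads/sec and 400 writes/sec come from the example above, and the 200 vs 180 ratings are the theoretical vs practical figures just mentioned):

    import math

    # Back-end IOs/sec after the RAID write penalty, then disks needed at a given
    # per-disk rating. Write penalties as above: 2 for RAID 1/10, 4 for RAID 5.
    def disks_needed(reads_per_sec, writes_per_sec, write_penalty, disk_iops):
        backend_iops = reads_per_sec + writes_per_sec * write_penalty
        disks = math.ceil(backend_iops / disk_iops)
        if write_penalty == 2 and disks % 2:
            disks += 1  # mirrored (RAID 1/10) arrays need an even number of disks
        return backend_iops, disks

    for rating in (200, 180):  # theoretical vs derated practical per-disk rating
        for label, penalty in [("RAID 10", 2), ("RAID 5", 4)]:
            backend, disks = disks_needed(300, 400, penalty, rating)
            print(label, "@", rating, "IOs/sec per disk:", backend,
                  "back-end IOs/sec ->", disks, "disks")

At the nominal 200 rating you get the 6-disk RAID 10 answer from the example above; at the derated 180 it jumps to 8 disks, which is exactly why it pays to build in that margin before you order the hardware.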