February 6, 2007 at 7:10 am
Will,
I think the cost question is one you need to ask your supplier(s) - depends on what discounts you get, etc.
Purely from what I've read, if you need 1TB of space and your reads/writes are 50:50, then I would guess you'll need to look at more than just space - unless performance isn't a concern. In my experience, disk IOs and latencies are more important, and when I've done the calcs I often end up with much more space than I need (once I've sized for the throughput and IOs I need for performance).
On tests we have carried out, we have found that well-configured RAID 10 arrays have hugely out-performed RAID 5 - and that is not based on heavy-read DBs only. We have seen horrendous random write latency for RAID 5 of greater than 150 ms (average) and a throughput of 3.8 MB/s, whereas its equivalent on RAID 10 registered an average latency of 12 ms and a throughput of 37.8 MB/s. As you can see, a rather large difference. Again though, you may not get the same figures and you would need to test for yourself. A couple of useful tools are SQLIO (MS), IOMeter, and SQLIOSim (MS).
rgds iwg
February 6, 2007 at 8:30 am
I blogged the diffs in raid performance ( see my earlier post ) ,
http://sqlblogcasts.com/blogs/grumpyolddba/
The largest SCSI disks currently are 300GB, so for 1TB you'd need 8 disks in RAID 10. I've just bought a new storage array which holds 12 spindles (so I could get 1,800GB in RAID 10). Can't advise on cost; the array we bought was around £10k using 12 x 15k SAS disks.
It's the write performance which suffers with RAID 5 - it's actually a 75% degradation (sorry Ian). This was very well demonstrated at a previous client where a CRM app ran on a data drive of 4 spindles. In a server upgrade (which doubled the procs and RAM) the data array was set to RAID 5 instead of RAID 10, and the application was actually made unusable. The DBA who supervised the rebuild didn't check, having assumed the same RAID level would be kept as on the existing server. On the Monday morning the business came down and said we'd have to regress as they couldn't use the system - I checked and we had RAID 5. An emergency rebuild of the array and we were back in business. I plan to do some tests around performance as I've just personally purchased two storage arrays to use on my home system (for testing). When I've done this I will publish the results.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 6, 2007 at 9:17 am
We gotta go with Dell. We are consultants and the company we deal with will only allow Dells on their site.
Just curious, how long has RAID 10 been commonplace? It seems that all of the vendors would push for RAID 10 due to the higher number of drives it requires, hence more sales.
I assume it isn't possible to convert an array from 5 to 10 without having to reformat the drives? I will look into whether the newest RAID controller we bought is capable of 10.
I didn't realize that there was that big of a performance increase from 5 to 10.
February 6, 2007 at 9:41 am
No problem Colin. I should have said that I was referring to writes really, i.e. RAID 10 = 2 IOs for every write, and RAID 5 = 4 IOs (from the SQL Server 2000 Administrator's Companion book).
I can see the 75% difference because, though the book says 2x the writes for RAID 10, surely it comes down to the write time of the slowest disk, as both mirrors write in parallel and not serially. I have often wondered about this (quite sad, I know). I have seen info (in the same book, pg 104) that suggests that if the write ratio is anything more than 10% then RAID 10 starts to out-strip RAID 5, and the performance gap increases as the percentage of writes to reads increases.
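To make the book's penalty figures concrete, here's a minimal sketch in Python - assuming an illustrative ~150 random IOPS per 15k spindle (my assumption, not a measured number), with RAID 10 costing 2 back-end IOs per host write and RAID 5 costing 4:

```python
# Back-of-envelope model of host-visible write IOPS under the classic
# RAID write penalties (RAID 10 = 2 back-end IOs per write, RAID 5 = 4).
# The per-disk IOPS figure is an assumed value for a 15k RPM spindle.

def effective_write_iops(spindles, per_disk_iops, write_penalty):
    """Random write IOPS an array of this size can sustain."""
    return spindles * per_disk_iops // write_penalty

PER_DISK_IOPS = 150  # assumed random IOPS for one 15k RPM disk

raid10 = effective_write_iops(4, PER_DISK_IOPS, 2)  # 4-spindle RAID 10
raid5 = effective_write_iops(4, PER_DISK_IOPS, 4)   # 4-spindle RAID 5

print(f"RAID 10 (4 spindles): {raid10} write IOPS")  # 300
print(f"RAID 5  (4 spindles): {raid5} write IOPS")   # 150
```

With those assumed numbers a 4-spindle RAID 10 sustains twice the write IOPS of a 4-spindle RAID 5, which is roughly in line with the published 15k RPM figures.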
Just seen your blog, btw, which is good stuff
rgds iwg
February 21, 2007 at 4:04 am
Ok, I've thought about my previous response and I'm now thinking more about outright performance based on time - there are still 2 write IOs for RAID 10.
That said though Colin, I'm not sure how your figures translate to a 75% perf degradation for RAID 5 against RAID 10. I've looked at both your blog and the MS SQL Server 2000 Performance Tuning Technical Reference, and both suggest that at 100% write IOs the IO rate for RAID 5 was about half that of RAID 10 - your figures for random writes (15k RPM): RAID 10 = 280 IOs, RAID 5 = 140 IOs, both for a 4-spindle array set. The book shows the results in a slightly different way, by showing that it requires twice as many IOs to push the same load through RAID 5 as it does through RAID 10. I'd be interested in how you've arrived at the 75% degradation, as maybe I've missed something crucial.
rgds iwg
February 21, 2007 at 10:10 am
Bah Colin!
I don't disagree (at least not too much) that RAID 5 imposes a perf hit, but in my view it IS the best bang for the buck. Not everyone can afford RAID 10. I've run several systems on RAID 5 for years and managed well, both on SCSI and an EMC SAN.
February 21, 2007 at 12:19 pm
I'm just biased - brought up on very high performance OLTP databases; back in SQL 6.5 I was using multiple RAID 10 arrays. My last three contracts all have databases that are performing badly because of disk write latency caused by the RAID 5 overhead - and when I say bad, this equates to making the application almost unusable under load.
As for the degradation vs RAID 10: an array of 6 spindles in RAID 10 presents 3 spindles as a usable array, so as RAID 10 requires 2 IOs per mirrored write, there are either 3 x 2 IOs or 6 x 1 IO. Either way it's effectively 1 IO against the 3 usable spindles; the second IO goes against the mirror.
RAID 5 will require 4 IOs for each write, so whichever way you look at it RAID 5 has a terrific overhead. In terms of supported IO it means that for an equal number of (available) spindles in the array - think capacity here - the RAID 10 supports 4 times the throughput for writes.
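For anyone wondering where the 4 IOs per RAID 5 write come from: a small write has to read the old data and the old parity, then write the new data and the new parity. A little Python sketch (with hypothetical block values) showing that this read-modify-write parity shortcut agrees with recomputing parity across the whole stripe:

```python
# RAID 5 parity is an XOR across the data blocks in a stripe. A small
# write costs 4 back-end IOs because the controller must read the old
# data and old parity before it can write the new data and new parity.

import functools
import operator

def stripe_parity(blocks):
    """Parity block = XOR of all data blocks in the stripe."""
    return functools.reduce(operator.xor, blocks)

stripe = [0b1010, 0b0110, 0b1100]  # three hypothetical data blocks
parity = stripe_parity(stripe)     # full-stripe parity

# Small write: update block 1 only.
old_data, new_data = stripe[1], 0b0011
# IO 1: read old data;  IO 2: read old parity;
# IO 3: write new data; IO 4: write new parity.
new_parity = parity ^ old_data ^ new_data

stripe[1] = new_data
# The shortcut matches a full recompute over the updated stripe.
assert new_parity == stripe_parity(stripe)
print(f"new parity: {new_parity:04b}")
```

The shortcut saves reading every surviving block in the stripe, but it is exactly those two extra reads and the extra parity write that make RAID 5 random writes so expensive compared with a mirrored write.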
I've actually bought myself two external (SCSI) storage arrays with hardware RAID cards, and I'll be running a whole range of tests for normal SQL operations - backups, restores, data loads etc. - which I intend to run against a number of configs. I have 20 available SCSI disks so I should be able to get some comparable results. Don't expect this too soon as I expect it to take some time!! I have an 8GB database with some reasonable tables, so the tests should be pretty good; I hope to run on a dual-core and a dual-proc machine.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 21, 2007 at 12:35 pm
And I'm jealous - I'll have to plan on working for a better grade of employer that will get me RAID 10 next time :-)
My current employer tends to be really cheap (oh wait, I work for myself...)
February 22, 2007 at 3:51 am
Ahhhh, ok Colin, I think I see where you're coming from now. I was going on a pure spindle-count comparison, i.e. 4 spindles in the RAID 5 array and 4 spindles in the RAID 10 (2 x mirrored pairs). I think you've referred to the RAID 10 as 8 spindles (4 x mirrored pairs).
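The two comparison bases can be put side by side with the same assumed numbers (~150 random write IOPS per 15k spindle and the classic penalties of 2 and 4 - illustrative figures only, not measurements):

```python
# How the comparison basis changes the RAID 5 vs RAID 10 write gap.
# Assumes ~150 random write IOPS per 15k spindle (illustrative only).

PER_DISK_IOPS = 150

def write_iops(spindles, penalty):
    """Host-visible write IOPS for an array with this write penalty."""
    return spindles * PER_DISK_IOPS // penalty

raid5_4 = write_iops(4, 4)    # 4-disk RAID 5
raid10_4 = write_iops(4, 2)   # 4-disk RAID 10 (2 mirrored pairs)
raid10_8 = write_iops(8, 2)   # 8-disk RAID 10 (4 mirrored pairs)

# Same raw spindle count: RAID 10 is 2x RAID 5.
print(f"4 disks each: RAID 5 {raid5_4} vs RAID 10 {raid10_4}")
# Same data-spindle count: RAID 10 is 4x RAID 5, i.e. RAID 5 is a 75% drop.
print(f"4 data spindles each: RAID 5 {raid5_4} vs RAID 10 {raid10_8}")
```

So both readings are internally consistent: a 2x gap on equal raw spindles, and a 4x gap (the 75% degradation) when the RAID 10 is sized for the same number of data spindles.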
Hey, Andy. Maybe Kelem Consulting might have an opening for ya...sounds like they'd be a good company to work for
February 22, 2007 at 6:10 am
It's funny - back in the days of small disks, 9GB for instance, there wasn't so much of an issue with RAID 10. One of the comments Jim Gray made when I met him was about the forthcoming 1TB disks: he asked how he would explain wanting 20 x 1TB disks for a 200GB database!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/