Hardware & RAID Configuration - What's the best

  • Can you offer any proof for this statement:

    "A single disk suffers a 75% degredation on io performance on raid 5 compared to raid 10 ( or raid 1 ) for writes."

    I don't really agree with this statement:

    "20 disks in a raid 5 should be compared to 36 disks in a raid 10"

    I think it is equally valid to say that you should compare on an equal cost basis to see which gives the best IO performance for the same money.
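
    For what it's worth, the capacity arithmetic behind the "20 vs 36" comparison can be checked directly. A minimal sketch in Python (the two-group RAID 5 split is an assumption about what the quoted poster may have meant):

    ```python
    # Usable capacity: RAID 5 loses one disk's worth per parity group,
    # RAID 10 mirrors everything, so half the disks are usable.

    def raid5_usable(disks: int, groups: int = 1) -> int:
        return disks - groups

    def raid10_usable(disks: int) -> int:
        return disks // 2

    # 20 disks as two 10-disk RAID 5 groups -> 18 disks of usable space...
    print(raid5_usable(20, groups=2))   # 18
    # ...which is why a capacity-equivalent RAID 10 needs 36 disks.
    print(raid10_usable(36))            # 18
    ```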


  • Mr. Jones:

    Regarding the second point, the argument was that an apples-to-apples performance comparison should be made to 'prove' which RAID level is 'faster'. Yes, it is valid to consider cost. But what is the value of slow response? How much does slow response cost the business? And if performance is EXTREMELY slow, is cost still a consideration? Cost/performance is a valid measure, but once performance degrades beyond a certain level, cost becomes moot. It's a wonderful balancing act that provides years of job security (I guess...).

    Mr Smith

  • As I stated in my post:

    "I think it is equally valid to say that you should compare on an equal cost basis to see which gives the best IO performance for the same money."

    The point is that you may be able to get more I/O performance for the same money with RAID 5.  For example, it might be possible for three six-disk RAID 5 arrays to provide better total I/O performance than one RAID 10 array using the same 18 disks (a rough model is sketched below).
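
    A back-of-envelope model of that comparison, treating a logical write as 4 physical I/Os on RAID 5 and 2 on RAID 10; the per-spindle IOPS figure and the workload mixes are illustrative assumptions, not measurements:

    ```python
    # Effective logical IOPS for a disk array under a read/write mix,
    # given the classic small-write penalties (RAID 5: 4, RAID 10: 2).

    SPINDLE_IOPS = 120.0   # assumed random I/Os per second per disk

    def effective_iops(spindles: int, write_frac: float, penalty: int) -> float:
        # Average physical I/Os consumed per logical I/O.
        cost = (1 - write_frac) + write_frac * penalty
        return spindles * SPINDLE_IOPS / cost

    for w in (0.05, 0.25, 0.50):
        raid5 = 3 * effective_iops(6, w, penalty=4)   # three 6-disk RAID 5 arrays
        raid10 = effective_iops(18, w, penalty=2)     # one 18-disk RAID 10 array
        print(f"{w:.0%} writes: RAID 5 {raid5:.0f} vs RAID 10 {raid10:.0f} IOPS")
    ```

    In this naive model the two configurations tie on a pure-read workload and RAID 10 pulls ahead as the write fraction grows; real controllers, stripe sizes, and caching can move the answer either way, which is rather the point of comparing on an equal-cost basis.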


    When I set up a server about two years ago and had Googled this subject, the consensus at the time was RAID 5 for the data drive, RAID 10 for the log drive.  In retrospect, the only thing I wish I had done differently was to point tempdb to the RAID 10 drive.

    As to performance, RAID 5 (by the opinion voiced in this thread) is bad, but a poor design is more likely to get you.  Something as simple as a flaw in a clustered index design could kill all of the performance gain you might get with RAID 10.


    Beer's Law: Absolutum obsoletum
    "if it works it's out-of-date"

  • Well said


    "Something as simple as a flaw in a clustered index design could kill all of the performance gain you might get with RAID 10."

    Cheers, Crispin
    I can't die, there are too many people who still have to meet me!
    It's not a bug, SQL just misunderstood me!

    A RAID 5 array has to do four I/Os for each write, so RAID 5 is rubbish for writes. I fail to understand why we constantly post about RAID 5 vs RAID 10: if you want your system to perform, use RAID 10; if you have read-only filegroups/databases, then RAID 5 is the correct solution.

    Incidentally, four spindles configured as RAID 10 will outperform the same four spindles configured as RAID 5 by 2 to 1 on writes, though they may perform less well on reads, depending upon the controller (the arithmetic is sketched below).
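
    A minimal sketch of where those numbers come from, under the usual small-write model (the per-spindle rate is an assumed figure):

    ```python
    # A small RAID 5 write is a read-modify-write of data and parity,
    # i.e. four physical I/Os; a RAID 10 write is two mirrored writes.
    RAID5_WRITE = ["read old data", "read old parity",
                   "write new data", "write new parity"]    # 4 I/Os
    RAID10_WRITE = ["write primary", "write mirror"]        # 2 I/Os

    spindles, spindle_iops = 4, 120    # assumed per-disk random I/O rate
    raid5_writes = spindles * spindle_iops / len(RAID5_WRITE)    # 120/s
    raid10_writes = spindles * spindle_iops / len(RAID10_WRITE)  # 240/s
    print(raid10_writes / raid5_writes)   # 2.0 -> the "2 to 1" on writes
    ```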

    For a business, the additional cost of a few extra spindles to gain both performance and redundancy is insignificant (and RAID 5 has a higher probability of failure than RAID 10).

    I recently had a "discussion" about using 4 or 6 disks to achieve a 900 GB array for a database. The cost of two disks, about £600, is a drop in the ocean for the company: maybe two tyres for a director's car, or one night's accommodation for the marketing director on a sales trip? Let's get these costs into perspective.

    If your index design is bad enough to offset the performance gain of RAID 10, then it'll be four times worse on RAID 5; sorry, but that is such an absurd argument.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • "If your index design is bad enough to offset the performance gain of raid 10 then it'll be 4 times worse on raid 5, sorry but that is such an absurd argument."

    No duh!  I wasn't trying to say otherwise.  What I was attempting to point out is that all the debate over milliseconds saved or not by the RAID level you are using is meaningless if you don't constantly pay attention to the design of the database.

    Arguing over the money spent on one approach or another seems unimportant if the database is not correctly designed, implemented, monitored, and supported.

    Beer's Law: Absolutum obsoletum
    "if it works it's out-of-date"

    Agreed, sorry, but probably most DBAs get to maintain/support database applications they have no control over; I mostly tune third-party apps where I can't influence or change the design. Getting the hardware right is a critical point. Generally you'll read about the 20/80 rule of tuning (hardware/software), but this assumes the hardware is right in the first place!  Without doubt, over the past 5 or 6 years RAID 5 has been a major cause of poor performance at client sites I've worked at. There's only so much you can do with hardware, but believe me, for an OLTP database with some level of throughput, let's say over 250 transactions/sec on the PerfMon counter, you'll find RAID 5 can be a serious bottleneck. You should analyse your throughput, of course; if I remember correctly, Microsoft states that if writes are over 10% of total I/O then you should consider RAID 10 (a quick check is sketched below).
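
    If that recollection is right (treat the 10% threshold as the poster's memory of Microsoft's guidance, not gospel), checking your own mix against PerfMon's PhysicalDisk counters is trivial; a sketch with made-up counter values:

    ```python
    # Sample PhysicalDisk: Disk Reads/sec and Disk Writes/sec in PerfMon,
    # then compute the write share of total I/O. Values below are invented.
    disk_reads_per_sec = 850.0
    disk_writes_per_sec = 150.0

    write_frac = disk_writes_per_sec / (disk_reads_per_sec + disk_writes_per_sec)
    print(f"writes are {write_frac:.0%} of total I/O")
    if write_frac > 0.10:
        print("above the ~10% threshold: consider RAID 10 over RAID 5")
    ```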

    Unless you've physically compared the RAID levels, you'll not appreciate the difference; I hope to publish some stats on this some time soon.


    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

    I was thinking about stuff along these lines last night after posting my reply.  My application is fundamentally a data warehouse: it loads bulk data on an hourly basis, runs some reporting processes (aggregation tables, etc.), and the rest of its usage is web-based reporting.  The throughput of the RAID is not nearly as critical (I would think) in this scenario as in a transaction-heavy system.

    When you are chasing down the statistics, I would be curious (if you can commit the time and effort to it) to see where caching comes out in the results.  If correctly cached RAID 5 and RAID 10 drive arrays are compared, does the difference between them narrow?  I suspect it would (a toy model of this is sketched below).
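
    One way to frame that suspicion: if a write-back cache absorbs some fraction of writes at near-zero cost, the RAID 5 / RAID 10 cost ratio shrinks as the hit rate rises. A toy model, with every figure assumed:

    ```python
    # Average physical I/Os per logical I/O, with a write-back cache that
    # services `hit` of all writes for free; misses pay the full penalty.
    def cost_per_io(write_frac: float, penalty: int, hit: float) -> float:
        return (1 - write_frac) + write_frac * (1 - hit) * penalty

    for hit in (0.0, 0.5, 0.9):
        r5 = cost_per_io(0.5, 4, hit)    # RAID 5, 50% writes
        r10 = cost_per_io(0.5, 2, hit)   # RAID 10, 50% writes
        print(f"cache hit {hit:.0%}: RAID5/RAID10 cost ratio {r5 / r10:.2f}")
    # The ratio falls from 1.67 toward 1.17 as the hit rate climbs,
    # i.e. the gap narrows, as suspected above.
    ```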

    I suppose this is something that might be available via tpc.org; I just haven't had the time to check.

    Beer's Law: Absolutum obsoletum
    "if it works it's out-of-date"

    I always love to read extended posts on the virtues, or lack thereof, of RAID 0, 1, 5, 1+0, and 0+1. Things always boil down to site standards, performance, and money. I've used mostly RAID 5 in the past. A few times I have had the luxury of RAID 1 alone, or RAID 1 coupled with RAID 5. Now I am getting into RAID 1+0 exclusively. All of these RAID permutations have been used both on a SAN and on local disk, with databases ranging in size from less than 1 GB to half a TB. I have yet to run into an environment where RAID 5 was a performance drag, even in an internet stock trading environment where 10 threads were performing in excess of 150 business transactions a second, concurrently, on a local RAID 5 array of 3 disks with everything on the C: drive!

    I dunno, maybe I have been lucky to be blessed with phenomenal hardware and excellently designed databases and applications.

    Regards,
    Rudy Komacsar
    Senior Database Administrator
    "Ave Caesar! - Morituri te salutamus."

    Every time I read a thread regarding RAID levels/SAN/NAS, etc., I always wonder what percentage of us runs on each of the various storage technologies, as RAID level, performance, etc. are largely dependent on the underlying storage technology.

    Out of curiosity more than anything, I created a simple poll (on a free poll site, as this site doesn't seem to have a user poll feature) that I'm interested in seeing the results of. I'd appreciate any responses (and no, I'm not a shill for a storage vendor):

    Poll page:

    http://www.yourfreepoll.com/tfvgzdzysp.html

    Results page:

    http://www.yourfreepoll.com/tfvgzdzysr.html

    Thanks,

    Joe


    I first used RAID 10 with SQL Server 6, so you can tell it was a while ago! I do have practical experience of RAID 5 dragging down performance, so I guess I am biased. Cache does nothing, really: once the cache is full, you're slowed to the speed of the disk writes again (see the arithmetic below)! High-performance disk systems attempt to reorder I/O using cache to gain performance, but all this happens in milliseconds, so you'd probably not notice. Back in the days of SQL 7, the Compaq engineers used to disable cache to get performance, and generally read cache will prove detrimental to performance. I only have two external RAID boxes, 20 or 22 spindles total, and my hardware RAID cards are not state of the art, although they do have cache. I plan to produce comparative values - should be in the next month.
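
    The "once the cache is full" point is easy to put numbers on; a sketch in which the cache size, incoming write rate, and disk drain rate are all illustrative assumptions:

    ```python
    # A write burst is absorbed until the controller cache fills; after
    # that, sustained throughput drops to what the spindles can drain.
    cache_mb = 256.0        # controller write cache
    incoming_mb_s = 80.0    # sustained write load
    drain_mb_s = 30.0       # what the RAID 5 spindles can actually absorb

    if incoming_mb_s > drain_mb_s:
        t_full = cache_mb / (incoming_mb_s - drain_mb_s)
        print(f"cache saturates after ~{t_full:.0f}s; then writes run "
              f"at the disks' {drain_mb_s:.0f} MB/s")
    else:
        print("the burst never fills the cache")
    ```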

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

    I agree that you get better performance with RAID 1 than with RAID 5, but there is no way it's a 75% detriment (certainly not in an HP SAN environment, anyway). All our tests come through with about a 25% reduction in performance in an OLAP setup, for which you gain nearly 30% disk space.

    Never tested it to this level, but the difference in performance seems to diminish as the DB gets bigger... according to HP. (One way the two figures can be reconciled is sketched below.)
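
    For what it's worth, the "75%" and "about 25%" figures need not contradict each other: with the usual write penalties (4 physical I/Os per write on RAID 5, 2 on RAID 10), the measured degradation depends on the read/write mix. A sketch:

    ```python
    # RAID 5 throughput as a fraction of RAID 10, same spindle count,
    # assuming penalties of 4 (RAID 5) and 2 (RAID 10) per logical write.
    def throughput_ratio(write_frac: float) -> float:
        cost5 = (1 - write_frac) + write_frac * 4
        cost10 = (1 - write_frac) + write_frac * 2
        return cost10 / cost5

    for w in (1.0, 0.25, 0.10):
        print(f"{w:.0%} writes: RAID 5 runs at {throughput_ratio(w):.0%} of RAID 10")
    # 100% writes -> 50%; 25% writes -> ~71%; 10% writes -> ~85%,
    # so a mostly-read OLAP mix seeing a ~25-30% hit is consistent
    # with a much larger penalty on pure writes.
    ```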
