SQL/RAID

  • Hi,

    With RAID technology getting popular across most H/W vendors, I'm a little puzzled and have a conflict with our server group over the best approach to set up a heavy OLTP server (2 heavily used OLTP DBs of approx. 50 GB each; transaction volume can exceed 2 GB in an hour).

    The server came with 10 x 72 GB drives (in an external cage).

    I want to have 3 mirrored sets (2 for the transaction logs and 1 for tempdb) and make the remaining 4 disks a RAID 5 array to host the data portion of the databases.

    The server group argued that 3 sets of mirrored drives won't buy me any performance gain since they are connected to the same SCSI card/channel.

    Is this true? I was under the impression that separate sets of mirrored drives should provide more disk spindles than a single large partition.

    What would be the best way to configure the disk drives for the best performance? Are there any readings on this topic based on more recent technology?

  • The answer is yes and no. There is a bandwidth limit, say 160 MB/s or 320 MB/s depending on the SCSI bus, but to be honest your chances of exceeding it with 10 disks are fairly remote. I'd actually say you don't have enough spindles, but given that setup I'd use a RAID 1 pair for the transaction logs and make the rest into RAID 5; RAID 5 isn't good for writes, but if you go RAID 10 you only get 4 effective spindles. Putting tempdb on fewer spindles than your databases can sometimes slow things down. Most SQL Server books have sections on disks, e.g. Inside SQL Server 2000 and the SQL Server Administrator's Guide.

    It's the spindles that give the performance, especially for OLTP. I usually have a 10 or 12 disk RAID 10 for data, a 5 or 6 disk RAID 0 for tempdb and backups (if the backups are copied away afterwards), and a RAID 1 for the transaction logs; binaries and the O/S also go on a RAID 1. I have arranged the RAID 10 over two SCSI channels (depends on the server and card), with the other arrays on their own card or channel.
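
    If you want to see whether the spindles (rather than the bus) are the bottleneck on an existing layout, SQL Server 2000 exposes per-file I/O counters through fn_virtualfilestats. A minimal sketch; the DbId/FileId values here (2, 1 = tempdb's primary data file) are just an example, look yours up in sysdatabases and sysfiles:

      -- Cumulative I/O per file since the server started (SQL 2000).
      -- A high IoStallMS relative to NumberReads + NumberWrites means
      -- requests are queuing on the disks, i.e. you need more spindles.
      SELECT  DbId, FileId, NumberReads, NumberWrites,
              BytesRead, BytesWritten, IoStallMS
      FROM    ::fn_virtualfilestats(2, 1)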

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • For such a huge amount of write activity I would expect switching to RAID 5 to be a bad idea. While each logical block write requires two physical writes in either case (in RAID 0+1 for the mirror copy, in RAID 5 for the parity), RAID 5 also requires extra reads to accomplish the write (to recalculate the parity).
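
    A quick back-of-envelope shows how that penalty multiplies at the disks. A sketch with a made-up load (the 500 logical writes/sec is an assumption for illustration, not a measured figure):

      -- Back-end disk I/Os per second generated by 500 logical writes/sec.
      DECLARE @writes int
      SET @writes = 500
      SELECT  @writes * 2 AS raid10_ios,  -- write the data to both mirrors
              @writes * 4 AS raid5_ios    -- read data + parity, write data + parity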

    Also, you might look at the SCSI controller you bought for this. Is it really a single-channel controller, as you imply the server guys said? Often those external shelves have a split bus, so you could hook a channel to each end and then split the mirrors across separate SCSI channels. That might or might not help, but it's unlikely to hurt. And if this is the embedded RAID controller that ships with a lot of systems, it's probably pretty wimpy, especially in terms of cache. Ensuring you have a good RAID controller with the maximum cache might help a lot with write activity, especially if it's bursty.

  • NetIQ did a bunch of testing a couple of years back and found that after around 8-9 drives you lose efficiency and performance in RAID 5 sets, so I wouldn't go there. If you can afford it, 1+0 works great, but it costs $$.

    It depends on how busy the system is. RAID 5 works for many systems, but if you are really heavily loaded on writes, then I'd look to RAID 1. And I'd be sure to physically separate the logs from the data: separate devices, but also separate SCSI channels if you can.
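
    On the SQL Server side, that separation is just a matter of where you put the files when you create (or move) the database. A minimal sketch, assuming E: sits on the data array and F: on a dedicated RAID 1 log pair (the drive letters, names, and sizes are hypothetical):

      -- Data on the striped array, log on its own mirrored pair.
      CREATE DATABASE Orders
      ON PRIMARY ( NAME = Orders_Data,
                   FILENAME = 'E:\MSSQL\Data\Orders_Data.mdf',
                   SIZE = 50GB )
      LOG ON     ( NAME = Orders_Log,
                   FILENAME = 'F:\MSSQL\Log\Orders_Log.ldf',
                   SIZE = 5GB )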

  • I agree: if you are front-loaded on writes, RAID 0+1 is the way to go. At 14 drives configured as RAID 0 I would expect you could hit the limit of the SCSI or PCI bus on the server at around 280 MB/s. If you throw RAID 5 into that mix, your speeds come way down. Look at it this way: RAID 5 takes 1 operation to read but 4 operations to write (read the data and parity blocks, then write both back). RAID 0 is 1 op to read and 1 op to write. RAID 1 and RAID 0+1 are 1 op to read and 2 ops to write. So to get RAID 5 to write as well as RAID 1 or 10, you have to have double the number of spindles to share the write load. As Steve pointed out, there are diminishing returns at or around 10 drives; the controller just can't keep up at that point. I would look at the Microsoft SQL Server 2000 Performance Tuning book; it goes into great detail about RAID levels and the number of I/Os you need to meet your operational load.
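
    To turn a required I/O rate into a spindle count, divide the back-end I/Os by what one disk can sustain. A sketch with assumed numbers (roughly 125 IOPS for a 10k rpm drive, and an example load of 300 reads + 200 writes per second; neither figure comes from the original post):

      -- Spindles needed to sustain the example load at each RAID level.
      DECLARE @reads int, @writes int, @disk_iops int
      SELECT  @reads = 300, @writes = 200, @disk_iops = 125
      SELECT  CEILING((@reads + @writes * 2.0) / @disk_iops) AS raid10_spindles, -- write penalty 2
              CEILING((@reads + @writes * 4.0) / @disk_iops) AS raid5_spindles   -- write penalty 4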

    Short answer: 14 drives in RAID 5 may be able to fill a U160 or even a U320 SCSI bus, but they won't be able to sustain the number of I/Os you may need. Breaking up the array will buy you speed for the logs and tempdb. I always try to split tempdb onto RAID 0+1 and the logs onto 0+1 as well, for the simple fact that they are faster and handle higher I/O rates better. Also, logs read and write in sequential blocks, so keeping them on their own spindles will improve performance by keeping disk contention down. OK, not so short...
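
    If you do give tempdb its own array, the move is an ALTER DATABASE plus a service restart. A minimal sketch, assuming T: is the new array and the default logical file names tempdev/templog (check yours with sp_helpfile):

      -- Repoint tempdb's files at the new array; takes effect on restart.
      ALTER DATABASE tempdb
      MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\MSSQL\Data\tempdb.mdf' )
      ALTER DATABASE tempdb
      MODIFY FILE ( NAME = templog, FILENAME = 'T:\MSSQL\Log\templog.ldf' )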


    Just my thoughts.

    Wes
