Another RAID Question...

  • Hi,

    I know I am going to open another can of worms with this but here it goes...

    Scenario:

    3 to 14 transactional databases. Total data less than 60GB. Heavy transaction load with ordering-system-style data (no blobs, images, or binary data). Data gets archived to offline storage after a certain time, so the data size remains around 60GB.

    If you had to configure a cluster where you have 14 drives available on a SAN for your data, how would you configure it? The system will also need a drive/location available on these drives for SQL backups. Each drive is 146GB, so in most cases storage capacity is not much of an issue.

    Here is what I have received so far:

    1. Create 7 RAID 1 drive sets and distribute databases, logs, and temp DB files between them.

    2. Create RAID5 of 5 drives (All Database Files, SQL Backups), RAID 10 of 8 drives for Logs, 1 Hot Spare

    3. Create 6 RAID 1 drive sets and distribute databases, logs, and temp DB files between them, 1 Hot Spare, 1 for Backups

    4. Create 3 sets of RAID 10 (4 Drives Each) for Databases, Logs, and Temp DB. 3 Drives RAID 5 for Backups.

    5. Create Single RAID5 of 13 drives, 1 for Hot Spare.

    I would prefer to have the Full recovery model, but if it is not achievable with this many drives, Simple will do. In full recovery mode, my log backups can be up to 250GB a day. Also, most people (including the hardware manufacturer, HP) recommended that we use a 64K stripe size, while one individual suggested the use of an 8K stripe size.
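
    For reference, here is a minimal T-SQL sketch of what the full recovery setup would involve. The database name (Orders), backup path, and cadence are hypothetical placeholders, not values from this thread:

        -- Hypothetical database name and backup path; adjust to your drive layout.
        ALTER DATABASE Orders SET RECOVERY FULL;

        -- Nightly full backup to the backup LUN.
        BACKUP DATABASE Orders
            TO DISK = N'H:\SQLBackups\Orders_full.bak'
            WITH CHECKSUM, STATS = 10;

        -- Frequent log backups (e.g. every 15 minutes via a SQL Agent job) keep the
        -- individual files small; at ~250GB of log per day that is roughly 2.6GB each.
        BACKUP LOG Orders
            TO DISK = N'H:\SQLBackups\Orders_log_200709261015.trn'
            WITH CHECKSUM;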

    Thank you for any recommendations.

  • hmmm, 69 viewers and no comments...

  • Number 4 is the best option you have listed.

  • Cool Thanks.

    Anyone else? Also, what about the stripe size (8 vs 64)?

    Thanks.

  • Sorry. Forgot the 2nd part of the question. 64K stripe.

    If you have the disk space.

    Have a RAID 10 LUN for each logical processor in the machine to spread the data files, i.e. 2 dual-core procs equals 4 LUNs for data files. This will allow you to have 4 data files for each database, one data file on each LUN. Each LUN should be the same size and each data file should be the same size to avoid hotspots (see the sketch at the end of this post).

    A RAID 10 or RAID 1 LUN for Log Files.

    A RAID 10 LUN for Temp db with a datafile for each processor core again.

    A RAID 5 LUN for your archive data.
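
    To illustrate the layout above, here is a rough T-SQL sketch. The drive letters (E: through H: for data, L: for logs, T: for tempdb), database name, and sizes are hypothetical placeholders, not values from this thread:

        -- One equally sized data file per data LUN so I/O spreads evenly and no
        -- single file becomes a hotspot (names, sizes, drive letters are placeholders).
        CREATE DATABASE Orders
        ON PRIMARY
            (NAME = Orders_data1, FILENAME = N'E:\Data\Orders_data1.mdf', SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = Orders_data2, FILENAME = N'F:\Data\Orders_data2.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = Orders_data3, FILENAME = N'G:\Data\Orders_data3.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = Orders_data4, FILENAME = N'H:\Data\Orders_data4.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
        LOG ON
            (NAME = Orders_log, FILENAME = N'L:\Logs\Orders_log.ldf', SIZE = 8GB, FILEGROWTH = 1GB);

        -- One tempdb data file per processor core on the tempdb LUN, again equally
        -- sized; repeat the ADD FILE for each additional core.
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 2GB, FILEGROWTH = 512MB);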

  • According to BOL, your stripes should be 64k. (Same size as an extent)

    Cheers, Crispin

    I can't die, there are too many people who still have to meet me! It's not a bug, SQL just misunderstood me!

    Thank you Crispin.

  • Thank you.

  • I agree that option 4 is the best choice, with a 64K stripe size.  Don't solicit any more advice from whoever it was that said 8K.

    If I was absolutely determined to use full recovery mode with that setup, I might use a 3-drive RAID 5 for data (same capacity as 4-drive RAID 10) and increase the backup volume to a 4-drive RAID 5.  I wouldn't be happy about it, though.

    If you move the log backup files off the SAN several times a day, you could probably stay under the roughly 300GB limit of your backup volume. You could also back up over the network to a file share. You don't need SAN-class hardware to hold backups; is there somewhere you could connect a couple of 500GB SATA drives?
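
    Backing up over the network only changes the destination path. A minimal sketch, where the database name and the \\BackupServer\SQLBackups share are hypothetical and the SQL Server service account is assumed to have write permission on the share:

        -- Back up the log straight to a UNC path instead of the SAN backup LUN.
        -- The server and share names are placeholders.
        BACKUP LOG Orders
            TO DISK = N'\\BackupServer\SQLBackups\Orders_log_200709261015.trn'
            WITH CHECKSUM;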

  • Thank you Scott for your reply.

    I like your recommendation of adding a couple of SATA drives for backup, but unfortunately we have been burned in the past by SATA drives not standing up to full-time running. Even though the backup drive would not be doing full-time service like a tempDB drive, I would rather stick with either SCSI or SAS drives. It also keeps a single standard for replacements. Having said that, with SAS or SCSI I am stuck with what I can fit in a SAN or shared disk array, and if I want to go to something more than 14 drives, I will need to add a second drive cage. Also, with RAID 5 setups I like to have at least one hot spare, since these drives work a bit harder than others.

    Regarding your comment about not listening to the 8K advisor, I completely agree with you. I just wanted to confirm my information before I dismissed it completely.

     

    Thanks again.

  • I read through this thread and noticed that no one picked up on the supposition that the server was clustered.

    This goes into the area of the operating system as opposed to the SQL Server application itself.

    Best practice dictates that the quorum drive/LUN be a dedicated cluster resource consisting of a RAID 1 mirrored pair. It should not have any other data on the disks, and the cluster resource should not have any other resources or dependencies apart from the defaults. So you would have to find somewhere to hold a RAID 1 mirrored pair on shared storage; even though this might seem wasteful, it will prevent your cluster from going offline.

  • Thank you Ashley.

    I was thinking of carving a sliver out of one of the RAID 10 (data) arrays and using that for the quorum. Do you have any other configuration in mind?

    Thanks.

    Ashley Fawcett-Jones (9/26/2007)


    I read through this thread and noticed that no one picked up on the supposition that the server was clustered.

    This goes into the area of the operating system as opposed to the SQL Server application itself.

    Best practice dictates that the quorum drive/LUN be a dedicated cluster resource consisting of a RAID 1 mirrored pair. It should not have any other data on the disks, and the cluster resource should not have any other resources or dependencies apart from the defaults. So you would have to find somewhere to hold a RAID 1 mirrored pair on shared storage; even though this might seem wasteful, it will prevent your cluster from going offline.

  • You could carve out a piece of your RAID 10 arrays or the RAID 5 array for the quorum drive. I'd recommend the RAID 10 array, just because it has more redundancy (it doesn't need the performance), but it would work either way.

    Brian

  • Brian Clark (10/1/2007)


    You could carve out a piece of your RAID 10 arrays or the RAID 5 array for the quorum drive. I'd recommend the RAID 10 array, just because it has more redundancy (it doesn't need the performance), but it would work either way.

    Brian

    Thank you Brian. This is what we do currently.
