RAID System

  • What is the best way to install SQL Server on the Windows 2000 operating system using a RAID system?

    Is there any article out there?

  • Put quite simply:

    The more you distribute your IO across the available disk platters, the better, balanced against the redundancy you need.

    I like RAID 5 for my data and RAID 10 for my logs and tempdb, and the more platters I can get into my RAID drive arrays, the better. For example, I would take two 2GB drives over one 4GB drive, two 4GB drives over one 8GB drive, and so on (a rough sketch of the spindle math is at the end of this post).

    Also, channel distribution should be taken into consideration. Try to balance your IO across your channels if you have multiple channels in your adapters or multiple adapters.
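
    A back-of-the-envelope way to see the "more, smaller drives" point is to treat aggregate random IO as scaling with spindle count. The Python sketch below is only an illustration; the per-drive IOPS figure is an assumed ballpark for a drive of this era, not a measured value.

    ```python
    # Rough sketch of the "more spindles" argument, not a benchmark.
    # The per-drive figure is an assumption; plug in numbers for your hardware.
    PER_DRIVE_RANDOM_IOPS = 120  # assumed for a typical 10k RPM SCSI drive

    def array_random_iops(drive_count, per_drive_iops=PER_DRIVE_RANDOM_IOPS):
        """Aggregate random IOPS scales roughly with the number of spindles."""
        return drive_count * per_drive_iops

    # Same capacity, twice the heads and platters working in parallel:
    print("one 4GB drive :", array_random_iops(1), "IOPS")
    print("two 2GB drives:", array_random_iops(2), "IOPS")
    ```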

  • Overall, what you can do depends on your hardware.

    The best approach is to put the data and log files on separate drives from each other (log files have a high number of writes). Also, configure RAID 10 as opposed to RAID 5 for good redundancy and the highest IO throughput (keep in mind that RAID 5 has to calculate the parity and write it, so it is more resource intensive than RAID 10, which stripes the data and duplicates it to the mirrored RAID set).
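
    To put rough numbers on the parity cost (a sketch with assumed figures, not a measurement of any particular array): a small random write on RAID 5 classically turns into four back-end I/Os (read old data, read old parity, write new data, write new parity), while on RAID 10 it turns into two (one to each half of the mirror).

    ```python
    # Back-of-the-envelope RAID 5 vs RAID 10 random-write comparison.
    # Per-drive IOPS and drive count are assumptions; the write penalties
    # (4 for RAID 5, 2 for RAID 10) are the classic textbook figures.
    PER_DRIVE_IOPS = 120   # assumed
    DRIVES = 8             # same number of spindles in both configurations

    def effective_write_iops(drives, per_drive_iops, write_penalty):
        """Raw spindle IOPS divided by back-end I/Os per host write."""
        return drives * per_drive_iops / write_penalty

    print("RAID 5 :", effective_write_iops(DRIVES, PER_DRIVE_IOPS, 4), "writes/s")
    print("RAID 10:", effective_write_iops(DRIVES, PER_DRIVE_IOPS, 2), "writes/s")
    ```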

  • The difficulty is that it is hard to get smaller drives any more; 18GB is really the smallest.

    I go for

    RAID 1 for OS and other binaries

    RAID 1 for log (maybe RAID 10 if space requires)

    RAID 10 for data.

    At £200 for an 18GB disk, the performance benefit of RAID 10 over RAID 5 is huge.
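
    To make the trade-off explicit, here is a rough cost-per-usable-GB sketch at the quoted £200 per 18GB drive (the drive counts are illustrative assumptions). RAID 10 does cost more per usable GB than RAID 5; the argument is that at these prices the extra spend is small next to the write-performance gain.

    ```python
    # Usable capacity and cost per usable GB at £200 per 18GB drive.
    # Drive counts per array are assumed examples.
    DRIVE_GB = 18
    DRIVE_COST_GBP = 200

    def usable_gb(level, drives):
        if level == "RAID 1":
            return DRIVE_GB                  # two drives, one drive usable
        if level == "RAID 10":
            return drives // 2 * DRIVE_GB    # half the drives usable
        if level == "RAID 5":
            return (drives - 1) * DRIVE_GB   # one drive's worth lost to parity
        raise ValueError(level)

    for level, drives in [("RAID 1", 2), ("RAID 5", 6), ("RAID 10", 6)]:
        cost = drives * DRIVE_COST_GBP
        print(f"{level:7} x{drives}: {usable_gb(level, drives)}GB usable, "
              f"£{cost / usable_gb(level, drives):.0f} per usable GB")
    ```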

    Simon Sabin

    Co-author of SQL Server 2000 XML Distilled

    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088


    Simon Sabin
    SQL Server MVP

    http://sqlblogcasts.com/blogs/simons

  • I think the debate over 0+1 vs. 5 misses an even larger point.

    While your mileage may vary, I think you will find that having LOTS of cache in the RAID controllers will often make a lot more difference than the 0+1 vs. 5 question. Especially if you do write-back caching instead of write-through, it can provide a huge benefit.

    For write-back on a database that cannot lose data, be sure you have a mirrored cache and UPSes (at the very least UPSes). There is a risk to the data if you lose the RAID subsystem while the cache is dirty.

    But even if there's not a high locality of writes, the caching on write can offer a huge benefit in terms of smoothing I/O performance peaks. And if there is a high locality of writes, you gain even more.

    For a few hundred dollars spent on RAID controllers, you can often get huge benefits more easily than by tweaking RAID configurations.
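
    As a toy illustration of the write-back smoothing effect (all figures are assumptions, not specs for any controller): the cache absorbs a burst at memory speed and destages it to disk at a steady rate, so the host only feels the disks once the burst outruns the cache.

    ```python
    # Toy model of write-back cache absorbing bursty writes; numbers assumed.
    CACHE_MB = 128              # assumed controller cache size
    DISK_DESTAGE_MBPS = 20      # assumed sustained write rate of the array

    def burst_absorbed(burst_mb, dirty_mb=0):
        """Split a write burst into the part the cache absorbs and the rest."""
        room = max(CACHE_MB - dirty_mb, 0)
        absorbed = min(burst_mb, room)
        return absorbed, burst_mb - absorbed

    for burst in (64, 128, 256):
        fast, slow = burst_absorbed(burst)
        print(f"{burst}MB burst: {fast}MB at cache speed, "
              f"{slow}MB throttled to ~{DISK_DESTAGE_MBPS}MB/s")
    ```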

    But while we're on the subject of configurations, also pay attention to the NT cluster size and to RAID chunk sizes: make sure that you get an evenly divisible number of NT clusters in a chunk, and an evenly divisible number of SQL pages in an NT cluster.
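
    The divisibility checks are simple arithmetic; the sketch below uses assumed example sizes (the 8KB SQL Server page size is fixed, but the NTFS cluster and RAID chunk sizes are placeholders, not recommendations).

    ```python
    # Check that a RAID chunk holds a whole number of NT clusters and a
    # cluster holds a whole number of SQL pages.  Sizes are example values.
    SQL_PAGE_BYTES = 8 * 1024          # SQL Server page size (fixed)
    NTFS_CLUSTER_BYTES = 64 * 1024     # assumed NTFS allocation unit
    RAID_CHUNK_BYTES = 64 * 1024       # assumed RAID stripe unit ("chunk")

    def aligned(chunk, cluster, page=SQL_PAGE_BYTES):
        return chunk % cluster == 0 and cluster % page == 0

    print("chunk/cluster/page alignment OK:",
          aligned(RAID_CHUNK_BYTES, NTFS_CLUSTER_BYTES))
    ```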

  • Excellent points, Ferguson. Mylex has an Ultra Wide adapter with three channels and up to 128MB of cache (32MB standard), with a battery backup for the cache on the adapter to help eliminate the write-cache issue. Throw in a few 10,000 RPM drives and it flat out screams.

  • We've chosen a similar tactic. We don't do enough transactional processing to justify RAID 0+1, so we've stuck with RAID 5. The read/write cache on our array controllers is more than sufficient for the job, especially considering that we also buy a lot of memory, meaning >98% of the data usually resides in memory anyway.

    We're looking at a system coming down the pipe that may cause us to go to RAID 0+1, but we evaluate each server on a case-by-case basis, so we don't have any set rules other than buying the hardware to meet the requirements (including the next couple of years).

    K. Brian Kelley

    http://www.truthsolutions.com/

    Author: Start to Finish Guide to SQL Server Performance Monitoring

    http://www.netimpress.com/shop/product.asp?ProductID=NI-SQL1

    K. Brian Kelley
    @kbriankelley

  • All of the discussion above is based on the assumption that each RAID array is a separate one; the read/write calculation holds based on the read/write heads available. Normally in big systems we use some sort of storage box, for example a SAN box or NAS box.

    This box manages all of your hard disks and allows you to configure different RAID configurations within one SAN box. These storage boxes have a limited number of read/write heads, say 10. Now if you have 5 RAID arrays of 10 disks each and you are simultaneously accessing all of these arrays to their full potential, the storage box will use those 10 read/write heads to access 50 hard drives, and that leads to bottlenecks and bad performance.

    So if you are going with storage boxes, make sure you take read/write heads into account in your calculations as well.

    Cheers,

    Prakash

    Prakash Heda
    Lead DBA Team - www.sqlfeatures.com
    Video sessions on Performance Tuning and SQL 2012 HA

  • I don't understand, heda_p. None of the conversation has been under the assumption that each RAID array is a separate one. What made you think that?

    Quite a few of our RAID arrays are contained in an EMC rack. Drive heads are part of the disks themselves and are not affected by the enclosure or system. If, as in your example of 10 heads per disk, we were accessing 50 drives, we would be accessing 500 read/write heads on distinct platters. The very idea of RAID, and of adding drives to an array, is based on including more heads and platters to read or write more data in parallel at the same time.

    Perhaps you are speaking of channels?

  • quote:

    Perhaps you are speaking of channels?

    Channels are indeed something worthy of note, in relation to caching as well.

    We just put in some Dell PV220s. We split them so that half the shelves were on one channel and half on the other -- this requires twice the RAID controllers, but gives you twice the paths to disk as well, and more parallelism.

    We actually went one step further and bought SEPARATE PERC3 RAID controllers for them, not (necessarily) expecting more performance per channel by leaving one channel unused (I think they come with two regardless), but because we now had twice the cache covering the same number of drives. While a shared cache of twice the size would be better, once you max out these smallish controllers, that's it. Buying more controllers is not very expensive and then doubles everything (except perhaps PCI bus bandwidth, which you might then run short of).

    Salesmen tend to configure expensive stuff, but they also tend to think of controllers as plumbing -- if there are enough slots to hook everything up, that's all you need. With RAID controllers, hooking the maximum number of disks up to each channel, and to all of the channels, may make the RAID controller itself the bottleneck.
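
    A toy "slowest link" model makes the point (every figure below is an assumed ballpark, not a spec for any particular hardware): once the spindles on a channel can deliver more than the channel, and the channels more than the controller, adding drives stops helping and adding controllers starts to.

    ```python
    # Toy bottleneck model: usable throughput is the minimum of the links.
    # All numbers are assumptions chosen to be era-plausible, not specs.
    DRIVE_MBPS = 40            # assumed sustained throughput per drive
    DRIVES_PER_CHANNEL = 14    # assumed fully loaded channel
    CHANNEL_MBPS = 160         # Ultra160 SCSI channel bandwidth
    CONTROLLER_MBPS = 200      # assumed limit of one RAID controller
    PCI_BUS_MBPS = 266         # 64-bit/33MHz PCI bus, shared

    def array_throughput(channels):
        disks = DRIVES_PER_CHANNEL * channels * DRIVE_MBPS
        return min(disks, CHANNEL_MBPS * channels, CONTROLLER_MBPS, PCI_BUS_MBPS)

    for channels in (1, 2):
        print(f"{channels} channel(s) on one controller: "
              f"~{array_throughput(channels)} MB/s usable")
    ```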
