For Day 19 of this series, I am going to briefly discuss hardware RAID controllers, also known as disk array controllers. Here is what Wikipedia has to say about RAID controllers:
A disk array controller is a device which manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, thus it is sometimes referred to as RAID controller. It also often provides additional disk cache.
Figure 1 shows a typical hardware RAID controller.
Figure 1: Typical Hardware RAID Controller
For database server use (with recent vintage servers), you usually have an embedded hardware RAID controller on the motherboard that is used for your internal SAS, SATA, or SCSI drives. It is pretty standard practice to have two internal drives in a RAID 1 array, controlled by the embedded RAID controller, that are used to host the operating system and the SQL Server binaries (for standalone SQL Server instances). This gives you redundancy, so the loss of a single drive does not take the server down.
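To make the RAID 1 idea concrete, here is a minimal, purely illustrative sketch (a toy model, not how a real controller is implemented) of the basic mirroring behavior: every write goes to both drives, and a read can be satisfied by either surviving drive, which is why a single drive failure does not lose data.

```python
# Toy sketch of RAID 1 (mirroring) behavior -- not a real controller.
# Two in-memory "drives" receive every write, and reads fall back to the
# surviving drive if one member of the mirror fails.

class Raid1Mirror:
    def __init__(self):
        self.drives = [{}, {}]          # two mirrored "drives" (block -> data)
        self.failed = [False, False]    # simulated drive failure flags

    def write(self, block, data):
        # A write is sent to both members of the mirror.
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                drive[block] = data

    def read(self, block):
        # A read can be served by any surviving member.
        for i, drive in enumerate(self.drives):
            if not self.failed[i] and block in drive:
                return drive[block]
        raise IOError("all mirror members have failed")

mirror = Raid1Mirror()
mirror.write(0, b"boot sector")
mirror.failed[0] = True        # simulate losing one drive
print(mirror.read(0))          # data is still available from the other member
```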
If you are using Direct Attached Storage (DAS), you will also have one or more (preferably at least two) hardware RAID controller cards that look similar to the one in Figure 1. These cards go into an available PCI-e expansion slot in your server, and are then connected by a relatively short cable to an external storage enclosure (such as the one you see in Figure 2).
Figure 2: Dell PowerVault MD1220 Direct Attached Storage Array
Each direct attached storage array will have anywhere from 14 to 24 drives. The RAID controller(s) are used to build and manage RAID arrays from these available drives, which are eventually presented to Windows as logical drives, usually with drive letters. For example, you could create a RAID 10 array with 16 drives and another RAID 10 array with eight drives from a single 24-drive direct attached storage array. These two RAID arrays would be presented to Windows and show up as, say, the L: drive and the R: drive.
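Once the controller presents the arrays to Windows, they look like any other volume. As a quick, hedged illustration (the L: and R: drives above are just hypothetical examples), you could confirm what Windows actually sees with a few lines of Python using the third-party psutil package:

```python
# Lists the logical drives Windows sees, along with size and free space.
# Requires the third-party psutil package (pip install psutil).
# The L: and R: drives mentioned above are hypothetical examples; this
# simply reports whatever volumes are actually presented to the OS.
import psutil

for part in psutil.disk_partitions():
    try:
        usage = psutil.disk_usage(part.mountpoint)
    except PermissionError:
        continue  # e.g., an empty optical drive
    print(f"{part.device:<6} {part.fstype:<6} "
          f"total={usage.total / 2**30:,.1f} GiB "
          f"free={usage.free / 2**30:,.1f} GiB")
```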
Enterprise-level RAID controllers usually have some cache memory on the card itself. This cache memory can be used to cache reads, to cache writes, or be split between the two. For SQL Server OLTP workloads, it is a standard best practice to devote your cache memory entirely to write caching. You can also choose between write-back and write-through cache policies for your controller cache. Write-back caching provides better performance, but there is a slight risk of losing data that is still in the cache and has not yet been written to disk if the server fails. That is why it is very important to have a battery-backed cache if you decide to use write-back caching.
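To illustrate why write-back caching is faster but riskier, here is a small, purely conceptual sketch (a toy model of the trade-off, not a real controller algorithm): with write-through, a write is not acknowledged until it is on disk; with write-back, a write is acknowledged as soon as it lands in cache and is flushed to disk later, so an unexpected power loss can lose whatever is still sitting in the cache unless a battery (or flash) preserves it.

```python
# Conceptual sketch of write-through vs. write-back caching.
# This is a toy model of the trade-off, not a real controller algorithm.

class ControllerCache:
    def __init__(self, write_back=True):
        self.write_back = write_back
        self.cache = {}       # pending writes held in controller cache
        self.disk = {}        # data that has actually reached the disks

    def write(self, block, data):
        if self.write_back:
            # Acknowledge immediately; the write only lands in cache for now.
            self.cache[block] = data
        else:
            # Write-through: not acknowledged until the data is on disk.
            self.disk[block] = data

    def flush(self):
        # The controller lazily destages cached writes to disk.
        self.disk.update(self.cache)
        self.cache.clear()

    def power_failure(self, battery_backed):
        # Without a battery-backed cache, pending writes are lost.
        if not battery_backed:
            self.cache.clear()

wb = ControllerCache(write_back=True)
wb.write(42, "committed transaction")
wb.power_failure(battery_backed=False)
wb.flush()
print(wb.disk.get(42))   # None -> the acknowledged write never reached disk
```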