November 22, 2012 at 6:42 pm
Excellent info....
thanks
GAJ
Gregory A Jackson MBA, CSM
November 22, 2012 at 11:51 pm
It appears that this article has been changed (or at least republished). Perry Whittle's comment should have been taken into consideration: the difference between RAID 1+0 and 0+1 is indeed not academic. RAID 1+0 is a stripe of mirrors: if one disk fails, that mirror has only one disk left, but every other mirror can still lose a disk each, and your RAID array is still available. RAID 0+1 is a mirror of stripes: if one disk fails, that whole stripe fails, and you're left with a single stripe. If one of the disks in this remaining stripe fails, your RAID array is not available anymore.
As far as I understand it, the physical storage layouts of RAID 1+0 and 0+1 are identical; the difference lies in how the data are used logically (or internally, if you wish) by the RAID controller.
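To make that concrete, here's a minimal Monte Carlo sketch in Python (my own illustration, not from the article). It assumes an 8-disk array where RAID 1+0 mirrors the pairs (0,1), (2,3), (4,5), (6,7) and RAID 0+1 mirrors two 4-disk stripes, fails two random disks, and checks whether each layout survives:

```python
import random

N_DISKS = 8       # 4 mirrored pairs (RAID 1+0) or 2 four-disk stripes (RAID 0+1)
TRIALS = 100_000

def raid10_survives(failed):
    # Stripe of mirrors: pairs (0,1), (2,3), ... - the array dies only if
    # BOTH disks of the same mirror fail.
    return all(not ({2 * i, 2 * i + 1} <= failed) for i in range(N_DISKS // 2))

def raid01_survives(failed):
    # Mirror of stripes: disks 0..3 form stripe A, disks 4..7 form stripe B.
    # One failed disk kills its whole stripe, so the array survives only if
    # at least one stripe has NO failed disks.
    stripe_a, stripe_b = {0, 1, 2, 3}, {4, 5, 6, 7}
    return not (failed & stripe_a) or not (failed & stripe_b)

surv10 = surv01 = 0
for _ in range(TRIALS):
    failed = set(random.sample(range(N_DISKS), 2))  # two random disk failures
    surv10 += raid10_survives(failed)
    surv01 += raid01_survives(failed)

print(f"RAID 1+0 survives two failures: {surv10 / TRIALS:.1%}")  # ~85.7%
print(f"RAID 0+1 survives two failures: {surv01 / TRIALS:.1%}")  # ~42.9%
```

With 8 disks the exact figures are 6/7 versus 3/7: identical physical layout, very different odds of surviving a second failure.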
November 23, 2012 at 1:19 pm
btd (5/1/2012)
I do agree a well put together document BUT I wish we could get away from the interpretation of RAID. 'inexpensive discs'... The 'I' certainly did not mean inexpensive discs when RAID first came out way back when...'Independent' ... why does the computer world keep changing things when it does not need to, we have plenty of change without it...... and as for 'tables' instead of files.........
My personal recollection is that the "I" did stand for inexpensive. Prior to the days of cheap, large, fast hard disks, one of the key metrics was MTBF (Mean Time Between Failures). In those days a lot of data was stored "off-line" on tape. When a job ran, the operator would mount the tape and the data would be transferred to disk as working storage (sort of a cache).
Disk drive failures on early mainframe and minicomputers were fairly common. Disks with high MTBF were very expensive. Early RAID configurations used the relatively inexpensive Winchester-architecture drives to construct fault-tolerant solutions without breaking the bank on expensive high-MTBF drives. At a time when CPU cycles were measured in milliseconds, the added disk write penalties did not seem so bad.
November 23, 2012 at 5:11 pm
T.C.
I'm not sure I understand your question completely... care to rephrase or expand?
GAJ
Gregory A Jackson MBA, CSM
November 24, 2012 at 3:09 am
The differences between RAID 1+0 and 0+1 are not academic. Let's just review that again, not academic!!!
If you do decide to deploy a 0+1 array for your mission-critical data and you're unlucky enough to encounter a drive failure in each stripe, you'll realise just how important the differences are.
As you're sitting there typing your resume you'll have time to reflect on the storage decision you made 😉
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 24, 2012 at 8:43 am
Perry Whittle (11/24/2012)
The differences between RAID 1+0 and 0+1 are not academic. Let's just review that again, not academic!!! If you do decide to deploy a 0+1 array for your mission-critical data and you're unlucky enough to encounter a drive failure in each stripe, you'll realise just how important the differences are.
As you're sitting there typing your resume you'll have time to reflect on the storage decision you made 😉
+1
November 24, 2012 at 5:22 pm
This article is just being republished... sounds like I need to update it and publish it again.
Amazing what you learn from others, eh?
GAJ
Gregory A Jackson MBA, CSM
December 24, 2012 at 11:54 am
Nice article.
May 16, 2014 at 6:37 am
I see serious problems with taking a RAID chart from IT guys and accepting it as being applicable in any way to SQL Server. SQL Server requires almost the exact opposite of what IT likes to implement for their file servers. The RAID chart says RAID 1 (mirror) is only "good" at both reading and writing, when RAID 1 is actually the fastest for general SQL Server apps. What I've found is that SQL Server needs a setup optimized for 8K random reads and writes, and you can and should ignore virtually all other disk ratings. An SSD will perform between 10x and 100x better than mechanical disks. Hands down, RAID 1 will perform the best for SQL Server, but most of my customers have chosen the worst setup, RAID 5, and then blame me when their array is 4x slower than my laptop with no RAID at all.
Actually, having many RAID 1 arrays is the best solution. Having many separate I/O channels will dramatically speed up SQL Server.
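For what it's worth, the standard RAID write-penalty arithmetic backs this up. Here's a back-of-the-envelope sketch in Python, where the 180 IOPS per spindle is an assumed figure for a 15K drive, not a measurement: RAID 1/10 costs 2 physical IOs per random write (one per mirror side), while RAID 5 costs 4 (read data, read parity, write data, write parity).

```python
# Rough random-write throughput comparison using the classic RAID write penalties.
# SPINDLE_IOPS is an assumed per-disk figure for a 15K drive, not a measurement.
SPINDLE_IOPS = 180
N = 4                 # disks in the array

raid1_write_iops = (N * SPINDLE_IOPS) / 2   # 2 physical IOs per write (mirror)
raid5_write_iops = (N * SPINDLE_IOPS) / 4   # 4 physical IOs per write (parity RMW)

print(f"RAID 1/10 random-write IOPS ({N} disks): {raid1_write_iops:.0f}")  # 360
print(f"RAID 5   random-write IOPS ({N} disks): {raid5_write_iops:.0f}")  # 180
```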
May 16, 2014 at 6:59 am
Only getting a couple of hundred IOps per 15K drive isn't really accurate because in reality we don't do a fully random workload across the entire disk.
I wrote the blog post linked here a few weeks ago because I got fed up with SAN / storage engineers insisting I couldn't possibly get the number of IOps I am getting out of the kit at one of my clients: http://dataidol.com/tonyrogerson/2014/04/07/maximum-iops-for-a-10k-or-15k-sas-hard-disk-drive-is-not-170/
For a 100% read workload of 64KiB IOs against a 20GiB file on a pair of 300GB 15K disks in RAID 1 with a queue depth of 32, I easily get 2,281 IOps. For 8KiB I get 14K!
It really does depend on what you are doing. If you are doing sequential scans and you've performed the correct defrag maintenance on your tables and set them up correctly, the disk head doesn't need to move too far, so IOps go up dramatically.
Hope that helps.
May 16, 2014 at 8:15 am
I might recommend looking at my post earlier in this thread, as I list a table of SQLIO results from actual systems in a variety of RAID setups, on both spindle and SSD disks, over different IO sizes and IO queue depths.
If you want to know the difference between RAID x and RAID y on your particular setup, I would very much recommend running an exhaustive SQLIO set over the metrics appropriate to your workload; it's not unknown for a particular setup to have an odd quirk for a specific data transfer type at a given RAID level (and/or stripe size). Keep your firmware, drivers, SAN controllers, etc. updated.
In some older hardware, I sometimes saw a cap on writes that was very unexpected to me.
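To sketch what such a sweep might look like, here is a minimal Python wrapper (my illustration, not from the post above) that shells out to sqlio.exe over a grid of block sizes and queue depths. The test-file path and the parameter grid are placeholders to adapt to your own workload; the switches used (-kR read, -frandom, -b block size in KB, -o outstanding IOs, -t threads, -s duration, -LS latency stats) are standard SQLIO options.

```python
# Minimal SQLIO parameter sweep, assuming sqlio.exe is on the PATH and the
# test file already exists. Adapt sizes, depths, and duration to your workload.
import subprocess

BLOCK_SIZES_KB = [8, 64, 256]    # IO sizes to test
QUEUE_DEPTHS = [1, 8, 32]        # outstanding IOs per thread
TEST_FILE = r"C:\test\testfile.dat"   # hypothetical path

for b in BLOCK_SIZES_KB:
    for o in QUEUE_DEPTHS:
        cmd = [
            "sqlio.exe",
            "-kR",          # read test (-kW for writes)
            "-frandom",     # random IO (-fsequential for scan patterns)
            f"-b{b}",       # block size in KB
            f"-o{o}",       # outstanding IOs (queue depth)
            "-t4",          # worker threads
            "-s60",         # run for 60 seconds
            "-LS",          # capture latency statistics
            TEST_FILE,
        ]
        print(f"--- {b}KB random read, queue depth {o} ---")
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)
```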
May 16, 2014 at 10:14 am
Hello,
Regarding RAID configuration, you also have to factor in the time/cost of a rebuild.
If your smallest unit is one mirrored pair, the controller only needs to read and write the content of one drive to rebuild the mirror.
If you have, let's say, a 4-drive RAID 5 configuration, the controller has to read the content of 3 disks to rebuild the array.
Now consider a SAN box with a RAID 5 (or RAID 6) group of, say, 20 drives...
Reading the content of 19 drives to rebuild the array - how long will the array be degraded? And when will the next drive fail?
Best regards
karl
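A quick back-of-the-envelope version of Karl's point in Python. The 300GB drive size and the 100MB/s sustained rebuild read rate are assumed round numbers, and this ignores the write side and controller contention, but it shows how the read volume scales with the size of the parity group:

```python
# How much data must be READ to rebuild one failed drive, and roughly how long
# that takes. Drive size and throughput are illustrative assumptions.
DRIVE_GB = 300
REBUILD_MBPS = 100      # assumed sustained read rate during rebuild

def rebuild_hours(drives_to_read):
    total_mb = drives_to_read * DRIVE_GB * 1024
    return total_mb / REBUILD_MBPS / 3600

print(f"Mirror (read 1 drive):      {rebuild_hours(1):.1f} h")   # ~0.9 h
print(f"4-drive RAID 5 (read 3):    {rebuild_hours(3):.1f} h")   # ~2.6 h
print(f"20-drive RAID 5 (read 19):  {rebuild_hours(19):.1f} h")  # ~16.2 h
```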
May 29, 2014 at 12:09 pm
Why is the formula to determine how long a single IO takes = Seek Time + Rotational Latency?
Wouldn't you be seeking at the same time that you are rotating, thereby making it the max of one or the other?
May 29, 2014 at 12:20 pm
No.
Generally speaking, the drive first seeks - the actuator moves the head to the correct track - and only then waits for the target sector to rotate under the head. Because the two steps happen in sequence rather than in parallel, the times add rather than overlap.
GAJ
Gregory A Jackson MBA, CSM
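For anyone following along, here is that arithmetic as a minimal Python sketch. The 3.8ms average seek time is an assumed datasheet-style figure for a 15K drive; plugging it in, the sum-not-max model lands right around the oft-quoted ~170 IOPS figure that Tony's linked post pushes back on for less-than-fully-random workloads:

```python
# Theoretical random IOPS for a single spindle: seek and rotational latency
# happen in sequence, so their times ADD. Seek time is an assumed figure.
RPM = 15_000
avg_seek_ms = 3.8                         # assumed average seek (datasheet-style)
avg_rot_latency_ms = 0.5 * 60_000 / RPM   # half a revolution on average = 2.0 ms

service_time_ms = avg_seek_ms + avg_rot_latency_ms  # seek, THEN wait for sector
iops = 1_000 / service_time_ms

print(f"Rotational latency:  {avg_rot_latency_ms:.1f} ms")
print(f"Time per random IO:  {service_time_ms:.1f} ms")
print(f"Theoretical IOPS:    {iops:.0f}")   # ~172
```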