I’ve often described SQL Server to people new to databases as a data pump.
Just like a water pump, you have a limited capacity to move water in or out of a system, usually measured in gallons per hour.
If you want to upgrade your pumping system, it can be a twofold process: the physical pump and the size of the pipes.
Our database servers also have several pumps and pipes, and in general you are only as fast as your slowest pump or narrowest pipe: the hard drives.
To feed the other parts of the system, we have resorted to adding lots and lots of hard drives to get the IO read/write rates and MB/sec of throughput that a single server can consume.
Everyone is familiar with Moore's Law (often quoted, rarely understood), which, loosely applied, says CPU transistor counts double roughly every 24 months. Hard disks haven't come close to keeping up with that pace, performance-wise.
Until recently, hard drive capacity had been growing at almost the same rate, doubling in size around every 18 months (Kryder's Law). The problem isn't size, it's speed.
Let's compare the technology of what may have been some folks' first computer to the cutting edge of today.
|  | Circa 1981 | Today | Improvement |
|---|---|---|---|
| Capacity | 10MB | 1470MB | 147x |
| HDD seek time | 85ms/seek | 3.3ms/seek | 20x |
| IO/sec | 11.4 IO/sec | 303 IO/sec | 26x |
| HDD throughput | 5Mbit/sec | 1000Mbit/sec | 200x |
| CPU speed | 8088 @ 4.77MHz (0.33 MIPS) | Core i7 965 (18,322 MIPS) | 5521x |
*These are theoretical maximums; in the real world your mileage may vary.
I think you can see where this is going. I won't go any further down memory lane; let's just say that some things haven't advanced as fast as others. As capacity has increased, speed has been held back by the fact that hard disks are just that: spinning disks.
So, what does this little chart have to do with SSDs? I wanted you to get a feel for where the real problem lies. It isn't the capacity of hard drives, it's the ability to get to the data quickly. Seeks are the key. SSDs have finally crossed a boundary where they are cheap enough and fast enough to make it into the enterprise space at all levels.
Here is an SSD compared to today's best 15K.2 HDD from above.
|  | HDD | SSD | Improvement |
|---|---|---|---|
| Seek time | 3.3ms/seek | 85µs/seek | ~39x |
| IO/sec | 303 IO/sec | 35,000 IO/sec | 115x |
| Throughput | 1000Mbit/sec | 2500Mbit/sec | 2.5x |
So, in the last few years SSD has caught up to and passed HDD on the performance front by a large margin. And that is comparing a 2.5” HDD to a 2.5” SSD; the gap is even wider if you look at the new generation of SSDs that plug directly into the PCIe bus and bypass the drive cage and RAID controller altogether. HOT DOG! Now we are on track. SSD has allowed storage to scale much closer to the CPU than anything we have seen in a very long time.
Since this is a fairly new, emerging technology, I often see a lot of confused faces when talking about SSD. What is in the technology, and why has it now become cost effective to deploy instead of large RAID arrays?
Once you take out the spinning disks, the memory and IO controller march much more to the tune of Moore's Law than Kryder's, meaning cost goes down while capacity AND speed go up. Eventually there will be an intersection where some kind of solid state memory, maybe NAND, maybe not, will reach parity with spinning hard drives.
But, like hard drives, not all SSDs are on the same playing field; just because it has SSD printed on it doesn't make it a slam dunk to buy.
Let's take a look at two implementations of SSD based on MLC NAND. I know some of you will be asking: why not SLC? I'm doing this to get a better apples-to-apples comparison and to keep this, budget-wise, squarely in the realm of possibility.
The Intel X25-M, priced at $750.00 for 160GB in a 2.5” SATA 3.0Gb/s form factor, and the Fusion-io IoDrive Duo 640GB model, priced at $9,849.99, on a single PCIe x8 card.
| Drive | Capacity (GB) | Write Bandwidth | Read Bandwidth | Reads/sec | Writes/sec | Access Latency (seek time) | Wear Leveling (write-erase/day) | Cost per Unit | Cost per GB | Cost per IO (Reads) | Cost per IO (Writes) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| IoDrive Duo | 640 | 1000MB/sec | 1400MB/sec | 126,601 | 180,530 | 80µs | 5TB | $9,849.99 | $15.39 | $0.08 | $0.06 |
| X25-M | 160 | 70MB/sec | 250MB/sec | 35,000 | 3,300 | 85µs | 100GB* | $750.00 | $4.60 | $0.02 | $0.22 |
| Improvement | 4x | 14x | 5x | 4x | 55x | ~ | 10x | 13x | 4x | 4x | -4x |
* This is an estimate based on this article: http://techreport.com/articles.x/15433. Intel has stated the drive should be good for at least 1 petabyte of write operations, or 10,000 cycles.
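If you want to check the derived cost columns yourself, the math is nothing fancy. Here is a quick T-SQL scratch pad using the list prices and vendor IOPS figures quoted above; the table rounds a little differently, and your own quotes will move these numbers around:

```sql
-- Back-of-the-envelope math behind the cost columns above; the prices and
-- IOPS figures are the ones quoted in this post, so treat them as illustrative.
SELECT 'IoDrive Duo'     AS drive,
       9849.99 / 640     AS cost_per_gb,      -- ~$15.39
       9849.99 / 126601  AS cost_per_read_io  -- ~$0.08
UNION ALL
SELECT 'X25-M',
       750.00 / 160,                          -- ~$4.69
       750.00 / 35000;                        -- ~$0.02
```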
Both of these drives use similar approaches to achieve their speed and IO numbers: they break the NAND up into multiple channels, like a very small RAID array. This is an oversimplification, but it gives you an idea of how things are changing. It is almost like having a bunch of small drives crammed into a single physical drive shell with its own controller, a mini-array if you will.
So, not all drives are created equal. In Intel's defense, they don't position the X25-M as an enterprise drive; they would push you to their X25-E, an SLC-based NAND device that is more robust in every way. But keeping things equal is what I am after today.
To get the X25-M to the same performance levels, it could take as few as 4 drives or as many as 55, depending on which of the IoDrive Duo's IO numbers you are trying to match.
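Here is where that 4-to-55 spread comes from: divide each IoDrive Duo figure from the table by the matching X25-M figure and round up. This assumes perfectly linear scaling as you add drives, which a real RAID array will never quite deliver:

```sql
-- Roughly how many X25-M drives it would take to match one IoDrive Duo, per metric.
SELECT CEILING(126601.0 / 35000) AS to_match_read_iops,    -- 4 drives
       CEILING(1400.0   / 250)   AS to_match_read_mb_sec,  -- 6 drives
       CEILING(1000.0   / 70)    AS to_match_write_mb_sec, -- 15 drives
       CEILING(180530.0 / 3300)  AS to_match_write_iops;   -- 55 drives
```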
Wear leveling is my biggest concern with NAND-based SSDs. We are charting new waters and really won't know what the reliability numbers are until the market has aged another 24 to 36 months. You can measure your current system to see how much writing you actually do to disk and get a rough estimate of an SSD's longevity. Almost all of them are geared for 3 to 5 years of usability before they croak.
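If you want to put a number on how much you write today, SQL Server has been keeping score for you: sys.dm_io_virtual_file_stats tracks cumulative IO per file since the last restart. Here is a rough sketch that turns that into an approximate MB-written-per-day figure per database (it uses tempdb's creation date as a stand-in for instance start time, so treat the output as a ballpark, not gospel):

```sql
-- Approximate write volume per database since the last SQL Server restart.
DECLARE @days_up int;
SELECT @days_up = DATEDIFF(DAY, create_date, GETDATE())
FROM sys.databases WHERE name = 'tempdb';   -- tempdb is recreated at startup

SELECT DB_NAME(database_id)                  AS database_name,
       SUM(num_of_bytes_written) / 1048576.0 AS mb_written_since_restart,
       SUM(num_of_bytes_written) / 1048576.0
         / NULLIF(@days_up, 0)               AS approx_mb_written_per_day
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
ORDER BY approx_mb_written_per_day DESC;
```

Compare that daily figure against the drive's rated write-erase volume per day and you have a rough feel for whether you would burn through the NAND before the 3-to-5-year mark.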
At a minimum it would take 10 X25-M drives to equal the stated longevity of a single IoDrive Duo.
Things also start to level out once you factor in RAID controllers and external enclosures if you are going to overflow the internal bays on the server; that can easily add another $3,000.00 to $5,000.00 to the price. All of a sudden the IoDrive Duo starts looking more appealing by the minute.
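To put rough numbers on it, using this post's prices and assuming you need ten X25-Ms plus a controller and external enclosure to get into the same neighborhood on endurance and write IOPS:

```sql
-- Ballpark totals only; street prices, controller choice, and how many drives
-- you actually need will move these numbers around.
SELECT 10 * 750.00 + 3000.00 AS ten_x25m_plus_cheap_enclosure,   -- $10,500
       10 * 750.00 + 5000.00 AS ten_x25m_plus_pricey_enclosure,  -- $12,500
       9849.99               AS single_iodrive_duo;
```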
What does all this mean?
Not all SSD’s are created equal. Being constrained to SATA/SAS bus and drive form factors can also be a real limiting factor. If you break that mold the benefits are dramatic.
Even with Fusion-io’s cost per unit it, is still pretty cost effective in some situations like write heavy OLTP systems, over other solutions out there.
I didn’t even bother to touch on something like Texas Memory System’s RamSan devices at $275000.00 for 512GB of usable space in a 4U rack mount device the cost per GB or IO is just through the roof and hard to justify for 99% of most SQL Server users.
You need to look closely at the numbers, do in-house testing, and make sure you understand your current IO needs before you jump off and buy something like this. It may also be worth leveraging SSD in conjunction with your current storage, moving only the data that requires this level of performance, to keep cost down.
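One way to put hard numbers behind "understand your current IO needs" is to check how long your reads and writes are actually waiting on storage right now. This sketch uses the same file stats DMV as above; the figures are averages since the last restart, so treat them as a starting point, not a benchmark. The files with the worst sustained latencies are the data worth considering moving to SSD first:

```sql
-- Average storage latency per database file since the last restart.
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.name                                               AS logical_file_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads,  0) AS avg_read_latency_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_latency_ms DESC;
```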
If this article has shown you anything, it's that technology marches on. In the next 6 to 12 months there will be a few more choices on the market for large SSDs in the 512GB to 2TB range, from different manufacturers at a range of prices, making the move to SSD even easier.
In early April, Microsoft Research published a paper examining SSDs and enterprise workloads. They don't cover SQL Server explicitly, but they do talk about Exchange. Their conclusion is pretty much that SSD is too expensive to bother with right now. I agree and disagree: that was true several months ago; today, not so much.
The landscape has changed significantly since that paper was published and will continue to do so; I think we are on the verge of asking "why not use SSD?" instead of "do we really need it?"
With that said, please do your homework before settling on a vendor or SSD solution; it will pay dividends in not having to explain to your boss that the money invested was wasted dollars.
A little light reading for you:
SSD Primer
http://en.wikipedia.org/wiki/Solid-state_drive
James Hamilton’s Blog
http://perspectives.mvdirona.com/2009/04/12/WhereSSDsDontMakeSenseInServerApplications.aspx
-Wes