November 27, 2013 at 12:23 pm
We are finally setting up everything for a massive migration of all of our 2005 systems to 2012, with a "rollover" upgrade to 2014 when it comes out (likely at SP1).
As part of this massive upgrade, we're also upgrading the data center with mostly new hardware. The folks in NetOps are considering either Fusion-io or Nimble storage. Here are the main parts of each...
Fusion-io
ioControl n5-100 Storage System, 1,570 GB Solid-State, 32 TB Disk, (4) 10GbE and (8) 1 GbE Data Ports, (2) 1GbE Management Ports, ioControl Operating Environment
Nimble
Nimble CS460G 36TB Raw, 1.2TB Flash Cache, 2x10GigE, Hi Perf Dual Controllers
I would imagine that a number of you good folks have done such a thing in the last year or so, so I thought I'd ask... [font="Arial Black"]are there any "oolies" or "gotchas" that you'd be willing to share about either of these systems, especially in reference to SQL Server 2012 and 2014? Any recommendation of one over the other?[/font]
Thanks an awful lot for any info on this. This is all new for our entire team and we can certainly use any advice from those that have been through it before.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 2, 2013 at 12:44 am
Any thoughts on this, folks? Thanks.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 2, 2013 at 12:00 pm
A few days ago I was asked to test the Nimble storage for SQL Server, as we are planning to buy Nimble in large quantities.
Here are the SQLIO test results on the CS460 for reads and writes against a ~25 GB file.
using system counter for latency timings, 14318180 counts per second
8 threads writing for 120 secs to file D:\TestFile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 8 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 24576 MB for file: D:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 11492.27
MBs/sec: 89.78
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 5
Max_Latency(ms): 219
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 1 9 60 14 2 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 3
C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS D:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads reading for 120 secs from file D:\TestFile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 8 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 24576 MB for file: D:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 4950.88
MBs/sec: 38.67
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 12
Max_Latency(ms): 771
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 1 4 9 13 12 10 7 5 4 3 3 2 2 2 2 2 1 1 1 1 1 1 11
C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS D:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads writing for 120 secs to file D:\TestFile.dat
using 64KB sequential IOs
enabling multiple I/Os per thread with 8 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 24576 MB for file: D:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3572.72
MBs/sec: 223.29
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 17
Max_Latency(ms): 95
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 0 0 0 1 3 4 6 6 6 6 5 5 5 7 4 3 2 2 2 1 1 1 30
C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS D:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads reading for 120 secs from file D:\TestFile.dat
using 64KB sequential IOs
enabling multiple I/Os per thread with 8 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 24576 MB for file: D:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 6886.41
MBs/sec: 430.40
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 8
Max_Latency(ms): 475
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 1 2 11 7 5 7 13 26 16 5 3 1 1 0 0 0 0 0 0 0 0 0 1
The IOPS and MB/s are pretty good for our environment compared to our current NetApp storage.
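For anyone who wants to repeat these runs, the four command lines below should reproduce them. Note that the first one (8 KB random writes) is reconstructed from the parameters echoed in its output above, since its command line didn't survive the paste; it assumes the same default SQLIO install path and an existing D:\TestFile.dat.
C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS D:\TestFile.dat
C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS D:\TestFile.dat
C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS D:\TestFile.dat
C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS D:\TestFile.dat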
December 2, 2013 at 1:21 pm
@SQLFRNDZ,
Thanks for the leg up! That'll help a lot.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 3, 2013 at 9:16 am
The two 10 GbE ports make the Nimble more attractive in my eyes.
December 4, 2013 at 7:29 am
alexandar_narayan (12/3/2013)
The two 10 GbE ports make the Nimble more attractive in my eyes.
Thanks, Alexandar.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 4, 2013 at 8:51 am
I think this thread is just about answered, but I wanted to add that I think a lot of the things in SQL 2014 are going to be really powerful, though I am still trying to learn more about it.
As for the flash array, what is the queue depth of most of your reads or writes? How will you be implementing it? As tempdb storage?
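If the typical queue depths aren't known yet, one quick way to sample them on the existing boxes, assuming the standard Windows PhysicalDisk counters, is something like:
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5 -sc 12
That samples the average disk queue length every 5 seconds for a minute; the per-volume counter instances (data vs. log) give a better picture than _Total.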
The best-case scenario is that the two competing vendors would each give you a unit to test; have IT or someone configure it, then run the queries you would normally run and see which is faster. But I understand that's not always possible.
edit: I misread; the Fusion-io box has 10 GbE also, plus dedicated management ports. It looks like a better fit, and IT will like it better.
December 4, 2013 at 4:10 pm
alexandar_narayan (12/4/2013)
I think this thread is just about answered, but I wanted to add that I think a lot of the things in SQL 2014 are going to be really powerful, though I am still trying to learn more about it. As for the flash array, what is the queue depth of most of your reads or writes? How will you be implementing it? As tempdb storage?
The best-case scenario is that the two competing vendors would each give you a unit to test; have IT or someone configure it, then run the queries you would normally run and see which is faster. But I understand that's not always possible.
edit: I misread; the Fusion-io box has 10 GbE also, plus dedicated management ports. It looks like a better fit, and IT will like it better.
Actually, it's nowhere near answered, especially since there's been so little about the comparison between the two products. The reason I'm asking this question is that I'm trying to avoid having to do the very testing that you suggest.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 16, 2013 at 11:04 am
Of course, there's nowhere near a comparison between the two: Fusion-io is a pure flash-based solution, while Nimble is a hybrid solution that uses flash only for caching and keeps the rest on traditional disks.
December 16, 2013 at 12:22 pm
@SQLFRNDZ (12/16/2013)
Of course, there's nowhere near a comparison between the two: Fusion-io is a pure flash-based solution, while Nimble is a hybrid solution that uses flash only for caching and keeps the rest on traditional disks.
Now that's the kind of info I was looking for. I've never worked with either before, and that really helps. Thanks.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 16, 2013 at 12:32 pm
Both are hybrid systems. They use the roughly 1.6 TB or 1.2 TB of flash as a cache. I'm not sure if it's a read-only cache or caches both reads and writes, but both are caches, hence the terabytes of spindle storage.
If nothing else, the Fusion-io box has much more connectivity: eight 1 GbE ports which can be bonded, ample 10 GbE ports, and more flash to cache with. That's the one I'd choose.
December 17, 2013 at 9:46 am
I've not worked with the Fusion array, but I've worked (and still work) with a lot of Nimbles. Quick bullet points from my experience:
1) Write performance is incredible (latency almost always <1 ms), as it acks the writes as soon as they hit memory (which has some redundancy in case of power loss), as I understand it. As an example, one of our more write-heavy clients on the Nimble regularly pushes 5-7k write IOPS, and the array stays <1 ms latency for the writes. (A quick way to check the write latency you're actually seeing is sketched after these points.)
2) The biggest pain point we've encountered was undersizing the cache. Since they only come with 12 SATA disks on the back end, if I/O gets pushed to disk the performance can take a nasty nosedive (physical-only CHECKDBs are especially brutal, although there's not much avoiding that unless you're on SQL Server 2012 or 2014, which it sounds like you are moving to).
The takeaway here is that it's probably best to get as much cache as you can.
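For checking the write latency SQL Server itself observes, a minimal sketch, assuming a trusted connection and a hypothetical database name, is:
sqlcmd -E -Q "SELECT DB_NAME(database_id) AS db, file_id, io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_ms FROM sys.dm_io_virtual_file_stats(NULL, NULL);"
sqlcmd -E -Q "DBCC CHECKDB (N'YourDb') WITH PHYSICAL_ONLY;"
The first command reports cumulative average write latency per database file straight from the DMV; the second is the physical-only consistency check from point 2 (YourDb is a placeholder).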
There are a few more unformed thoughts floating around in my head, but I'm only 1 cup of coffee into the day, so I may have more information once my brain is fully online.
Cheers!
December 17, 2013 at 10:16 pm
Thanks, folks. From the sounds of it, both products are great but have slightly different applications. I'll pass this on to the boys in NetOps. I really appreciate the information.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 28, 2013 at 4:53 pm
Just replying to the post that said the Fusion-io solution was all flash: Fusion-io offers both hybrid and all-flash arrays. ioControl is the Fusion-io hybrid array using both flash and disk drives (ioControl used to be "NexGen" before Fusion-io purchased them). ION Data Accelerator is the Fusion-io all-flash array. ioControl competes with Nimble in the hybrid space, whereas the ION Data Accelerator is geared for raw performance (think crazy high performance, even >1M IOPS) without as many storage services.
When you look at the ioControl architecture and features versus the Nimble hybrid array, ioControl is geared more for performance, using PCIe-connected eMLC flash (Fusion-io ioDrives) versus the cheaper non-PCIe Intel SSDs that Nimble uses. Nimble speeds data by sequentializing random writes in non-volatile memory and then writing that data as large block stripes across an array of disk drives, as well as writing sequentially to those cheaper SSDs. ioControl, on the other hand, uses higher-performing PCIe-attached flash to directly accelerate both reads and writes (it also does I/O dedupe) and allows you to set minimum IOPS and latency (performance) by application. I'm sure there are some things Nimble does that ioControl doesn't (Nimble does data dedupe versus ioControl's I/O dedupe), but from a performance perspective ioControl will definitely be faster. Either solution will be faster than comparable hybrid approaches from legacy vendors like NetApp or EMC.
Another add-on worth looking at is ioTurbine (Direct and Virtual), which provides a server-side caching solution (bare metal, and virtual for VMware environments) that will work with either ioControl or Nimble storage. I have a customer that deployed ioTurbine (a Fusion-io product) and saw ~50% of the load offloaded from their storage within a day.
December 29, 2013 at 11:42 am
chrishoward515 (12/28/2013)
Just replying to the post that said the Fusion-io solution was all flash: Fusion-io offers both hybrid and all-flash arrays. ioControl is the Fusion-io hybrid array using both flash and disk drives (ioControl used to be "NexGen" before Fusion-io purchased them). ION Data Accelerator is the Fusion-io all-flash array. ioControl competes with Nimble in the hybrid space, whereas the ION Data Accelerator is geared for raw performance (think crazy high performance, even >1M IOPS) without as many storage services. When you look at the ioControl architecture and features versus the Nimble hybrid array, ioControl is geared more for performance, using PCIe-connected eMLC flash (Fusion-io ioDrives) versus the cheaper non-PCIe Intel SSDs that Nimble uses. ioControl allows you to set minimum IOPS and latency (performance) by application, and from a performance perspective ioControl will definitely be faster. Either solution will be faster than comparable hybrid approaches from legacy vendors like NetApp or EMC. Another add-on worth looking at is ioTurbine (Direct and Virtual), which provides a server-side caching solution (bare metal, and virtual for VMware environments) that will work with either ioControl or Nimble storage. I have a customer that deployed ioTurbine (a Fusion-io product) and saw ~50% of the load offloaded from their storage within a day.
Thanks for the info, Chris. And, "Welcome Aboard".
--Jeff Moden
Change is inevitable... Change for the better is not.