April 8, 2010 at 12:57 pm
johnzabroski (4/8/2010)
That strategy is working out well and is giving us some headroom to plan for our next round of servers, which we can properly size with Fusion-IO in mind. They have also announced an updated driver, 1.3, that is due out soon. It reduces memory consumption by as much as 10-fold over the current driver. With that and a 4KB block size, memory pressure should be a thing of the past for them.
What is your source for "it reduces memory consumption by as much as 10-fold over the current driver"? Are you trusting the vendor to deliver, or have you beta-tested the driver to confirm their statements are true?
I'm testing a beta version now.
April 8, 2010 at 1:00 pm
Just to be clear, does that mean you will post a benchmark confirming the 10x increase reported by the vendor? I can understand if the vendor has license restrictions on the beta driver prohibiting benchmarking during the beta testing period, and what not... I just want a clear description of the exact status of this order of magnitude performance increase.
April 8, 2010 at 1:03 pm
johnzabroski (4/8/2010)
Just to be clear, does that mean you will post a benchmark confirming the 10x increase reported by the vendor? I can understand if the vendor has license restrictions on the beta driver prohibiting benchmarking during the beta testing period, and what not... I just want a clear description of the exact status of this order of magnitude performance increase.
I will be posting a follow-up. This is a 10x decrease in memory usage, not a 10x speedup. When we get closer to the RTM, I'm sure Fusion-IO will be fine with me updating the article to reflect the new firmware/driver changes.
April 8, 2010 at 2:36 pm
Has anyone tested the Intel X25-M SATA 160GB 2.5" internal SSDs with SQL Server?
April 8, 2010 at 4:32 pm
I actually use four Intel 160GB X25-M drives in a RAID 10 for our production SQL Server. They simply fly; I haven't seen a wait time on a read or write greater than 2ms. I wanted to just use RAID 0, but I couldn't bring myself to do that on a production server.
To decide on a replacement schedule for the SSDs, I used:
Amount of space I can use (space): 320GB (4 x 160GB / 2 for mirroring)
Average amount of writes per day (writeRate): 32GB of writes per day
Average life expectancy of an MLC chip (life): 1,000 writes (I like to be conservative; other documentation shows 10,000 writes)
To figure out how long they will last: (life x space) / writeRate
This means (1,000 x 320GB) / 32GB per day = 10,000 days, or about 27 years (a quick sketch of this calculation follows below).
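A minimal back-of-the-envelope version of that estimate in Python, using the figures above (the 1,000-cycle endurance and 32GB/day write rate are the poster's conservative assumptions, not measured values):

# Rough SSD endurance estimate: days until the usable space has been
# rewritten 'life_cycles' times at the current daily write rate.
space_gb = 320        # usable capacity: 4 x 160GB in RAID 10 (mirrored)
write_rate_gb = 32    # average writes per day
life_cycles = 1000    # conservative MLC write/erase endurance

days = (life_cycles * space_gb) / write_rate_gb   # 10,000 days
years = days / 365.25                             # about 27 years
print(f"{days:.0f} days, about {years:.0f} years")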
However, since technology is always changing, I plan on replacing one drive every 9 months. The next drives I get will have garbage collection, like the Vertex or Summit series. TRIM is not supported under RAID, and after about half a year my drives are going to start to slow down on writes, because Intel doesn't clean up deleted sectors until they need to be written again, which causes roughly a 3x penalty on writes.
April 8, 2010 at 7:14 pm
Nicely done, Wes! Quite informative.
Kevin Martin
April 8, 2010 at 7:41 pm
newjcb (4/8/2010)
I actually use four Intel 160GB X25-M drives in a RAID 10 for our production SQL Server. They simply fly; I haven't seen a wait time on a read or write greater than 2ms. I wanted to just use RAID 0, but I couldn't bring myself to do that on a production server.
To decide on a replacement schedule for the SSDs, I used:
Amount of space I can use (space): 320GB (4 x 160GB / 2 for mirroring)
Average amount of writes per day (writeRate): 32GB of writes per day
Average life expectancy of an MLC chip (life): 1,000 writes (I like to be conservative; other documentation shows 10,000 writes)
To figure out how long they will last: (life x space) / writeRate
This means (1,000 x 320GB) / 32GB per day = 10,000 days, or about 27 years
However, since technology is always changing, I plan on replacing one drive every 9 months. The next drives I get will have garbage collection, like the Vertex or Summit series. TRIM is not supported under RAID, and after about half a year my drives are going to start to slow down on writes, because Intel doesn't clean up deleted sectors until they need to be written again, which causes roughly a 3x penalty on writes.
With the Intel drives, did you see the same memory overhead as described by Wes in this article?
I'm trying to determine the scope of this additional memory requirement. In other words, is it an issue for all SSDs, only for SSDs in a PCIe configuration, or only specific to the Fusion Duo card?
April 8, 2010 at 10:01 pm
Kevin Martin (4/8/2010)
Nicely done, Wes! Quite informative.
Kevin Martin
Thanks Kevin!
April 8, 2010 at 10:03 pm
With the Intel drives, did you see the same memory overhead as described by Wes in this article?
I'm trying to determine the scope of this additional memory requirement. In other words, is it an issue for all SSDs, only for SSDs in a PCIe configuration, or only specific to the Fusion Duo card?
No, you won't. The memory issue is specific to the Fusion-IO.
April 8, 2010 at 10:31 pm
Personally, if I were going to do it again, I would get five OCZ Summit 128GB drives and do a RAID 5. Don't forget how important a good RAID controller is. The OCZ Summits have 128MB of cache and built-in GC. I have found them to be reliable in my desktop, and performance always stays up. With 500GB over 5 drives I'd be happy. I think the speed of the SSDs combined with a smart RAID controller makes RAID 5 a great option, even for databases that are write-heavy. If you only let the cache be used for writes, you should keep quite a bit of throughput while maximizing your investment in the SSDs.
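A quick capacity check on that layout (just a sketch; RAID 5 gives up one drive's worth of space to parity):

# Usable capacity of a 5-drive RAID 5 set of 128GB SSDs.
drives = 5
drive_gb = 128
usable_gb = (drives - 1) * drive_gb   # one drive's worth goes to parity
print(usable_gb)                      # 512GB, roughly the "500GB over 5 drives" above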
April 9, 2010 at 1:22 am
This is the reply I received from my contact at Fusion-IO:
"The sector size can be changed between 512(default) and 4k (4096) (not 8k). I think this is only supported in Win 2008 as when I tried it in 2003 Windows would not recognise the drives.
It is currently a registry change and a manual low level format however in the next driver version it is going to be part of the standard GUI. Formatting to 4096 is better for SQL. However some application still cannot work with the larger sizes.
This also means the file system format has to be at least 4k however you could formation to 8,16,32 or 64k. The low level format at 4k is mainly beneficial for memory usage and may give some performance gains but it would be configuration dependant."
April 9, 2010 at 7:43 am
ian-707364 (4/9/2010)
This is the reply I received from my contact at Fusion-IO:
"The sector size can be changed between 512 (default) and 4K (4096), not 8K. I think this is only supported in Windows 2008, as when I tried it in 2003 Windows would not recognise the drives.
It is currently a registry change and a manual low-level format; however, in the next driver version it is going to be part of the standard GUI. Formatting to 4096 is better for SQL, but some applications still cannot work with the larger sizes.
This also means the file system format has to be at least 4K, though you could format to 8, 16, 32 or 64K. The low-level format at 4K is mainly beneficial for memory usage and may give some performance gains, but it would be configuration dependent."
In the 1.2.7 driver there is a registry setting to change the sector size; in the 2.0 driver it is in the GUI. Last time I looked you could go to 8K, but NTFS and Windows don't support it. Some Linux file systems do, though. The registry change just cuts down on memory usage by forcing a larger minimum write size.
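To illustrate why a larger sector size cuts host-memory usage, here is a rough sketch. It assumes, purely for illustration, that the driver keeps one mapping entry per logical sector in host RAM; the 8-bytes-per-entry figure and the 320GB capacity are hypothetical examples, not published Fusion-IO numbers:

# Hypothetical per-sector mapping table size vs. configured sector size.
def mapping_table_mb(capacity_gb, sector_bytes, bytes_per_entry=8):
    # Assume one entry per logical sector is held in host RAM.
    sectors = capacity_gb * 1024**3 // sector_bytes
    return sectors * bytes_per_entry / 1024**2

for sector in (512, 4096):
    print(sector, "byte sectors ->", round(mapping_table_mb(320, sector)), "MB")
# Moving from 512-byte to 4KB sectors means tracking 8x fewer sectors,
# so the table shrinks by roughly the same factor.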
April 9, 2010 at 10:14 am
Hi Kim,
Free performance tip: a long time ago I discovered that if you format your in-memory RAM drive with 64KB NTFS clusters instead of the default 4KB block size (run chkdsk to check!) before placing tempdb on it, the throughput triples 😉
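If you prefer not to dig through chkdsk output, one way to check a volume's allocation unit size programmatically is the Windows GetDiskFreeSpaceW API; a minimal Python sketch (the R:\ drive letter is just a placeholder for the RAM drive):

import ctypes

def cluster_size_bytes(root_path="R:\\"):
    # Returns the NTFS allocation unit (cluster) size for the given volume.
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root_path),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

print(cluster_size_bytes())   # expect 65536 after a 64KB format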
April 9, 2010 at 10:28 am
Besides these entry-level plug-in SSD cards, for I/O-demanding SQL production environments I would recommend products like the DSI3600 units (http://www.dynamicsolutions.com/dsi3600):
- Up to 5TB of storage capacity
- 3 gigabytes/sec of bandwidth and up to 250,000 IOPS each!
You can connect them to regular 4Gbit HBA ports, with up to 8 fibers each, using Windows MPIO for multipath if you want. I blogged about the test results of this high-end SQL SSD device some months ago: http://henkvandervalk.com/dsi-3500-ramsan-500-solid-state-storage-to-the-test
...and, what's even more fun, how to increase SQL bulk insert speed, SQL table scan speed and SQL backup speed!
Regards,
Henk
April 9, 2010 at 10:31 am
reply to ian-707364:
"A solid state SAN won't come close - maximum bandwidth per slot is 4Gbps (500MB/s). Plus you have the added latency from the fabric switches, etc. But the real SAN-SSD killer is cost - it's too prohibitive for a dedicated IO subsystem. "
Ian, this isn't true. Per DSI or RamSan unit you can connect up to 8 fibers!
That equals 250,000 random 8KB IOs per second (actually mine delivers up to 257,000...) and 3,100 MB/sec on 64KB random IOs... what's in a number 🙂
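Some quick arithmetic behind that rebuttal (a sketch using the 4Gbps ≈ 500MB/s-per-port figure quoted above and the 8-fiber multipath setup):

# Aggregate multipath bandwidth vs. the quoted device numbers.
fibers = 8
mb_s_per_fiber = 500                       # 4Gbit FC port, per the quote above
fabric_mb_s = fibers * mb_s_per_fiber      # ~4,000 MB/s aggregate

iops_8k = 250_000
iops_mb_s = iops_8k * 8 / 1024             # ~1,953 MB/s at 8KB per IO

print(fabric_mb_s, round(iops_mb_s))
# Both the ~2GB/s of 8KB IOPS and the 3,100MB/s of 64KB IO fit within
# eight 4Gbit paths, so a single slot's 500MB/s is not the ceiling.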