December 28, 2006 at 8:06 am
We are having I/O issues with our current configuration and were originally looking to build a SAN solution. Due to price, they are opting for a cheaper NAS solution with iSCSI devices. Each has a head unit similar to a SAN's, and the head unit has an on-board cache with flash ROM. Has anyone had success, failure, or other issues with this type of setup?
TIA,
DAB
December 28, 2006 at 9:39 am
A NAS generally runs across Ethernet, as opposed to the fibre channel a storage network would typically use - you get slower speeds, and you may fight for bandwidth since your network (typically) won't be dedicated to storage access, so you could suffer timeouts or poor performance.
I personally wouldn't like to use a NAS with SQL Server. I believe MS may have certified some NAS devices for SQL Server - might be worth a look. NAS and "SAN" work differently, and, being pedantic, a SAN isn't a chunk of hardware - it's a fabric. You do know that you can use direct-attached FC storage, don't you? (You don't have to have a "SAN" as such.) Sun has a nice FC storage unit, starting around $10k.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 28, 2006 at 11:24 am
If you have I/O issues, the cheapest way to fix them is to add new drives and split the database.
Add multiple RAID1 volumes, break up the DB files across them, and make sure your log files are on separate physical disks. You may also want to partition your tables among the different volumes as well.
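As a rough sketch of why splitting across RAID1 pairs helps (the per-spindle rating and read/write mix below are assumed illustrative numbers, not figures from this thread):

```python
# Rough sketch (assumed numbers): estimate how many RAID1 pairs are
# needed to absorb a given random-I/O load.
SPINDLE_IOPS = 150    # assumed rating for a 15k RPM SCSI drive of that era
READ_FRACTION = 0.7   # assumed read/write mix

def raid1_pair_iops(read_fraction):
    # RAID1: reads can be served by either disk (2x spindles),
    # writes must hit both disks (1x effective).
    return SPINDLE_IOPS * (2 * read_fraction + 1 * (1 - read_fraction))

def pairs_needed(target_iops, read_fraction=READ_FRACTION):
    per_pair = raid1_pair_iops(read_fraction)
    return int(-(-target_iops // int(per_pair)))  # ceiling division

print(pairs_needed(800))  # 800 IOPS at a 70/30 mix -> 4 RAID1 pairs
```

Splitting the data files across those volumes lets each pair take a share of the load instead of one array absorbing all of it.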
December 28, 2006 at 12:49 pm
Thank you Colin. Our specific issue is that we are pushing over 800 I/O operations/sec with the current configuration, which is only rated at 400, and we cannot add additional I/O channels.
We purchased an EMC at my last job, so I'm familiar with its overall configuration. I am also aware of networked storage with SQL Server. I am trying to be the "glum" one now and point out vulnerabilities in the proposed solution.
As it's been explained to me, the "NAS" will have 3 different head units and communicate with the server via a dedicated TCP/IP port between the server and the head units. The controller will choose the TCP channel with the least traffic and write to that head unit, which is then mirrored across the other two head units. In addition, each head unit contains a ROM cache. In the event of a failure, the write is retained in the ROM and flushed to disk when the head unit is available again.
December 28, 2006 at 1:55 pm
The NAS of today is much better than the NAS of yesterday. That may sound corny, but it is very true. Like you, I have worked with SANs for some time, using several of the larger vendors' products, and I have to admit there are many "NAS" products out there that perform very well. It is probably better to refer to them as IP storage, but that verbiage depends on your vendor.
The important thing is to make sure that whatever solution you put in will meet the I/O and latency requirements of your environment. Based on the numbers you have posted above, I think you'll find that an IP storage solution can be more than adequate.
I used to lobby against SANs in a SQL environment, as SANs generally do not perform as well as direct attached storage in a single server environment. But who has only one server now? I used to lobby against installing NAS units for SQL environments, as they did not have the I/O, Latency or additional features of a SAN. But what IP storage solution doesn't match up to SANs these days?
Just be sure they (the bean counters) don't keep you from spending $$ where it needs to be spent. While not required, a segmented storage network is a good idea and will usually still cost less than FC switches. Network controllers that offload the iSCSI from the system are key. Not much point in increasing disk performance if you crawl the processor with protocol conversion overhead.
All that said, have fun! =) It's an exciting change to an environment.
-John
December 28, 2006 at 2:26 pm
Thanks John, that really helps.
December 29, 2006 at 6:16 am
We had a Dell PowerVault directly attached to our two-node SQL 2005 Enterprise cluster (active/passive). It seemed to work fine housing just the two live database files of our production DB (mdf, ldf). We have since moved those to a SAN and tried to use the PowerVault for backups -- it worked, but the backups took over four times as long to complete compared to when they were on the SAN.
I'm hoping our systems team can determine why that was the case.
Randy
December 29, 2006 at 9:47 am
Randy, we are currently attached to a poser vault. Unfortunately, that is where we are getting our I/O bottlenecks.
December 29, 2006 at 10:41 am
"poser vault" ... LOL
Unfortunately, PowerVault can refer to a few different products. The external, direct-attached SCSI options can provide very good performance, but as with any direct-attached storage, you'll probably want more than one on different controllers.
Now, if you are using the PowerVault "NAS" head units, well, in my experience "poser vault" also applies. Even if only file serving...
December 29, 2006 at 2:11 pm
I guess you'll have to explain "poser vault" to me. I don't get it 🙂 But I'm pretty sure ours is the direct-attached SCSI variety, and again, it worked for live DB files but was very slow for backups.
January 2, 2007 at 2:45 pm
800 I/Os per sec isn't that much - is that the average or the peak, and what's the proportion of reads to writes? You'd only need 4 spindles to support that; you could get away with 3 at a push. With 8k I/Os that's only about 6 MB/sec. A 2Gb HBA can handle burst rates of 200 MB/sec, and the average PCI bus five times that. You're probably just short of spindles.
I always recommend the SQL 2000 performance tuning guide as a starting point for understanding all this; it explains throughput, disks, I/Os, etc. Most external storage arrays will work well - I usually find they're just not configured very well, and the same goes for SANs.
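The back-of-envelope arithmetic above can be sketched as follows (the 8 KB I/O size and ~200 IOPS per-spindle rating are the post's working assumptions, not measured values):

```python
# Back-of-envelope check of the numbers in the post above.
iops = 800
io_size_kb = 8                                # assumed 8 KB I/Os
throughput_mb_s = iops * io_size_kb / 1024    # ~6 MB/sec of actual data

# A 2 Gb/s HBA can burst ~200 MB/s, so bandwidth isn't the bottleneck;
# the limit is seeks, i.e. how many random I/Os the spindles can serve.
spindle_iops = 200                            # assumed per-disk rating
spindles = int(-(-iops // spindle_iops))      # ceiling division

print(throughput_mb_s, spindles)  # 6.25 MB/s, 4 spindles
```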
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 2, 2007 at 2:50 pm
Sorry, you say your system is only rated for 400 I/Os and you're pushing 800 - that doesn't really make sense. The I/O limits come down to the physical disk and RAID limitations, and to be honest, if you had a 400 limit you wouldn't be able to push 800 anyway, so how would you know? If you're using RAID 5 then I could sort of understand it if your system is OLTP - sorry (again), but you're not really making a lot of sense.
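To illustrate the point that you can't "push" 800 through a 400-IOPS device (a toy simulation with assumed numbers, not a model of any real array):

```python
# Toy sketch: if an array saturates at 400 IOPS, offering 800 IOPS does
# not raise measured throughput -- it just grows the queue (and latency).
CAPACITY = 400   # assumed array limit, IOPS
OFFERED = 800    # assumed demand, IOPS

def simulate(seconds):
    queue = 0
    completed = 0
    for _ in range(seconds):
        queue += OFFERED                  # new requests arrive
        done = min(queue, CAPACITY)       # device can only serve this many
        completed += done
        queue -= done                     # the rest wait
    return completed / seconds, queue

throughput, backlog = simulate(10)
print(throughput, backlog)  # throughput pinned at 400; backlog keeps growing
```

So if monitoring really shows 800 completing per second, the array's effective limit is higher than its nominal 400 rating.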
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 3, 2007 at 9:09 am
Colin, you nailed it in your second-to-last post. We are short on spindles and we can't add more.