August 31, 2010 at 3:14 am
Hello everyone.
I hope somebody can use their experience with SANs to help me answer a question. I have a SAN, and the LUNs stripe right across the whole rack of disks, so in other words I cannot dedicate 3 physical disks (or spindles) to data and 2 to logs.
So I was wondering if there is any point in separating the logs and the data on different logical drives for a given instance?
In relation to this, if I have several SQL instances running on a node, is there any performance difference in whether I store all the databases and logs from all the named instances on the same (albeit larger) LUN? Or would it still be worth me splitting up the data and logs from each named instance onto their own LUNs?
Apologies if I have not been very clear, I'll gladly supply more info if needed.
Thank you,
D.
August 31, 2010 at 10:50 pm
Is it not possible for you to reconfigure the disks at this stage?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
September 1, 2010 at 3:53 am
Hi Duran,
It is always recommended to put the data and log files on separate drives. Usually we put data files on RAID 5 or RAID 1, and log files on RAID 1+0 or RAID 1. Distributing the files across multiple drives helps avoid I/O issues down the line.
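For example (just a sketch; the drive letters, paths, and sizes here are placeholders for your own layout):

-- Hypothetical layout: data file on one logical drive (D:), log file on another (L:)
CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_Data,
      FILENAME = 'D:\SQLData\SalesDB_Data.mdf',
      SIZE = 10GB, FILEGROWTH = 1GB )
LOG ON
    ( NAME = SalesDB_Log,
      FILENAME = 'L:\SQLLogs\SalesDB_Log.ldf',
      SIZE = 2GB, FILEGROWTH = 512MB );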
-Sujeet
September 1, 2010 at 3:58 am
Duran (8/31/2010)
Hello everyone. I hope somebody can use their experience with SANs to help me answer a question. I have a SAN, and the LUNs stripe right across the whole rack of disks, so in other words I cannot dedicate 3 physical disks (or spindles) to data and 2 to logs.
So I was wondering if there is any point in separating the logs and the data on different logical drives for a given instance?
In relation to this, if I have several SQL instances running on a node, is there any performance difference in whether I store all the databases and logs from all the named instances on the same (albeit larger) LUN? Or would it still be worth me splitting up the data and logs from each named instance onto their own LUNs?
Apologies if I have not been very clear, I'll gladly supply more info if needed.
Thank you,
D.
This is a very common question when it comes to using SANs for SQL Server, and it causes a lot of confusion.
As you have indicated, your storage is striped across all the disks in the SAN, which should provide a great deal of I/O in general. There are more steps that take place at the SAN level before LUNs are presented to you, and this is where the key difference lies. Your SAN team would have created a number of RAID Groups with varying RAID levels and bound each LUN to a RAID Group. This will also be impacted by the number of controllers on the array, as you would want RAID Groups balanced across the controllers.
In terms of SQL Server, data file access is random and log file access is sequential: two different access patterns, and it's best practice to keep these differing access patterns logically separated. What you would normally request from the SAN team are three LUNs to support the data, log, and tempdb files, with an appropriate spindle count and RAID level based on your application's data access requirements.
So to answer your question, you need to ensure your differing data access needs are being met by the SAN layout, ideally using separate RAID Groups to isolate data access from transaction log access, with ample spindles to provide the needed I/O.
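If you want to check whether the layout you end up with is actually keeping up, the per-file I/O stats DMV is a good starting point. A rough sketch (the database and file names will obviously be your own):

-- Average read/write latency per database file; data and log files
-- with different access patterns will often show different profiles
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_latency_ms DESC;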
Hope this helps.
Phillip Cox
September 1, 2010 at 5:33 am
Thank you for your replies, that basically clears it up for me.
Regards,
D.
September 1, 2010 at 5:45 am
Just to throw a spanner in the works: from personal experience, a lot of this goes out of the window when working on something like a NetApp Filer, where shelves of disks are split into Aggregates and the LUNs are then carved from an Aggregate. In this case the LUNs are striped across all the disks in the Aggregate (along with all the other LUNs sliced from the same Aggregate), so you have no guarantee of how many spindles you have or where your data will be at any given time. The proprietary NetApp software floats the LUNs around on the disks within an Aggregate, trying to avoid I/O hotspots etc.
In this case we're running the data and logs from LUNs sliced from the same Aggregate with no underlying issues at all, but when we tried to split one of the larger databases into filegroups and give each of these its own LUN in the Aggregate, we found that performance actually went down, as the NetApp software occasionally put the files into contention with each other when trying to float the LUNs.
So it can also depend on what you are calling a SAN...:crazy:
September 1, 2010 at 8:31 am
Andeavour (9/1/2010)
So it can also depend on what you are calling a SAN...:crazy:
Very true. I too have experience of NetApp filers and, as you say, if the disks are configured into one aggregate, the LUNs are simply sliced out of the array and presented to the client. The aggregate also defaults to RAID 6 if I remember correctly!
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
September 2, 2010 at 4:03 am
Andeavour,
That's pretty much where I am, to be honest, but as the DBs are not hammered night and day, I think things will work out fine.
Regards,
D.
September 2, 2010 at 1:53 pm
Can you get data and logs on different racks, if each rack is a stripe set/RAID group?
Basically, have a meeting with the SAN folks and explain what your big operations (historical queries, bulk reporting, index maintenance) are going to do, as well as whatever SLAs you have to meet.
How much your DB affects others, and vice versa, is very important in being able to meet SLAs, or in having to tell the stakeholders, "No, we can't guarantee that anymore."
Then talk about the configuration from the DB level, to the OS drive letter level, to the MetaLUN level, to the LUN level, to the transport layer (HBAs, iSCSI, FC, the switches, etc.), to the aggregate/RAID group level, to the physical disk level.
September 13, 2010 at 8:47 am
Sorry for hijacking the thread.
Can I use a local machine drive for tempdb, and put the data and log files on the SAN?
Well, I cannot 🙁
September 13, 2010 at 9:12 am
Please start a new thread and explain your issue better.
September 14, 2010 at 10:30 am
Halcyon (9/13/2010)
Sorry for hijacking the thread. Can I use a local machine drive for tempdb, and put the data and log files on the SAN?
Well, I cannot 🙁
Nope... you can't use local drives for storage on a cluster (which I assume this is), even for tempdb.
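As an aside, if you're ever unsure whether an instance is clustered, you can check from T-SQL:

-- Returns 1 for a clustered instance, 0 for a stand-alone one
SELECT SERVERPROPERTY('IsClustered') AS is_clustered;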
I've dealt with a lot of servers that have a RAID 5 SAN back end, and I have no visibility into how many spindles there are, which ones are where, etc. Typically, unless I was seeing a severe issue with I/O waits, I didn't dig too much into it because it was handled by another team.
I do typically try to carve things up so that logs are on one logical drive, data on another (large databases, typically 250 GB+, get their own LUN), and tempdb/master/msdb on another.
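If you ever need to move tempdb onto its own drive after installation, it's only a metadata change plus an instance restart. A sketch, assuming the default logical file names and a placeholder T: drive:

-- New file locations take effect the next time the instance starts
ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf' );

ALTER DATABASE tempdb
MODIFY FILE ( NAME = templog, FILENAME = 'T:\TempDB\templog.ldf' );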
September 14, 2010 at 11:02 am
Hi, yes I agree that it is a good way of doing it, but if you have the LUNs arranged over one array of disks, isn't that defeating the purpose of having separate LUNs?
September 14, 2010 at 11:08 am
Halcyon (9/14/2010)
Hi, yes I agree that it is a good way of doing it, but if you have the LUNs arranged over one array of disks, isn't that defeating the purpose of having separate LUNs?
Basically, yes.
Due to conflicting reasoning, I went with the safest approach. Some of our systems guys say it's better because, in addition to physical disk queueing, there is also logical disk queueing if the load is high enough, so having separate drives can alleviate that. I'm not sure if that is true, since I don't know of a direct way to measure it, but it definitely doesn't hurt to split things up into more logical drives.
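About the closest you can get from inside SQL Server is the pending-I/O DMV, which lists requests SQL Server has issued that the OS hasn't completed yet; it's not a direct measure of logical-disk queueing, but a backlog here points the same way. A sketch:

-- Snapshot of I/O requests still outstanding at the OS, per database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       pio.io_pending,
       pio.io_pending_ms_ticks
FROM sys.dm_io_pending_io_requests AS pio
JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  ON vfs.file_handle = pio.io_handle
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;

On the OS side, PerfMon's LogicalDisk counters (e.g. Avg. Disk Queue Length) show queueing per logical drive.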
September 14, 2010 at 11:58 am
Do you mean that you went for one array partitioned into 3 LUNs, or 3 arrays partitioned into 3 LUNs?