Log and Data placement on a SAN.

  • 3 separate LUNs, each with its own partition. I usually don't create more than 1 partition per LUN.

    As far as arrays go, I'm not sure. I don't really have visibility into the actual RAID groups, spindle counts, or physical locations.

  • I see. Are you using an array solely for the SQL Server, or not?

    In my case I may have a shared array, which is why I'm getting into the nitty-gritty.

  • Hello,

    Well, we have finally got round to sorting out the LUNs and a thought has occurred. Originally, we were going to assign a separate LUN to every instance, with both the data and logs occupying that instance's LUN. Then I reasoned that it would be a better idea to have all the instances' data sitting on one larger LUN and all the instances' logs on another large LUN. Assuming the two LUNs occupy different racks, surely this would be more effective in terms of read/write?

    As an aside, is it also correct that each SQL 2008 instance on a cluster requires its own IP address? (Just to throw you off!)

    Many thanks,

    D.

  • Yes, each instance needs its own IP address on a cluster.

    Also yes, having the data files and log files on separate spindles is absolutely the best way to go.
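
    As a rough illustration of that layout (the drive letters, paths, sizes and database name below are placeholders for whatever your LUNs end up being, not anything prescribed):

    ```sql
    -- Hypothetical example: data file on the E: LUN, log file on the F: LUN
    CREATE DATABASE SalesDB
    ON PRIMARY
    (
        NAME = SalesDB_Data,
        FILENAME = 'E:\SQLData\SalesDB.mdf',
        SIZE = 10GB,
        FILEGROWTH = 1GB
    )
    LOG ON
    (
        NAME = SalesDB_Log,
        FILENAME = 'F:\SQLLogs\SalesDB.ldf',
        SIZE = 2GB,
        FILEGROWTH = 512MB
    );
    ```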

  • Hello,

    Just to be clear: if I have 5 SQL instances on a cluster, I'd need 5 separate drives, each with its own drive letter? So 5 SQL instances cannot put their databases on the same drive?

    Regards,

    D.

  • Duran (10/11/2010)


    Hello,

    Just to be clear: if I have 5 SQL instances on a cluster, I'd need 5 separate drives, each with its own drive letter? So 5 SQL instances cannot put their databases on the same drive?

    Regards,

    D.

    Each instance requires its own unique virtual network name, virtual IP address and disk storage. The installer will not allow you to select disk resources that are already part of an existing SQL Server installation!
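
    Once an instance is installed you can sanity-check this from T-SQL; the sketch below only reads documented server properties and the cluster node DMV, so nothing in it is specific to any one environment:

    ```sql
    -- Confirm the instance is clustered and see which node currently owns it
    SELECT
        SERVERPROPERTY('IsClustered')                 AS IsClustered,        -- 1 = failover cluster instance
        SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentHostNode,    -- node that owns the instance right now
        SERVERPROPERTY('ServerName')                  AS VirtualServerName;  -- virtual network name\instance name

    -- Nodes that are possible owners of this instance
    SELECT NodeName FROM sys.dm_os_cluster_nodes;
    ```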

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • The SAN guys should be able to run reports to show you the IO performance of your LUN. If your application is fairly small you can probably get away with it all on a single LUN. If you are going to have hundreds of users hammering away on it, you are going to want to break it out onto separate LUNs with different RAID configurations (like many other posters have already mentioned).
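
    Even if the SAN team runs their own reports, it's worth watching latency from SQL Server's side as well. A rough sketch using the virtual file stats DMV (the counters are cumulative since the last service restart, so treat the averages as a starting point rather than a live figure):

    ```sql
    -- Average read/write stall per database file since the last restart
    SELECT
        DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_stall_ms DESC;
    ```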

    The important thing is to monitor the LUN and make sure that you aren't having any IO problems. If you aren't in production yet, now is the time to do it right. If the SAN team does not want to do it right, at least ask them for a plan about what it will cost to do it right in the future. Also ask them for a migration plan and how they plan to get your data off of a poorly configured LUN and onto multiple LUNs where it belongs. Hopefully by the time they consider all of the headaches involved with fixing it later, they will decide to do it the right way from the beginning. 😉
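
    Just to illustrate what that later migration involves (the database name, logical file name and path below are made up): each file is repointed with ALTER DATABASE ... MODIFY FILE, the database is taken offline, the physical files are copied to the new LUN at the OS level, and then it comes back online.

    ```sql
    -- Hypothetical example: moving SalesDB's log file to a new LUN (G:)
    ALTER DATABASE SalesDB
        MODIFY FILE (NAME = SalesDB_Log, FILENAME = 'G:\SQLLogs\SalesDB.ldf');

    ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

    -- Copy the existing .ldf to G:\SQLLogs\ at the OS level, then:
    ALTER DATABASE SalesDB SET ONLINE;
    ```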

  • Hello,

    Yes, I thought so and this is the road I have gone down. I just spoke to someone who told me otherwise, and they were so confident I actually started to doubt myself.

    Anyway, I am now on the cusp of installing the first instance on my cluster. One thing I was curious about is where I should install the root directory: do I install all the root directories on the C: or on a clustered disk? I feel sure it's the C:, but could do with an OK on this. And I feel a bit silly for asking, but obviously I only need to install SSMS once as it's a shared tool?

    Thank you,

    D.

  • Program files cannot be installed on shared drives on a failover cluster instance, so that leaves you with either C:\ or, depending on your server configuration, a separate mirrored or RAIDed local disk.
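
    If you ever need to double-check where an existing instance put its binaries, the undocumented xp_instance_regread procedure can read the path back from the instance's registry hive (unsupported, so treat this as a convenience check rather than something to build on):

    ```sql
    -- Undocumented: reads SQLPath from the instance-specific Setup key
    EXEC master.dbo.xp_instance_regread
        N'HKEY_LOCAL_MACHINE',
        N'SOFTWARE\Microsoft\MSSQLServer\Setup',
        N'SQLPath';
    ```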

  • There is also a bug in the failover cluster install whereby you elect to install the shared component files to D:\program files\MSSQL instead of C:.

    You go to your next node and carry out the install to add a node, but the installer defaults to C:\program files\... and not the D: path you specified on the first node!

    This is fixed in a later CU; I can't remember which one. When I find it I'll post the details. Just beware of this gotcha!!

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Found it: it's fixed in SP1 CU10. This link details the issue.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Hello everyone,

    Thank you, I'll keep my eye out for it. I have done further reading but it's led to further questions; there is nowhere near as much info on Active/Active as there is on Active/Passive, so I'm left with the following questions.

    With Active/Active, do I need to create 2 MSDTC cluster resources, one for each node? Or is one clustered MSDTC resource enough?

    Where do I assign a preferred owner for a drive? I can't seem to find this option. And is it this part that defines the cluster as Active/Active (when SQL is installed on node B)?

    As every SQL instance (which needs its own resource group) will have its own drive, do I create a resource group for every drive before installing, or is this done during the SQL install?

    To be honest I think I have read too much and got some information overload.

    Thanks again for all your advice and help.

    Regards,

    D.

  • Well, the question to ask is really: are your SQL Server instances going to use MSDTC regularly or sporadically?

    If regularly, then follow Perry's advice: add them as resources and map them.

    If not, just create it as a single clustered resource that any of your SQL Server instances can use.
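
    However you set it up, a quick way to confirm from each instance that it can actually reach a transaction coordinator is to start and immediately roll back a distributed transaction; if no MSDTC is available to the instance, this fails straight away. It's only a rough smoke test, not a substitute for proper DTC configuration:

    ```sql
    -- Fails with an MSDTC error if the instance cannot enlist a transaction coordinator
    BEGIN DISTRIBUTED TRANSACTION;
    ROLLBACK TRANSACTION;
    ```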

  • Well, it's somewhere in between, unfortunately. I guess I would go down the best-practice route for Active/Active, if there is one.

    I was curious to know where people leave the installation files so they are available to both nodes?

    I currently have a problem with the native SQL install bootstrapper not working. I have tried changing the DEP setting to Windows programs only, which did not work, so now I am trying to add setup.exe as an exception to see if it helps. What a pain in the bum. Some people have got round it by using a different drive; I am trying to launch it from the C:, but if that ends up working I'm wondering whether the current location for these files might not be the best place.

    Regards,

    D.

  • Duran (10/14/2010)


    Hello,

    Yes, I thought so and this is the road I have gone down. I just spoke to someone who told me otherwise, and they were so confident I actually started to doubt myself.

    Anyway, I am now on the cusp of installing the first instance on my cluster. One thing I was curious about is where I should install the root directory: do I install all the root directories on the C: or on a clustered disk? I feel sure it's the C:, but could do with an OK on this. And I feel a bit silly for asking, but obviously I only need to install SSMS once as it's a shared tool?

    Thank you,

    D.

    Halcyon (10/14/2010)


    Program files cannot be installed on shared drives on a failover cluster instance, so that leaves you with either C:\ or, depending on your server configuration, a separate mirrored or RAIDed local disk.

    Yep, I also read in different articles that the instance root directory needs to be installed on a local disk, but I find it a bit strange. The directory also contains the resource database, and the idea that every node has its own resource database doesn't sound logical to me.
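
    For what it's worth, the resource database lives alongside the local binaries and is patched with them, so each node does end up carrying its own copy at its own patch level. If you want to see what a given node is actually running with, these documented server properties report it (checked after failing the instance over to the node you're curious about):

    ```sql
    -- Resource database version and last update time for the node currently hosting the instance
    SELECT
        SERVERPROPERTY('ResourceVersion')            AS ResourceDbVersion,
        SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceDbLastUpdate;
    ```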
