adding 2nd instance to existing 2005 Cluster

  • Hi Folks,

    I currently have a 2005 Cluster with a single default instance. I would like to add a second (named) instance to the cluster. I do not have additional disk resources, so I'd basically have to use the same disks. Since only one node is going to own the disks at a time, I'm guessing this means that an active/active configuration is not going to work for me.

    Is there any guide or white paper that describes adding a named instance to an existing cluster? I'm thinking it would be pretty straightforward, but the existing cluster is hosting some production databases so I'm a little nervous about just doing it. We don't have a development cluster to play around with...

    Thanks in advance for any help you can provide...

    You can have more than one instance on the clustered resources, but don't try it out for the first time on your production server.

    If you have hardware you can set up virtual servers on, you can create a virtual clustered environment using the StarWind software. You can also get a free trial of the software, so you don't need to buy a copy to do your testing.

    MCITP SQL 2005, MCSA SQL 2012

    Everything I've read indicates that a second instance must have its own resource disks. Where were you able to find information indicating otherwise? Is there any documentation supporting that configuration?

    Sorry, I don't have access to how we set it up before, as it was with another company I worked for. To be fair, our network engineer did most of the legwork to get it working.

    MCITP SQL 2005, MCSA SQL 2012

  • Hi NJ,

    A second clustered instance is definitely going to call for a second clustered disk resource, completely separate from the disk used by the existing instance.
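
    If it helps to see what the existing instance already owns, here's a minimal T-SQL sketch (assuming you can run it on the default instance with VIEW SERVER STATE permission) that lists the cluster nodes and the shared drives visible to that instance. Any new named instance would need drive letters that don't appear in the last result.

        -- Run on the existing clustered default instance (SQL Server 2005).
        -- Requires VIEW SERVER STATE; results are specific to your cluster.
        SELECT SERVERPROPERTY('IsClustered') AS IsClustered;    -- 1 = this instance is clustered
        SELECT NodeName FROM sys.dm_os_cluster_nodes;           -- nodes in the Windows cluster
        SELECT DriveName FROM sys.dm_io_cluster_shared_drives;  -- shared drives this instance can use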

  • NJ-DBA (7/15/2010)

    Hey NJ,

    The simple answer is this cannot be done.

    Each clustered SQL Server instance, which sits on top of the OS cluster you created first, must have its own resources.

    Those resources include an IP address, a network name, and disks. You cannot have two SQL resources that share disks; it doesn't work that way. Windows 2003 clustering would do all kinds of bad things before it died a horrible death (I'll give an example below), Windows 2008 clustering wouldn't pass a validation check so SQL Server 2008 and above would not even install, and SQL Server 2005 might try to install but I think it would die an equally horrible death.

    I had a DBA who set up a cluster once and somehow added the quorum drive to the OS cluster, but then mistakenly added it as the data drive to the SQL resource. (I'm not sure how they did this, but the drive letter said M, and when I looked at the cluster resource drive properties it pointed to the Q drive.) Failover of one caused a failover of both, and SQL got internally confused. I tried to do uninstalls, but because the resource was in both locations it became a really bad mess, and we had to re-image and rebuild the cluster. Fortunately it was only a dev cluster; this would have been a nightmare in prod.

    One point of clarification: an Active/Active cluster is one where each node owns resources at the same time. If you have your OS clustered resources living on Node A and your SQL clustered resources living on Node B, you technically have an Active/Active cluster.

    The reason named instances are normally considered Active/Active is that each named instance has its own set of drives, IP address, and network name, and can be placed on (owned by) a node separate from another instance.
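
    For what it's worth, you can see which physical node currently owns an instance straight from T-SQL; a small sketch using standard server properties (nothing here is specific to any one cluster):

        -- Compare the virtual server name with the physical node hosting it right now.
        SELECT
            @@SERVERNAME                                  AS VirtualServerName,  -- virtual server\instance name
            SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentOwnerNode;   -- node that owns the instance now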

    If it were a two-node cluster and one node went offline, the remaining node would own both instances, so you want to make sure your servers can support both instances if you only have two nodes in the cluster. One way to prepare for that is sketched below.
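
    A common way to prepare for that failover scenario is to cap each instance's memory so the two can coexist on one node; a rough T-SQL sketch (the 4096 MB figure is just an assumed example, size it for your own hardware):

        -- Run on each instance; caps the buffer pool so both instances fit on one node after a failover.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096;  -- assumed example value, not a recommendation
        RECONFIGURE;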

    Good Luck!

  • Thanks for all the input. I think I have to tell them: want another instance, buy another shelf on the SAN... which basically means there isn't going to be another instance.
