November 28, 2011 at 10:48 pm
Comments posted to this topic are about the item "SQL 2008 R2 Cluster Setup: Easy? Not So Much."
November 28, 2011 at 10:53 pm
Very informative. Thank you.
I do have a question. I am planning to upgrade a two-node cluster from SQL 2008 to SQL 2008 R2.
My plan was to upgrade the passive node, fail over, then upgrade the 2nd node. I have not come across the problem you described, but I assume there would be certain pitfalls to my approach. Have you had any experience with a similar situation?
Thanks!!
November 28, 2011 at 11:30 pm
I don't think your upgrade path will work as described. As I read it, you are doing a version upgrade of an existing instance on the passive node, then failing over so you can upgrade the previously active node. I don't think this is supported. You could add another instance of the new SQL version to the passive node, but it would have a different name and you would run into some trickiness with the database files.
CEWII
November 29, 2011 at 12:15 am
I have not upgraded a cluster to SQL 2008 R2 yet. I am due to upgrade a SQL 2005 cluster to SQL 2008 R2 in the next two months, and my plan is to:
1. Take a full system backup of the passive node
2. Upgrade SQL on the passive node
3. Take a last full backup of the databases in their 2005 version (see the script sketch after this list)
4. Fail over from the SQL 2005 (active) node to the upgraded SQL 2008 R2 node
5. DBA check / user acceptance checks
6. Upgrade the now-passive SQL 2005 node
7. Take a full backup of the databases
8. Fail over to the newly upgraded node
9. DBA check / user acceptance checks
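For steps 3 and 7, here is a minimal T-SQL sketch that takes a full backup of every online user database. The backup share is a placeholder, and the syntax is deliberately kept SQL 2005-compatible since step 3 runs before the upgrade:

-- Back up every online user database to a (placeholder) share.
DECLARE @name sysname;
DECLARE @path nvarchar(260);
DECLARE @sql nvarchar(max);
SET @path = N'\\backupserver\sql\';  -- hypothetical backup share

DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4            -- skip the system databases
      AND state_desc = N'ONLINE';
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name)
             + N' TO DISK = N''' + @path + @name + N'_preupgrade.bak'''
             + N' WITH INIT;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM db_cursor INTO @name;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;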
November 29, 2011 at 6:05 am
November 29, 2011 at 7:51 am
Thanks Calvo! The goal was to share an experience and try to ease someone's life if they should run into these issues. I'll try to share some more in the next few weeks.
November 29, 2011 at 8:08 am
Nice post, thanks for sharing.
November 29, 2011 at 11:51 pm
I want to create a 3-node cluster. A two-node cluster is easy to create, but how do I add the 3rd node?
Please explain in detail.
Thanks
November 30, 2011 at 1:18 am
From a cluster/SQL setup point of view, adding a third node is not very complicated: you join the new node to the existing cluster, then run SQL Server setup to add the node to the existing instance. So it's more or less just like adding the second node again.
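Once the new node has joined and setup has added it to the instance, you can confirm from T-SQL that the instance sees all three nodes. A quick check using standard DMVs on a clustered instance:

-- Nodes in the underlying Windows cluster, as seen by this instance.
SELECT NodeName FROM sys.dm_os_cluster_nodes;

-- The physical node currently hosting this clustered instance.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS hosting_node;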
If you want to create a 3-node cluster, it is probably because you want to save some money on hardware by having two active nodes that can both fail over to the third node. This is a valid solution if your SLA accepts degraded performance when both active nodes have failed over.
The challenge here is to define carefully how to share the resources (RAM + CPU) if you find yourself with, say, five OLTP instances and two OLAP instances suddenly running on the same box, Node 3. In the normal situation the five OLTP instances run on Node 1, where you have defined how they share 64 GB of RAM and 8 cores between them; the two OLAP instances usually run on Node 2 and share (even if memory allocation is more difficult to define in SSAS) 64 GB and 8 cores.
When all of that fails over to Node 3 and there are fewer than 128 GB and 16 cores available on that box, you may have some resource contention problems, right?
In addition, you might want to prepare for all the other possible scenarios, like everything failing over to Node 1, etc.
The more active nodes you add, the more complexity you get.
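One concrete way to prepare for that worst case is to cap each instance's memory up front, so the combined caps still fit on a single node. A minimal sketch for one SQL instance (the figure is purely illustrative; SSAS memory limits are configured separately through the Analysis Services instance properties):

-- Cap this instance's buffer pool so that, if every instance lands on
-- the same 64 GB node after a failover, the caps still leave headroom
-- for the OS. The 8192 MB value is purely illustrative.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;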
November 30, 2011 at 1:31 am
Thanks, Faleni!
November 30, 2011 at 9:52 am
Thanks for sharing. I will treasure it and keep it handy in case I ever need it.
Paulino PP
November 30, 2011 at 1:21 pm
Fabrizio Faleni (11/30/2011)
@forsqlserver From a cluster/SQL setup point of view, adding a third node is not very complicated: you join the new node to the existing cluster, then run SQL Server setup to add the node to the existing instance. So it's more or less just like adding the second node again.
If you want to create a 3-node cluster, it is probably because you want to save some money on hardware by having two active nodes that can both fail over to the third node. This is a valid solution if your SLA accepts degraded performance when both active nodes have failed over.
The challenge here is to define carefully how to share the resources (RAM + CPU) if you find yourself with, say, five OLTP instances and two OLAP instances suddenly running on the same box, Node 3. In the normal situation the five OLTP instances run on Node 1, where you have defined how they share 64 GB of RAM and 8 cores between them; the two OLAP instances usually run on Node 2 and share (even if memory allocation is more difficult to define in SSAS) 64 GB and 8 cores.
When all of that fails over to Node 3 and there are fewer than 128 GB and 16 cores available on that box, you may have some resource contention problems, right?
In addition, you might want to prepare for all the other possible scenarios, like everything failing over to Node 1, etc.
The more active nodes you add, the more complexity you get.
I think it's important to remember that in most quorum configurations you can only lose one machine and still have the cluster running. There is one quorum configuration (the disk-only "No Majority" mode) where you can sustain the loss of all but one node. In all the other modes you need more than half of the votes online; in other words, a cluster with V votes keeps running as long as it loses no more than floor((V-1)/2) of them:
- 2 nodes w/quorum disk: you can lose 1 node
- 3 nodes, with or without a quorum disk: you can lose 1 node
- 4 nodes without a quorum disk: you can lose 1 node
- 4 nodes w/quorum disk: you can lose 2 nodes
This assumes the "Node and Disk Majority" or "Node and File Share Majority" configurations, the first being the most common and the latter not used often. "Node Majority" would be appropriate for a 3-node cluster, and the remaining configuration is "No Majority", which I'm not sold on using.
So without some tweaking, you could never end up with all the services running on one node.
CEWII
November 30, 2011 at 1:51 pm
Correct. I overlooked that point; thanks for spelling it out. That's one more reason to avoid 3-node SQL clusters!
November 30, 2011 at 2:07 pm
I'm not sure I fully agree. In general, what are the chances of having 2 nodes offline at the same time? You should have a chance to either evict and replace the bad node or get it restarted; either way, I can see many cases where I would prefer 3-4 nodes instead of 2 x 2-node clusters. But I think 4 is about the most I would go; it's hard enough keeping 2 synced up.
CEWII
November 30, 2011 at 9:32 pm
prowlermax (11/28/2011)
Very informative. Thank you. I do have a question. I am planning to upgrade a two-node cluster from SQL 2008 to SQL 2008 R2.
My plan was to upgrade the passive node, fail over, then upgrade the 2nd node. I have not come across the problem you described, but I assume there would be certain pitfalls to my approach. Have you had any experience with a similar situation?
Thanks!!
Hi,
Here are some notes I took while going from SQL 2008 SP1 to SQL 2008 R2. I had an active/active configuration, so two named instances.
1. Put both instances on the same node. In my example I will move them to cln1.
2. On the passive node (cln2), start the R2 setup. Select upgrade, and select inst1 and the shared components to upgrade. It will probably ask you to reboot once finished. Do so, as you can't upgrade inst2 while a reboot is pending; the setup check will fail.
3. Once inst1 is finished, upgrade inst2.
4. At this point cln2 should be upgraded to SQL 2008 R2. Neither instance can fail over to this node yet, because the upgraded node is left out of the possible owners list. Get on cln1 and start the upgrade for inst1 and the shared components. It will more than likely want to reboot, so do so. inst2 will try to fail over to cln2, but it will end up in an offline state since cln2 is not a possible owner for inst2.
5. Move inst2 back to cln1 and bring it online. Start the upgrade for inst2. It will fail over automatically. Reboot if asked.
I've done this on a few clusters with no issues. You could do just one instance at a time, but I like to get both instances out of the way. A little more downtime, but that's what maintenance windows are for. Good luck!
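As a last sanity check after a rolling upgrade like this, it's worth connecting to each named instance and confirming the build and hosting node. A small sketch using standard SERVERPROPERTY values:

-- Run against each named instance after the upgrade.
SELECT
    SERVERPROPERTY('ProductVersion') AS product_version,  -- expect 10.50.xxxx for 2008 R2
    SERVERPROPERTY('ProductLevel')   AS product_level,    -- RTM / SPn
    SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS hosting_node;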