February 1, 2017 at 9:36 pm
Comments posted to this topic are about the item Preferred Node Policy-Setting on Traditional SQL Cluster
Prakash B
February 2, 2017 at 12:44 am
Probably the worst piece of advice I've heard in a long time.
And set it for all SQL cluster groups? Really?
This is useful in a file server or print server cluster, but certainly not for SQL Server, where it can result in random, unplanned outages.
My advice to anyone is: do not follow this advice. Also, the article is based on MSCS rather than WSFC, which is a major issue in terms of product features and operation.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
February 2, 2017 at 1:47 am
I've worked with clusters set up like this before. I can appreciate why someone might think it's a good idea, but it's not so good when you're trying to figure out why your cluster failed over at 3am.
I think the biggest problem is how long it takes to stop the service on one node and start it on the other; if it were instant (or close to it), this wouldn't be a problem. But with ~30 large databases I've seen it take up to 15 minutes, all of which is downtime.
February 2, 2017 at 6:54 am
This setting does facilitate load balancing on a traditional active-active cluster hosting multiple SQL instances. After host patching, you don't want to discover that all the instances are running on one node of the active-active cluster and fighting each other for resources.
Scenario:
2-node active-active cluster. Each node has 30 GB RAM.
6 instances, with max memory set to 8 GB on each instance.
3 instances run on Node1 and the other 3 run on Node2. No issues reported.
Host patching takes place and instances fail over from one node to the other; for example, all six instances end up running on Node1.
An issue is reported that SQL is very slow. Total max memory across the 6 instances = 48 GB; RAM on the host = 30 GB.
For balancing resources on an active-active cluster, this setting has proven helpful.
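For anyone who wants to see the arithmetic, here is a minimal Python sketch using only the figures from the post above:

```python
# Memory math from the scenario above -- illustration only.
node_ram_gb = 30      # RAM per node
instances = 6         # SQL instances across the 2-node cluster
max_memory_gb = 8     # max server memory configured per instance

# Normal operation: 3 instances per node.
per_node_balanced = 3 * max_memory_gb
print(f"balanced: {per_node_balanced} GB of {node_ram_gb} GB per node")   # 24 of 30

# After patching, all 6 instances land on one node.
per_node_failed_over = instances * max_memory_gb
print(f"failed over: {per_node_failed_over} GB of {node_ram_gb} GB")      # 48 of 30
# 48 GB demanded of 30 GB physical RAM -> heavy paging, "SQL is very slow".
```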
Prakash B
February 2, 2017 at 7:12 am
GOWDA7900 - Thursday, February 2, 2017 6:54 AM
This setting does facilitate load balancing on a traditional active-active cluster hosting multiple SQL instances. After host patching, you don't want to discover that all the instances are running on one node of the active-active cluster and fighting each other for resources.
Scenario:
2-node active-active cluster. Each node has 30 GB RAM.
6 instances, with max memory set to 8 GB on each instance.
3 instances run on Node1 and the other 3 run on Node2. No issues reported.
Host patching takes place and instances fail over from one node to the other; for example, all six instances end up running on Node1.
An issue is reported that SQL is very slow. Total max memory across the 6 instances = 48 GB; RAM on the host = 30 GB.
For balancing resources on an active-active cluster, this setting has proven helpful.
I agree that, ideally, you wouldn't want every instance running on a single node. However, in this scenario it seems the cluster has not been configured correctly for HA. If you can't run all of your services on a single node, then you don't have high availability. Surely that is the main point of having a cluster?
I'd also add that if I were patching a cluster, I would take the services offline and bring them back online manually so that I could monitor their status after the upgrade. Your scenario seems to involve unattended patching, which I've rarely seen go well.
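To make that capacity argument concrete, here is a minimal sketch (plain Python; the 4 GB OS reserve is an assumed figure, not something stated in the thread) of the sizing test being described: if the instances are meant to survive on one node, size max memory so they all fit there.

```python
# Minimal HA capacity check -- illustration only; the 4 GB OS reserve
# is an assumption, not a figure from the thread.

def max_memory_per_instance(node_ram_gb, instance_count, os_reserve_gb=4):
    """Largest per-instance max memory that still lets every instance
    run on a single surviving node."""
    return (node_ram_gb - os_reserve_gb) / instance_count

# The scenario's cluster: 30 GB nodes, 6 instances total.
print(max_memory_per_instance(30, 6))   # ~4.3 GB each, not the 8 GB configured
# With 8 GB per instance, the cluster can never run everything on one
# node -- which is exactly the "not configured correctly for HA" point.
```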