May 9, 2024 at 12:18 pm
Hello,
I have a question regarding Availability group server architecture.
A little background: we want to convert our 20 FCI instances to availability groups. Each FCI instance will be converted by moving all of its databases to one standalone instance (on its own virtual machine). That means 20 VMs for primary nodes and 20 VMs for secondary nodes, so in the end we are looking at 40 VMs with 20 AG listeners.
Our reasoning for choosing this is that it isolates the instances from one another, lets us allocate server resources based on each VM pair's requirements, and, because separate teams work on separate instances, gives each team full control over which other programs and services are installed and running on their VM pair (today we are not that flexible, because all the FCI instances run on one of the two servers we have in the cluster).
Now our question: because AGs need to live inside a cluster, do we need to create 20 WSFC clusters, each with one pair of VMs as its nodes, or can we create one WSFC cluster and join all our nodes to it? What happens with HA and DR if we choose the second option, if it is even an option?
May 9, 2024 at 7:47 pm
FCIs can be replicas in an AG, but an FCI can't be converted in place to a standalone instance.
The maximum number of nodes in a WSFC is 64, so you could deploy one WSFC with 40 nodes; it all depends on your requirements.
You could also reduce the number of nodes by replicating multiple standalone instances to a single secondary, but again, that depends on your requirements.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
May 10, 2024 at 5:37 am
Thanks for the reply. Maybe I unnecessarily complicated the question. I wanted to ask: if I have a group of servers, server1, server2, server3 and server4, where server1 and server2 are members of AG1 and server3 and server4 are members of AG2, but server4 and server5 don't have any AGs with server1 and server2, can all 4 servers be in the same cluster, or is it necessary in this case to create separate clusters, e.g. "cluster1" for AG1 and another cluster "cluster2" for AG2?
May 11, 2024 at 3:13 pm
An availability group is defined in SQL Server, so you can have a single Windows cluster of xx nodes hosting as many instances of SQL Server as you need. Each node could host a single instance of SQL Server, or multiple named instances, with databases in those instances configured in an AG across 2 or more of the instances in the cluster.
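To make that concrete, here is a hedged T-SQL sketch of defining one AG across two standalone instances that are both nodes in the same WSFC. The server names, domain, database name, and endpoint port are all made up for illustration, and it assumes Always On is already enabled on both instances:

```sql
-- Run on the intended primary (SERVER1). Assumes both instances already
-- have database mirroring endpoints listening on port 5022.
CREATE AVAILABILITY GROUP AG1
FOR DATABASE [SalesDB]   -- hypothetical database name
REPLICA ON
    'SERVER1' WITH (
        ENDPOINT_URL      = 'TCP://server1.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    'SERVER2' WITH (
        ENDPOINT_URL      = 'TCP://server2.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);

-- Then, on SERVER2:
-- ALTER AVAILABILITY GROUP AG1 JOIN;
```

A second AG on two other nodes of the same cluster would be created the same way; nothing ties the two AGs together beyond sharing the WSFC.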
I would question the reasoning behind moving an instance from the FCI to this new cluster. You will be increasing your costs, since each node in the new cluster will need to be fully licensed, and you will be doubling the storage requirements.
It sounds like you want to be able to use these servers for more than just SQL Server. That would be a mistake - especially in an AG environment where you would have to ensure that anything installed on node1 is also installed on all other nodes that could be the primary node for that specific AG.
Once you go down that path, the nodes in the cluster will no longer be identical, and that can lead to application issues that are much harder to track down.
And further, you now have to be very careful about how quorum votes are configured and how you patch the servers. If you restart too many nodes at any given time you will cause the cluster to shut down, impacting all instances. Before you can even begin to patch the servers you need to identify which node is currently the primary for each instance, patch the secondary, fail over, patch the old primary, and then fail back (just to ensure your instances are always hosted on a defined primary).
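The "identify the current primary" step can be scripted rather than checked by hand. A minimal sketch against the Always On DMVs, run on any instance hosting a replica:

```sql
-- Shows the role of the local replica for every AG hosted on this instance.
SELECT ag.name AS ag_name,
       ars.role_desc,                     -- PRIMARY or SECONDARY
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_groups AS ag
  ON ag.group_id = ars.group_id
WHERE ars.is_local = 1;
```

Running this across all nodes (or via a central management server) gives you the primary/secondary map you need before starting a patch cycle.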
If you really want to be able to separate these, then individual 2-node clusters for each instance would be the better option, in my opinion.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 11, 2024 at 5:26 pm
Thanks for the reply. Maybe I unnecessarily complicated the question. I wanted to ask: if I have a group of servers, server1, server2, server3 and server4, where server1 and server2 are members of AG1 and server3 and server4 are members of AG2, but server4 and server5 don't have any AGs with server1 and server2, can all 4 servers be in the same cluster, or is it necessary in this case to create separate clusters, e.g. "cluster1" for AG1 and another cluster "cluster2" for AG2?
Where does server5 come in? There are only 4 nodes, not 5?
Having all the servers in one WSFC is not an issue, and it gives you a central management point rather than multiple smaller clusters.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
May 11, 2024 at 5:36 pm
You will be increasing your costs since each node in the new cluster will need to be fully licensed
Nodes in a WSFC with AGs follow the same rules: as long as the relevant licence agreement and Software Assurance exist, you're not employing readable secondaries or secondary backup offload, and any failover lasts no longer than 28 days (which covers you for patching), those passive nodes do not require licensing.
- and you will be doubling the storage requirements.
That is the downside of standalone instances and AGs.
And further - you now have to be very careful about how quorum votes are configured and how you patch the servers. If you restart too many nodes at any given time you will cause the cluster to shut down - impacting all instances. Before you can even begin to patch the servers you need to identify what nodes are currently the primary for each instance - patch the secondary - failover - patch the old primary - and then fail back (just to ensure your instances are always hosted on a defined primary).
Since the introduction of Windows Server 2012, dynamic quorum voting comfortably handles a node shutting down and adjusts the votes accordingly.
Because the instances are standalone, they don't go offline when the cluster service goes offline, but the AG resources would.
When patching, identify all the secondaries, patch them, then reboot.
Fail over to the patched nodes, and rinse and repeat.
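You can watch dynamic quorum do its work from inside SQL Server. A hedged sketch using the cluster-members DMV (run on any instance in the WSFC):

```sql
-- Lists every WSFC member and its current quorum vote, so you can see
-- dynamic quorum adjust the vote counts as nodes go down and come back.
SELECT member_name,
       member_type_desc,
       member_state_desc,
       number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;
```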
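The failover step itself is a single statement. A sketch, where AG1 is a placeholder name:

```sql
-- Run on the freshly patched SECONDARY replica you want to become primary.
-- For a no-data-loss (planned) failover, the replica must be in a
-- SYNCHRONIZED state first.
ALTER AVAILABILITY GROUP [AG1] FAILOVER;
```

(The FORCE_FAILOVER_ALLOW_DATA_LOSS variant exists too, but that is for disaster recovery, not routine patching.)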
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
May 13, 2024 at 7:40 am
Sorry, it was a typo. I meant server3 and server4 aren't in the same AG as server1 and server2.
Thanks for the answer:
"Having all the servers in one WSFC is not an issue and has a central management point rather than multiple smaller clusters."
Ok, awesome. I was afraid that having unrelated servers (servers participating in different AGs) in the same cluster would have some impact on failovers and HA/DR.
May 13, 2024 at 10:52 am
Will server1 through server4 each have one instance, or multiple instances?
Think about your design carefully and document it, and create a POC environment if you need to (recommended).
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉