December 29, 2014 at 12:28 pm
We are building an architecture with two FCIs set up in a HAG.
Setup:
Hypervisor = VMware
OS = Windows 2012 R2
SQL = SQL Server 2012 Enterprise Ed. (of course)
SAN = EMC VNX
WSFC_newApp_legacyApp:
  newAppExistingFCI:
    WSFC Node 1 = EXISTING01
    WSFC Node 2 = EXISTING02
  LegacyAppNewFCI:
    WSFC Node 3 = NEWREADONLY01
    WSFC Node 4 = NEWREADONLY02
  ClusterAG:
    HAGListener
I am being told that the storage for newAppExistingFCI and LegacyAppNewFCI needs to be presented to ALL nodes. Is this true? I am trying to understand why... Also, in case anyone needs a use case: LegacyAppNewFCI will be read-only and will NEVER be failed over to as primary. The purpose of this setup is strictly read-only mirroring.
Jared
CE - Microsoft
December 30, 2014 at 12:56 am
Hi,
As mentioned in this article (http://msdn.microsoft.com/en-us/library/ff929171.aspx), AlwaysOn Availability Groups do not depend on any form of shared storage. However, if you use a SQL Server failover cluster instance (FCI) to host one or more availability replicas, each of those FCIs will require shared storage, as per a standard SQL Server failover cluster instance installation.
A replica that is hosted by an FCI uses a shared storage solution as required by that FCI. The storage solution is shared only by nodes within the FCI and not between replicas of the availability group.
In addition, SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability replica that is hosted by an FCI can only be configured for manual failover.
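For illustration, here is a minimal T-SQL sketch of that restriction, using the instance names from the setup above (the database name, domain, and ports are assumptions, not taken from this thread):

-- FCI-hosted replicas must be created with FAILOVER_MODE = MANUAL;
-- SQL Server rejects AUTOMATIC for a replica hosted by an FCI.
CREATE AVAILABILITY GROUP [ClusterAG]
FOR DATABASE [newAppDB]   -- hypothetical database name
REPLICA ON
    N'newAppExistingFCI' WITH (
        ENDPOINT_URL      = N'TCP://newAppExistingFCI.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL),
    N'LegacyAppNewFCI' WITH (
        ENDPOINT_URL      = N'TCP://LegacyAppNewFCI.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));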
You can also check this blog (http://www.mssqltips.com/sqlservertip/3150/adding-sql-server-alwayson-availability-groups-to-existing-failover-clusters/) about how to set up the whole scenario.
December 30, 2014 at 2:37 am
SQLKnowItAll (12/29/2014)
We are building architecture to have 2 FCIs set up in a HAG.
Setup:
Hypervisor = VMWARE
OS = SQL 2012 R2
SQL = SQL Server 2012 Enterprise Ed. (of course)
SAN = EMC VNX
The smart choice there is the use of Windows 2012 R2.
SQLKnowItAll (12/29/2014)
ManagementCluster:
  ClusterA:
    Node 1 = DBA01
    Node 2 = DBA02
  ClusterB:
    Node 3 = DBARPT01
    Node 4 = DBARPT02
  ClusterAG:
    HAGListener
Now we come to the not so smart part, and I'll detail why below.
SQLKnowItAll (12/29/2014)
I am being told that the storage for ClusterA and ClusterB need to be presented to ALL nodes. Is this true?
Total rubbish. The only time you'll expose the shared storage for an FCI to all nodes in a cluster is if you actually intend to install the FCI on all nodes in the cluster.
SQLKnowItAll (12/29/2014)
Also, in case anyone needs use case, ClusterB will be read only and will NEVER be failed over to as primary. The purpose of this setup is strictly read-only mirroring.
A standard AlwaysOn Availability Group configuration requires that all nodes that will host a replica (FCI or standalone) be part of the same Windows Server Failover Cluster; this is a requirement, not an option.
The scenario you have depicted is only supported for migration purposes.
On the other hand, if you truly do not intend to ever fail over to cluster B, then it should never be an issue. However, in my experience the initial requirements are always departed from and the goalposts moved. When that happens you're facing a system redesign/rebuild.
You also need to consider what will happen when both clusters are acting independently. Your read-only routing setup will specify a replica that's part of a separate WSFC; what happens when you're directing read-intent connections and ClusterB is going through a failover scenario?
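To make that concrete, this is roughly what the routing configuration looks like in T-SQL; treat it as a hedged sketch, since the port and FQDN are assumptions:

-- Where read-intent connections land while LegacyAppNewFCI is a
-- readable secondary:
ALTER AVAILABILITY GROUP [ClusterAG]
MODIFY REPLICA ON N'LegacyAppNewFCI' WITH (
    SECONDARY_ROLE (
        ALLOW_CONNECTIONS = READ_ONLY,
        READ_ONLY_ROUTING_URL = N'TCP://LegacyAppNewFCI.contoso.com:1433'));

-- How the primary redirects ApplicationIntent=ReadOnly connections
-- that arrive via the listener:
ALTER AVAILABILITY GROUP [ClusterAG]
MODIFY REPLICA ON N'newAppExistingFCI' WITH (
    PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = (N'LegacyAppNewFCI')));

While that FCI is mid-failover, the routing target is unreachable, and that window is exactly the scenario being asked about.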
You're deploying AlwaysOn groups, which have no reliance on shared storage, and then immediately throwing in multiple FCIs, which have a heavy reliance on shared storage. I'm glad I'm not the one who's financing this scenario.
My advice would be to go back to the drawing board and review the options available to you.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
December 30, 2014 at 5:36 am
Hi Perry,
Let me clarify, as I believe I didn't do such a good job. The OS is Windows Server 2012 R2 (not SQL lol). My bad... Edited to make it clearer above. Also, all 4 nodes are part of the same WSFC. (Also edited to make that clearer.)
Do your comments still apply? I.e., in order for the WSFC to see the storage for the 2 nodes on 1 SQL FCI, do the others have to have it presented and just not use it? Maybe that's the part I don't understand.
Also, since I know you have experience here... Is this the right way to do this?
Basically we have 2 applications. One is legacy and the other is new. newApp completely uses the current SQL FCI as its reporting/OLTP database. It is not completely rolled out to all "branches." legacyApp now needs to read this data, but they don't want its read operations to interfere with the OLTP operations until they really optimize their code (at least they realize it needs optimizing... Entity Framework). However, they still need High Availability on the read side. So, this was my suggestion to avoid Transactional Replication. What do you think?
Jared
CE - Microsoft
December 30, 2014 at 5:50 am
SQLKnowItAll (12/30/2014)
Also, all 4 nodes are part of the same WSFC. (also edited to make clearer)
OK, that wasn't clear before.
SQLKnowItAll (12/30/2014)
Do your comments still apply? I.e., in order for the WSFC to see the storage for the 2 nodes on 1 SQL FCI, do the others have to have it presented and just not use it? Maybe that's the part I don't understand.
Comments regarding the presentation of the storage?
Yes, it's perfectly valid to present each FCI's shared storage only to the nodes that will actually host that FCI; the remaining nodes don't need to see it.
SQLKnowItAll (12/30/2014)
Also, since I know you have experience here... Is this the right way to do this?
I wouldn't class it as the right way to do it; it differs with each environment. First questions are: is it multi-site, what's your quorum design, and why an FCI for the read-only side?
SQLKnowItAll (12/30/2014)
Basically we have 2 applications. One is legacy and the other is new. newApp completely uses the current SQL FCI as its reporting/OLTP database. It is not completely rolled out to all "branches." legacyApp now needs to read this data, but they don't want its read operations to interfere with the OLTP operations until they really optimize their code (at least they realize it needs optimizing... Entity Framework). However, they still need High Availability on the read side. So, this was my suggestion to avoid Transactional Replication. What do you think?
I see no point in creating an FCI for a reporting database, but that's just me; a single standalone instance as a replica should suffice.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
December 30, 2014 at 9:16 am
Hi Perry,
It will not be multi-site, quorum will be a disk since it is not multi-site, and the only reason it will be an FCI on the reporting side is because it is not just "reporting" but information for the other branches that needs to be highly available for the legacy application. Basically, since there is no integration between the legacy and new systems, this gives the branches still using legacy information as to what has happened in the new system. It's not a typical reporting server.
All that being said, do you have any further thoughts?
Jared
CE - Microsoft
December 31, 2014 at 3:03 am
SQLKnowItAll (12/30/2014)
All that being said, do you have any further thoughts?
Yes, if your cluster were a 3-node cluster you wouldn't need a disk-based quorum; Majority Node Set would be used.
You could have a standalone replica on a 3rd node and the FCI on nodes 1 and 2. Read-only routing would direct read connections to the standalone replica, and the FCI would be specified as a fallback read replica, which in theory would only ever be hit for very short periods of time: the time it takes to migrate the VM or deploy a new one. You'll also place fewer requirements on the host server, as you only require 3 VMs and not 4.
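A minimal sketch of that priority, assuming a hypothetical standalone instance named STANDALONE01 (in SQL Server 2012 the routing list is tried strictly in order, so the FCI only serves read-intent traffic when the standalone replica is unavailable):

-- On the primary (the FCI): prefer the standalone replica for reads,
-- with the FCI itself as the fallback read target.
ALTER AVAILABILITY GROUP [ClusterAG]
MODIFY REPLICA ON N'newAppExistingFCI' WITH (
    PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = (N'STANDALONE01', N'newAppExistingFCI')));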
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
January 26, 2015 at 11:55 am
Not to beat a dead horse... but does anyone see any issues with the quorum disk moving between nodes of this 4-node cluster, where only nodes A and B will house InstanceA and only nodes C and D will house InstanceB? I can't think of any strange circumstances offhand, but would like any input.
Jared
CE - Microsoft
January 27, 2015 at 11:59 am
I will chime in with my thoughts here to add to what Perry has said... If you need three instances of SQL (one for the newApp OLTP, one for newApp read-only intent, and one for legacyApp) in an HA config, it might be best to go with a 4-node cluster running three SQL FCIs. The storage for each FCI would be presented to each node of the cluster.
If you only need the separation of newApp OLTP and newApp read-only intent, a 3-node, 2-FCI cluster would probably be a good way to go.
In the environment I'm in, we only do two different FCIs within the same WSFC when we are splitting between two data centers (i.e., dc1_node_a and dc1_node_b with dc1_storage and a dc1 FCI, and likewise for dc2). That gives us HA in one data center with the DR component of swapping between data centers. In your scenario, I don't think that level of setup complexity best meets your need.
January 27, 2015 at 12:06 pm
does anyone see any issues with the quorum disk moving between nodes of this 4-node cluster, where only nodes A and B will house InstanceA and only nodes C and D will house InstanceB? I can't think of any strange circumstances offhand, but would like any input.
I don't think so. The goal of the quorum disk in a "typical" cluster config is to maintain the WSFC quorum (4 nodes + 1 disk = 5 votes, so you could potentially lose 2 nodes and still maintain quorum). If you choose to only install FCI1 (InstanceA) on nodes A and B and FCI2 (InstanceB) on nodes C and D, that should work. Personally, I would install InstanceA and InstanceB across three nodes and save a server. Node A can host InstanceA primarily, node B can host InstanceB primarily, and node C can be a failover option for either instance.
The quorum in Windows Server 2012 R2 (maybe 2012 as well) also has a nice feature that automatically adjusts the quorum votes to keep availability up. So, with that, I think you could actually drop to two votes and still have quorum before the WSFC went down.
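If you want to watch those votes from within SQL Server, the AlwaysOn cluster DMVs expose them; a small sketch (the DMVs are standard, the interpretation in the comments is mine):

-- Quorum model and health of the underlying WSFC:
SELECT cluster_name, quorum_type_desc, quorum_state_desc
FROM sys.dm_hadr_cluster;

-- One row per cluster member (nodes plus any witness); with dynamic
-- quorum the number_of_quorum_votes column adjusts as nodes leave.
SELECT member_name, member_type_desc, member_state_desc,
       number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;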
January 28, 2015 at 3:27 am
S. Kusen (1/27/2015)
The storage for each FCI would be presented to each node of the cluster.
No, see above. To recap, you'll usually only unmask the shared storage to the nodes where you actually want to install the FCI, and just because you have 4 nodes in your cluster doesn't mean you'd necessarily install the FCI across all nodes.
S. Kusen (1/27/2015)
If you only need the separation of newApp OLTP and newApp read-only intent, a 3-node, 2-FCI cluster would probably be a good way to go.
Again, no. The OP is planning to utilise AlwaysOn Availability Groups, which are designed to remove the shared storage dependency. By introducing FCIs into the group you lose all automatic failover ability. More on the restrictions of an FCI in an AO group can be read here.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
January 28, 2015 at 3:29 am
SQLKnowItAll (1/26/2015)
Not to beat a dead horse... but does anyone see any issues with the quorum disk moving between nodes of this 4-node cluster, where only nodes A and B will house InstanceA and only nodes C and D will house InstanceB? I can't think of any strange circumstances offhand, but would like any input.
No, the quorum disk would be the only shared storage item exposed to all nodes, regardless of where the FCIs are to be installed. The key goal is to stop relying on shared storage: either design the cluster appropriately so that it relies on Majority Node Set, or use a file share witness.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
January 28, 2015 at 3:33 am
S. Kusen (1/27/2015)
The quorum in Windows Server 2012 R2 (maybe 2012 as well) has a nice feature to automatically adjust the quorum to keep availability up as well. So, with that, I think you could actually drop to two votes and still have a quorum before the WSFC went down.
In Windows 2012 R2 the quorum has been enhanced to provide the capability you describe. Note, however, that even with this it is possible to completely lose quorum if nodes fail or shut down suddenly/ungracefully.
See my stairway level at this link for more detail on cluster quorum.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
January 28, 2015 at 7:29 am
Perry Whittle (1/28/2015)
S. Kusen (1/27/2015)
The storage for each FCI would be presented to each node of the cluster.
No, see above. To recap, you'll usually only unmask the shared storage to the nodes where you actually want to install the FCI, and just because you have 4 nodes in your cluster doesn't mean you'd necessarily install the FCI across all nodes.
I knew that was possible. In the solution I had proposed, I was suggesting to install each FCI on each node for optimal failover capability (multi-node failures).
S. Kusen (1/27/2015)
If you only need the separation of newApp OLTP and newApp read-only intent, a 3-node, 2-FCI cluster would probably be a good way to go.
Again, no. The OP is planning to utilise AlwaysOn Availability Groups, which are designed to remove the shared storage dependency. By introducing FCIs into the group you lose all automatic failover ability. More on the restrictions of an FCI in an AO group can be read here.
My apologies on this. I didn't catch that OP was looking to remove the dependency on shared storage. I had merely suggested an HA solution that I thought would fit the need.
January 28, 2015 at 7:56 am
Perry Whittle (1/28/2015)
SQLKnowItAll (1/26/2015)
Not to beat a dead horse... but does anyone see any issues with the quorum disk moving between nodes of this 4-node cluster, where only nodes A and B will house InstanceA and only nodes C and D will house InstanceB? I can't think of any strange circumstances offhand, but would like any input.
No, the quorum disk would be the only shared storage item exposed to all nodes, regardless of where the FCIs are to be installed. The key goal is to stop relying on shared storage: either design the cluster appropriately so that it relies on Majority Node Set, or use a file share witness.
So, as a final response (I hope): my goal is not to reduce shared storage; in fact, my goal is to decrease redundant storage (in this case only). Also, I don't need automatic failover for the HAG; in fact, I want it manual. All that being said, do you think the suggestion mentioned above, having 1 node passive for both instances with the other 2 nodes each active for one, is a good way to get a node-majority setup without a quorum disk for my current situation? Otherwise, if I keep the original plan with 4 nodes all in the same data center/subnet, is a file share witness the way to go? I only ask because I have read in several places that a file share witness does not hold a full copy of the cluster database, only some metadata, and that if the nodes are not multi-site it is suggested to use a disk witness.
Jared
CE - Microsoft