May 14, 2013 at 5:58 pm
How would you approach it if you need to install as follows:
Prod site: FCI over 2 nodes
DR: FCI over 2 (or a single) nodes
AlwaysOn between the two FCIs
So all 3 (or 4) nodes must be part of the same WSFC. What if they were both already running prior to your AlwaysOn plans? How do you 'combine' the two separate WSFCs then?
Then there is the shared storage: each FCI set has shared storage only between its own nodes, not between sites. How would you set this up in Cluster Manager? How do you validate the cluster and tell it to validate the storage only between the 2 Prod nodes, and separately validate the storage for the 2 DR nodes only, if all 4 nodes are in the same cluster now?
May 24, 2013 at 11:57 am
...tell it to validate the storage only between the 2 Prod nodes, and separately validate the storage for the 2 DR nodes only, if all 4 nodes are in the same cluster now?
That's what I am struggling with now in my virtual lab:
node1 > storage1
node2 > storage1
works like a charm. Then I added
node3 and node4, and I want to add storage only for node3 and node4, configure a second WSFC there, and later run AO between them.
My final idea is:
node1, node2: same DC1, part of a 4-node metro cluster, hosting SQL FCI "DC1SQL"
node3, node4: same DC2, part of a 4-node metro cluster, hosting SQL FCI "DC2SQL"
AO from DC1SQL to DC2SQL
seems impossible...
May 24, 2013 at 5:57 pm
salkasalka (5/24/2013)
node3 and node4, and I want to add storage only for node3 and node4, configure a second WSFC there, and later run AO between them.
My final idea is:
node1, node2: same DC1, part of a 4-node metro cluster, hosting SQL FCI "DC1SQL"
node3, node4: same DC2, part of a 4-node metro cluster, hosting SQL FCI "DC2SQL"
AO from DC1SQL to DC2SQL
seems impossible...
I got mine to work in the end, as follows:
* All nodes HAVE to be in the same WSFC (as per the documentation); you cannot add a node to a WSFC if it is already part of another one.
* There is no magical 'merging' of two separate WSFCs.
* Pre-validate your 2 DC1 nodes, including storage. Then pre-validate your 2 DC2 nodes, including storage. When all is good, create your 4-node cluster and run validation, excluding storage.
* From there it's just a matter of setting possible owners on resources.
* Your 2 FCIs should have the same drive letters and paths to make AO easier to manage. Bringing them into the same WSFC is not a problem, but while the disks sit in the default Available Storage resource group you obviously cannot assign the same drive letter twice. So at this stage you create your Roles in the WSFC, one for each FCI. First bring all the Available Storage online, say on node1; half of the disks will stay offline, because they can physically only connect to nodes 3 and 4. Create your Role and assign those disks to it. This takes them out of Available Storage, so you can reuse the drive letters before creating Role2 for your FCI2.
* Now you have 2 Roles/resource groups ready, so during the clustered SQL install you just select the pre-staged Role.
* When both FCIs are installed, make sure the possible owners are correct on all resources (the AG creation will actually fail/warn if some are incorrect).
* Create the AG and Listener and you're good to go!
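For what it's worth, the validation, cluster-creation and possible-owner steps above can be sketched with the FailoverClusters PowerShell module. The cluster, node and resource names below are placeholders from my lab; adjust them to your environment:

```powershell
# Validate each site's node pair with its own storage first
Test-Cluster -Node node1, node2          # DC1 pair, storage tests included
Test-Cluster -Node node3, node4          # DC2 pair, storage tests included

# Then create the 4-node cluster and validate it while skipping storage
New-Cluster -Name METROCLUS -Node node1, node2, node3, node4 -NoStorage
Test-Cluster -Node node1, node2, node3, node4 -Ignore Storage

# Later, restrict each FCI's resources to its own site's nodes
Get-ClusterResource -Name "SQL Server (DC1SQL)" |
    Set-ClusterOwnerNode -Owners node1, node2
Get-ClusterResource -Name "SQL Server (DC2SQL)" |
    Set-ClusterOwnerNode -Owners node3, node4
```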
May 25, 2013 at 1:05 am
It's working, thank you! 🙂 A great help was also the MS whitepaper (link provided by Marios Philippopoulos).
May 25, 2013 at 6:22 am
Hey all, thanks for the feedback.
I ended up opening a case with Microsoft, asking them about the requirement for different instance names. After some back and forth, Microsoft has confirmed that the requirement for different instance names does indeed apply in the scenario of 2 Failover-Cluster Instances (FCIs) on 2 different subnets, as part of the same WSFC cluster.
In my case - one FCI on subnet1 and one standalone instance on subnet2 - the instance names can be the same;
Microsoft supports this configuration.
I hit a couple of snags, however, while configuring this setup:
(1) While installing the SQL standalone instance on subnet2 I got this error:
The directory d:\xxxxxxxxx\xxxxx is not a valid path. Standalone sql server on wfc cluster cannot have files on shared disk locations
The remedy for this was to evict that node from the WSFC cluster, do the SQL install, and add it back.
(2) Error while creating the Availability Group:
Failed to create, join or add replica to availability group 'MYAG1', because node 'MyNode3' is a possible owner for both replica 'SRVS1\Instance1' and 'SRVPH1\Instance1'. If one replica is failover cluster instance, remove the overlapped node from its possible owners and try again.
The cure here was to go to Failover Cluster Manager and deselect the 3rd node, SRVPH1 (where the standalone SQL instance was located), from the list of possible owners of the FCI.
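The same change can also be made from PowerShell instead of Failover Cluster Manager; a minimal sketch, assuming the FCI's resource is named "SQL Server (Instance1)" and its two cluster nodes are SRVS1A and SRVS1B (all placeholder names):

```powershell
# Limit the FCI's SQL Server resource to the two FCI nodes only,
# which removes the standalone node SRVPH1 from the possible owners
Get-ClusterResource -Name "SQL Server (Instance1)" |
    Set-ClusterOwnerNode -Owners SRVS1A, SRVS1B
```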
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
August 31, 2013 at 1:52 am
Perry,
Your article suggested you could make the single Listener HA. Could you explain how this is achieved, please?
January 27, 2014 at 5:16 pm
Thanks for this article, but you don't actually say HOW to do this... I want to set up an AO group for an existing failover cluster, and it isn't clear which wizard or settings to use (do I specify the nodes, the VIP, or something else when setting up the cluster and replicas?). Documentation on this is very sparse.
January 27, 2014 at 7:27 pm
Hikmer (1/27/2014)
Thanks for this article, but you don't actually say HOW to do this... I want to set up an AO group for an existing failover cluster, and it isn't clear which wizard or settings to use (do I specify the nodes, the VIP, or something else when setting up the cluster and replicas?). Documentation on this is very sparse.
That's because this is not a deployment guide! That's a whole separate article. Deploying an AO group is no different when using an FCI replica; you just need to be aware of the increased complexity. This guide shows the typical wizard options when creating the AO group.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
January 27, 2014 at 8:25 pm
Thanks, yes I have deployed an AlwaysOn availability group and those instructions are straightforward... what I am lacking is a guide to combining failover clustering with AlwaysOn availability groups. I have two 2-node failover cluster environments (SQL 2012 with shared disks, etc.) and want to create an availability group between them for added redundancy and DR... however I cannot figure out how to do it, or even if it is possible. People alluded that it is online, but I have yet to find this mystical guide. One article combines two different technologies; the other walkthrough leaves failover clustering out completely. Or am I wrong?
PS: You also don't mention how to create the secondary database to join in the AlwaysOn wizard, and while trivial to some, it was a step I had some issues with, as I needed to leave it in a recovering state. Not trying to be a jerk, just struggling with this, as Microsoft doesn't seem to have any good articles on this subject matter.
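For reference, the step that tripped me up looks like this in T-SQL: restore the database and log on the secondary WITH NORECOVERY so it stays in the recovering state, then join it to the group. The database, share and AG names below are just examples:

```sql
-- On the secondary replica: restore WITH NORECOVERY so the
-- database is left in the recovering state the wizard expects
RESTORE DATABASE MyAGDatabase
    FROM DISK = N'\\BackupShare\MyAGDatabase.bak'
    WITH NORECOVERY;
RESTORE LOG MyAGDatabase
    FROM DISK = N'\\BackupShare\MyAGDatabase.trn'
    WITH NORECOVERY;

-- Then join it to the availability group
ALTER DATABASE MyAGDatabase SET HADR AVAILABILITY GROUP = MyAG1;
```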
January 28, 2014 at 12:29 am
Hikmer (1/27/2014)
what I am lacking is a guide to combine Fail-over Clustering with AlwaysOn Availability.
I have a guide being published soon on SSC showing how to achieve this, starting from scratch.
Hikmer (1/27/2014)
I have two 2-node failover cluster environments (SQL 2012 with shared disks, etc.) and want to create an availability group between them for added redundancy and DR... however I cannot figure out how to do it, or even if it is possible.
All nodes that host an instance, clustered or non-clustered, that is to participate in an AO group must be part of the same Windows cluster. What you are attempting is possible, but only for migration purposes; it is not a supported permanent solution, as objects (like the listener) cannot move between clusters.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
July 17, 2014 at 2:21 pm
I have a question about storage. I have a 3-node Windows cluster: 2 FCIs on SAN storage and 1 node on a separate SAN. For AlwaysOn you need to have the same lettered drives, correct? And you should be able to do that with a Windows cluster? My storage guy is telling me they can't be the same drive letters if they are in the same Windows cluster.
July 25, 2014 at 11:28 am
3-node Windows 2012 failover cluster
SQL 2012 installed on node 1 and node 2
node 1 and node 2 -> instance 1 using V: on SAN as clustered storage
From node 1, V: drive information (get-disk ## | select-object *):
* IsClustered : True
node 3 -> no instance yet, using V: on SAN as standalone storage
From node 3, V: drive information (get-disk ## | select-object *):
* IsClustered : False
To set the drive letter to V: in 2012, as V: is already in the cluster, we had to use Computer Management.
I'm thinking the SQL install is seeing the cluster V: and not the local V:.
So, how do we get SQL to see the local V:, and *should* SQL be installed on the local V: instead of an available drive letter?
Error message when trying to install SQL on node 3 V:
The directory V:\{path} is not a valid path. Standalone SQL Server on WFC cluster cannot have the files on shared disk locations.
July 29, 2014 at 5:38 am
david.puckett 38841 (7/25/2014)
3-node Windows 2012 failover cluster; SQL 2012 installed on node 1 and node 2
node 1 and node 2 -> instance 1 using V: on san as clustered storage
from node 1 v: drive information (get-disk ## | select-object *)
* IsClustered : True
node 3 -> no instance yet using V: on san as standalone storage
from node 3 v: drive information (get-disk ## | select-object *)
* IsClustered : False
To set the drive letter to V: in 2012 as V: is already in the cluster we had to use "Computer Manager"
I'm thinking the SQL install is seeing the cluster V: and not the local V:
So, how to get SQL to see the local V: and *should* SQL be installed on the local V: instead of an available drive letter.
Error message when trying to install SQL on node 3 V:
The directory V:\{path} is not a valid path. Standalone SQL Server on WFC cluster cannot have the files on shared disk locations.
You need to:
(1) Evict the standalone node from the WSFC cluster
(2) Install SQL Server on that node
(3) Add the node back to the cluster.
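Assuming the standalone node is node3 (a placeholder name), those three steps can be sketched in PowerShell, with the SQL Server setup itself running between the two cluster commands:

```powershell
# (1) Evict the standalone node from the WSFC cluster
Remove-ClusterNode -Name node3 -Force

# (2) Run the SQL Server setup on node3 as a normal standalone install

# (3) Add the node back to the cluster, leaving its storage alone
Add-ClusterNode -Name node3 -NoStorage
```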
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
July 29, 2014 at 6:17 am
Marios Philippopoulos (7/29/2014)
You need to: (1) Evict the standalone node from the WSFC cluster
(2) Install the sql on that node
(3) Add the node back to the cluster.
Be very careful here: when you introduce an FCI into a cluster that is to be used for AlwaysOn availability groups, a number of restrictions apply.
The cluster is checking the existing groups and has found that a clustered instance of SQL Server already exists on the drive/volume V:.
What happens if, at a later date, an administrator comes along and runs the Add Node wizard on one of these cluster nodes?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 12, 2014 at 10:12 am
I still need to re-read this article a few times to make sure I understand everything, but I do have one question after the first read-through. When combining FCIs and availability groups, are standalone (non-FCI) servers required? Or is that just how you designed this particular example?
Viewing 15 posts - 16 through 30 (of 44 total)