February 3, 2020 at 8:06 pm
Admingod wrote: Secondary Replica 2 (same location, sync mode for failover). There should be no blocking, unless you have some really bad queries, as the secondaries are read-only. Why would you want two different places to do reporting? See above regarding read-only routing; this replica will also need to be fully licensed.
Be very careful with this type of configuration. If the goal is to set up a secondary server to offload reads for the application and reports executed from within the application, then this could be a valid configuration. If the goal is to offload reporting (e.g., SSRS, Tableau, etc.) and data extracts, then this configuration could cause substantial performance issues on your production system.
It appears to me that you want a read-only secondary available for reporting and data extracts. In that case, this secondary node should only be accessible directly, not through the listener. That way you can grant users access to the reporting system without having to give them access to the primary database(s), and they cannot accidentally run something on production (e.g., they forgot to include ApplicationIntent on their connection, or the read-only secondary is down at the time, etc.).
Not sure I agree, Jeff. If there is a primary and a read-only secondary, and people use the secondary by connecting directly to it, you are creating a situation where they may actually be connecting to the primary in the event of a failover.
A common situation here is that the AG fails over, normally for patching. We are a 24/7 worldwide company. There are any number of reports, and certainly some power users performing ad-hoc queries, that are being run when the servers' roles are reversed.
By setting up read-only routing, and ensuring that the report connections, as well as the various data extracts and end users, use ApplicationIntent=ReadOnly, they are always hitting the secondary.
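For illustration, here is a minimal sketch of that kind of read-only routing setup in T-SQL. It assumes a hypothetical AG named AG1 with replicas NODE1 and NODE2 and a listener named ag1-listener; all names and URLs would need to be replaced with your own, and the statements are run on the current primary:

-- Allow read-only connections on each replica when it is a secondary, and set its routing URL.
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE1'
  WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE1'
  WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://node1.contoso.local:1433'));
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE2'
  WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE2'
  WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://node2.contoso.local:1433'));
-- Routing lists: whichever replica is currently primary routes read-intent connections to the other.
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE1'
  WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('NODE2','NODE1')));
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE2'
  WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('NODE1','NODE2')));
-- Report and extract connections then point at the listener with read intent, e.g.:
-- Server=tcp:ag1-listener,1433;Database=AppDb;ApplicationIntent=ReadOnly;MultiSubnetFailover=True

Note that read-only routing only kicks in when the connection specifies both ApplicationIntent=ReadOnly and an initial catalog (database) that belongs to the AG.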
Less common was replacing and/or upgrading the individual servers that make up the AG. We added new nodes and removed the old ones. Nobody had to change anything.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
February 4, 2020 at 9:07 pm
The OP wants to set up a read-only secondary for 'reporting' and extracts. I would not recommend allowing any external users access to a production system to perform data extracts.
In this configuration you would never fail over to this node and make it the primary (in fact, I would recommend setting its cluster node weight to 0, effectively giving you a 3-node voting cluster). It is built solely for reporting, data extracts, and other projects that require large sets of data to be manipulated. Additionally, you have finer control over who has access and can ensure that no one has access to the primary production instance directly. Those users only have access to that read-only secondary.
The OP outlined a 4-node cluster:
The instance would fail over between nodes 1 & 2 (you could even set that up to be automatic), fail over manually to node 3, and never fail over to node 4. This allows for a reduced environment for node 4; it does not have to have the same number of CPUs or the same memory, which can be quite costly with Enterprise Edition.
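As a rough sketch only, and assuming a hypothetical AG named AG1 with replicas NODE1 through NODE4 (NODE1 being the current primary, already synchronous with automatic failover), that failover topology would look something like this in T-SQL, run on the primary:

ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE2'
  WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE2'
  WITH (FAILOVER_MODE = AUTOMATIC);                          -- automatic failover between nodes 1 & 2
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE3'
  WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);            -- DR node, manual failover only
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE4'
  WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);            -- never a failover target
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE4'
  WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));     -- direct reporting connections only

The node weight mentioned above is a Windows cluster setting rather than a SQL Server one; it is changed through Failover Cluster Manager or the FailoverClusters PowerShell module, not T-SQL.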
By adding read-intent routing to node 4, you introduce complexity that isn't needed. If the OP requires offloading of reads to a secondary, that could be enabled on node 2 and maintained between those two nodes, with the understanding that reads will not be offloaded if one of those two nodes is offline or the system has been manually failed over to DR.
For licensing, the primary node and the read-only secondary would need to be licensed, but the other nodes would not (unless read-intent is enabled on them as well).
As to why you want separate locations: do you really want the EDW development team running large queries to extract data during business hours directly from your production system? Do you really want to give them access to your production instance and hope they don't forget to include ApplicationIntent in their connection when they start 'testing' a new extract against the main tables of your application? All it takes is one 'bad' query that creates a Cartesian product being 'accidentally' run on the primary instance...
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
February 21, 2020 at 11:32 pm
Thanks! Another question on the same scenario: Node1 (primary), Node2 (secondary, HA), and Node4 (secondary, reporting) are in the same location, and Node3 (secondary, async DR) is in the DR location. Let's say the primary location goes down and, after a manual failover, Node3 (DR) becomes the primary; the reporting users will not have Node4 available until the primary location is back online. Is there any option other than adding another replica?
February 22, 2020 at 3:50 pm
If you want reporting to be available in that situation, you would need an additional secondary in the DR location to support that requirement. However, in a disaster scenario the goal is to have at least the minimum required services available to support the business, not necessarily every single service.
You also need to consider how long you would be on the DR system, and whether reporting being down that long is going to be a significant enough issue for the business.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
February 23, 2020 at 1:52 pm
Thanks! Another question on the same scenario: Node1 (primary), Node2 (secondary, HA), and Node4 (secondary, reporting) are in the same location, and Node3 (secondary, async DR) is in the DR location. Let's say the primary location goes down and, after a manual failover, Node3 (DR) becomes the primary; the reporting users will not have Node4 available until the primary location is back online. Is there any option other than adding another replica?
If you want everything to work on the DR site you've failed over to the same way it did on the primary site, then the DR site pretty much has to be identical to the primary site. It's kind of like having a spare car... it may do if it doesn't have windshield wipers or a spare tire, but it's just not going to serve its purpose if it has no steering wheel and only two wheels.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 19, 2020 at 10:17 pm
Thanks, you guys are awesome! Another thought that came to mind is building our production high-availability AG pair of servers as two 2-node FCI clusters instead of a single node for each. This adds cost, but the benefit would be additional redundancy for each replica and relatively seamless failover for Windows Server upgrades. Other than the cost and the extra setup and management (which may be overkill, since it combines an AG with failover clustering), do you see any caveats? Thanks in advance!
March 20, 2020 at 5:37 pm
Not sure I understand the new configuration you are looking at, but having an FCI and an AG in the same cluster does work. You cannot span two clusters, though... unless you build out a distributed availability group, and that does get much more complicated.
I utilize a 3-node cluster now, with 2 nodes set up in an FCI and the third node configured for reporting. Only the databases needed for reporting are set up in the AG; there is no listener configured and no failover. The third node has a node weight of zero and half the CPUs and memory configured for production.
Users that access the read-only secondary only have access to that node directly; they cannot access the production instance in any way (again, no listener). This provides HA and minimal downtime for patches, and we can take down the reporting node without impacting the cluster.
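For illustration only, the shape of that reporting AG in T-SQL might look like the sketch below; every name here (ReportingAG, the ReportDB databases, the FCI network name, NODE3, and the endpoint URLs) is hypothetical and not from the thread. It is run on the FCI instance, and no listener is ever created:

CREATE AVAILABILITY GROUP [ReportingAG]
FOR DATABASE [ReportDB1], [ReportDB2]    -- only the databases needed for reporting
REPLICA ON
  N'FCINETNAME\PROD' WITH (              -- the FCI (primary)
    ENDPOINT_URL = N'TCP://fci.contoso.local:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL),
  N'NODE3' WITH (                        -- the reporting node
    ENDPOINT_URL = N'TCP://node3.contoso.local:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
-- On NODE3, the instance still has to join the AG and the restored databases have to be joined:
-- ALTER AVAILABILITY GROUP [ReportingAG] JOIN;
-- ALTER DATABASE [ReportDB1] SET HADR AVAILABILITY GROUP = [ReportingAG];

Reporting users are then granted logins only on NODE3 and connect to it by name; since there is no listener, there is no path to the production instance.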
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
March 20, 2020 at 8:20 pm
Thanks, Jeffrey. So you basically have a 2-node setup for the FCI, and the 3rd node is just an AG replica for reporting, right? No DR then? In my case I will have both an AG and an FCI for the benefit of additional redundancy.
Sorry, let me rephrase and give some more info. Hope this helps you provide any recommendations you can think of.
So the plan for my AG setup was something like this: in our main data center I will have Node1, Node2, and Node4; however, the listener would cover Node1, Node2, and Node3 (Node3 being on a DR site in a different location). Node1 to Node2 is synchronous and Node1 to Node3 is asynchronous.
Node1 - HA and OLTP
Node2 - HA, read-only traffic, backups, DBCC
Node3 - DR site
Node4 - Ad hoc and reporting source
Now I am thinking of building the HA pair of servers, i.e. Node1 and Node2, as two 2-node FCI clusters instead of a single node for each. As I said previously, other than the cost and the management overhead, do you see any caveats?
March 21, 2020 at 4:28 pm
Correct, we don't have a DR node at this time... but it would be easy to add another node to the cluster and build out an AG for all databases.
You cannot set up two 2-node clusters unless you build out a distributed AG, and that gets much more complicated. If the goal is to build out a DR system that can take on all functionality, then yes, set up a distributed AG with the same number of nodes in the DR site that you have in the production site.
Please review the documentation related to distributed AGs; it will clarify the setup and configuration.
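For reference, a distributed AG is layered on top of two ordinary AGs, one per cluster, each with its own listener. A minimal sketch in T-SQL, with every name and URL hypothetical, run on the primary of the production AG:

CREATE AVAILABILITY GROUP [DistAG]
  WITH (DISTRIBUTED)
  AVAILABILITY GROUP ON
    'ProdAG' WITH (
      LISTENER_URL = 'TCP://prod-ag-listener.contoso.local:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE = MANUAL,
      SEEDING_MODE = AUTOMATIC),
    'DrAG' WITH (
      LISTENER_URL = 'TCP://dr-ag-listener.contoso.local:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE = MANUAL,
      SEEDING_MODE = AUTOMATIC);
-- Then, on the primary replica of the DR availability group:
-- ALTER AVAILABILITY GROUP [DistAG] JOIN
--   AVAILABILITY GROUP ON 'ProdAG' WITH (...), 'DrAG' WITH (...);

The two LISTENER_URL values point at the database mirroring endpoints (port 5022 by default), not at the normal SQL listener port.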
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
March 23, 2020 at 12:42 am
Thanks! The goal is not to have a DR system take on all the functionality. Node1, Node2, and Node4 would be on the primary site, and Node3 is the only one on the DR site. The listener would cover Node1, Node2, and Node3. So would I still need a distributed AG in order to set up an FCI? Maybe I am missing something?
March 23, 2020 at 5:20 pm
So are you saying two 2-node failover clusters with an AG cannot be configured as a traditional AG, and it has to be set up as a distributed AG? Can you explain one failover scenario that we cannot get with a traditional AG?
March 23, 2020 at 7:24 pm
I don't understand... if you have the following:
Node 1 - primary site, primary replica (HA and OLTP)
Node 2 - primary site, synchronous secondary (HA, read-only traffic, backups, DBCC)
Node 3 - DR site, asynchronous secondary
Node 4 - primary site, read-only secondary for ad hoc and reporting
Where does an FCI fit into this configuration? And where would the other FCI fit in? If you want to use FCI for HA, then you would set it up this way:
Node 1 - FCI, shared storage between nodes 1 & 2 (the primary instance)
Node 2 - FCI, shared storage between nodes 1 & 2
Node 3 - DR secondary (AG), asynchronous
Node 4 - read-only secondary for reporting (AG), asynchronous
This does not provide for a read-only secondary at the DR site, but if you have a situation where you are going to be running from DR for an extended time, you could add another node configured as a read-only secondary at the DR site.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
March 23, 2020 at 8:37 pm
Thanks, Jeffrey. You're awesome! Let's take the scenario below and ignore node 4. That means that for an FCI between Node1 & Node2 I would need 4 servers, and the AG would be configured between Node1, Node2, and Node3. So to get the same redundancy, DR would need another node, which means 2 servers for DR: a total of 6 servers, right? And the 3 pairs would be sharing storage? Correct me if I am wrong.
Node 1 - FCI, shared storage between nodes 1 & 2
Node 2 - FCI, shared storage between nodes 1 & 2
Node 3 - DR Secondary (AG) - asynchronous
March 23, 2020 at 9:04 pm
No, you would not need any additional servers, and there wouldn't be any additional AGs.
Nodes 1 & 2 have shared storage and since SQL Server is installed in the cluster we have an FCI between these 2 nodes and a single instance of SQL Server.
Nodes 3 & 4 are part of the Windows cluster - SQL Server is not installed in the cluster and storage is not shared. Each node has its own instance of SQL Server.
We now have 3 instances of SQL Server - with the primary instance installed in the cluster as an FCI across nodes 1 & 2. A separate instance of SQL Server on node 3 and a third instance on node 4.
We then set up AG1; this AG is configured from the primary instance to the secondary instance on node 3. All user databases are included in this AG. It is set up as asynchronous with manual failover and used for DR.
We then set up AG2; this AG is configured from the primary instance (FCI) to the secondary instance on node 4. Only the databases required for reporting are included; there is no need to configure all databases. It is set up as asynchronous with manual failover (but we will never fail over to this instance).
That's 4 servers (nodes). If you then want an additional read-only secondary for DR, you add another server (node 5) in the DR site and set up AG3, with the same databases as AG2, from the primary instance to the new instance on node 5. For access to the read-only secondaries, instead of using a listener you set up an entry in DNS, and when you fail over to the DR site you update DNS and switch the IP address from node 4 to node 5.
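For completeness, a hedged sketch of what the manual DR failover of AG1 could look like in T-SQL; the AG and node names follow the example above and are hypothetical:

-- Planned failover (primary still healthy): make the DR replica synchronous first.
-- Run this on the current primary, then wait for the databases to show SYNCHRONIZED
-- in sys.dm_hadr_database_replica_states:
ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'NODE3'
  WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
-- Then, on node 3, take over the primary role without data loss:
ALTER AVAILABILITY GROUP [AG1] FAILOVER;
-- Unplanned failover (primary site lost): on node 3, a forced failover is the only option
-- and can lose the most recent transactions:
-- ALTER AVAILABILITY GROUP [AG1] FORCE_FAILOVER_ALLOW_DATA_LOSS;

The DNS switch for the read-only reporting name is done outside SQL Server, in whatever DNS tooling you use.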
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
March 27, 2020 at 7:24 pm
Thanks! Let me think and see if I find any gaps in this setup. Can you think of any?