Do I need (is it recommended) a second quorum drive in an active/active cluster?

  • Hi,

    I have never installed an active/active cluster before.  Do I need a second quorum drive in an active/active cluster?

    Thanks.

  • No. The quorum contains the cluster configuration and you should only have one of these. Likewise, if you're installing on Windows Server 2003 Enterprise Edition, you'll only need one drive for MS DTC. You'll need separate drives for each of your SQL Server instances. So if you have active/active, you'll need at least 2 there. Breaking that down, here's what it would look like (minimum):

    1 Quorum

    1 MS DTC

    1 Data Drive for Instance 1

    1 Data Drive for Instance 2

    Of course, the usual rules for optimal performance, such as separating data and log files, also apply to a clustered instance. So it wouldn't be unusual to see multiple drives for each SQL Server instance in order to split out the transaction logs, etc.
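    To make that minimum layout concrete, here is a small Python sketch of how the drives map onto cluster resource groups. The group names and drive letters are invented for illustration, not taken from the thread; the point it encodes is that the quorum and MS DTC drives each appear once, each instance gets its own data drive(s), and a disk can belong to only one group, because a group is the unit that fails over between nodes.

```python
# Hypothetical minimum drive layout for a two-instance (active/active) cluster.
# Group names and drive letters are placeholders, not taken from the post above.
cluster_groups = {
    "Cluster Group":  {"disks": ["Q:"], "purpose": "quorum (only one per cluster)"},
    "MSDTC Group":    {"disks": ["M:"], "purpose": "MS DTC (one on Win2003 EE)"},
    "SQL Instance 1": {"disks": ["S:"], "purpose": "data for instance 1"},
    "SQL Instance 2": {"disks": ["T:"], "purpose": "data for instance 2"},
}

# A physical disk resource can belong to only one group, because the group
# is what fails over between nodes as a whole.
all_disks = [d for g in cluster_groups.values() for d in g["disks"]]
assert len(all_disks) == len(set(all_disks)), "a disk cannot live in two groups"

for group, info in cluster_groups.items():
    print(f"{group}: {', '.join(info['disks'])} -> {info['purpose']}")
```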

    K. Brian Kelley
    @kbriankelley

  • Brian,

    Thank you very much.  What confused me was a line in this article: "Only one node can own the quorum drive at a time." (http://www.sqlservercentral.com/columnists/bknight/clustering_a_sql_server_machine__2.asp?Tab=2)  So, if node A is the primary for Clust A and node B is the primary for Clust B, then both of them can write to the quorum drive regardless of who is the owner, right?  Thanks.

  • No. Brian's statement is correct: only one node owns the quorum at a time and that means only one node is going to see the quorum at a time. The quorum should only be for management of the cluster, which means there shouldn't be any user processes which touch it. Therefore, there should not be a need for both nodes to ever write to the quorum at any given time.

    K. Brian Kelley
    @kbriankelley

  • Brian, I guess I'm not following you on that last sentence (I guess I'm easily confused).

    The quorum drive is used by the Cluster Service to store its checkpoint and log files, which the Cluster Service uses to communicate between the nodes of the cluster.

    For example: Node A is the primary for Clust A and Node B is the primary for Clust B. Node A owns the quorum drive. If Node B fails, how would Clust B know to fail over to Node A if it wasn't writing to the quorum drive?

    Thanks.


  • The nodes don't use the quorum in order to keep track of who is alive and who isn't.

    One of the nodes is going to have the main cluster group. That group contains the name of the cluster, the IP address of the cluster, and the quorum drive. That node is also basically in charge of the cluster, and it is the only node that does any writing to the quorum drive.

    Now, as to how the nodes know who is alive or not... When you configure a cluster, you should have a private network connection between all of the nodes in the cluster. The nodes take turns pinging each other over that private network to verify the servers are alive. If a ping fails on the private network, the nodes will then try on the public networks the cluster is aware of (the networks the clients connect in on).

    In the event that a node becomes unavailable, the node in charge of the cluster will typically initiate the failover. If the node that was in charge of the cluster is the one that becomes unavailable, another node will attempt to seize control of the resources and bring the main cluster group back online. It'll also fail over any resources that were on the failed node.
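    Purely to illustrate the decision flow described above (heartbeat over the private network first, the public networks as a fallback, and the node in charge of the cluster driving the failover), here is a minimal Python sketch. It is not how the Cluster Service is actually implemented, and every function and node name in it is invented for the example.

```python
# Illustrative sketch of the liveness check and failover decision described
# above. This is NOT how the Windows Cluster Service is implemented; the
# function names, signatures, and node names are all invented for the example.
from typing import Callable, List

def node_is_alive(node: str,
                  private_ping: Callable[[str], bool],
                  public_pings: List[Callable[[str], bool]]) -> bool:
    """True if the node answers on the private heartbeat network or, failing
    that, on any of the public (client-facing) networks."""
    if private_ping(node):
        return True
    return any(ping(node) for ping in public_pings)

def handle_node_failure(failed_node: str, coordinator: str, nodes: List[str]) -> str:
    """Decide who drives the failover once a node is judged unreachable."""
    if failed_node == coordinator:
        # The node that owned the main cluster group (and the quorum drive) is
        # the one that died: a surviving node seizes the cluster group, then
        # fails over the dead node's other resources.
        survivor = next(n for n in nodes if n != failed_node)
        return f"{survivor} seizes the cluster group and fails over {failed_node}'s resources"
    # Otherwise the node in charge of the cluster initiates the failover.
    return f"{coordinator} fails over {failed_node}'s resources"

if __name__ == "__main__":
    nodes = ["NodeA", "NodeB"]
    # Pretend NodeB has stopped answering on both the private and public networks:
    if not node_is_alive("NodeB", lambda n: False, [lambda n: False]):
        print(handle_node_failure("NodeB", coordinator="NodeA", nodes=nodes))
```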

    K. Brian Kelley
    @kbriankelley

  • Brian,

    This helps.  Thanks for your help and time.
