Active/Active Clustering - Max Memory

  • I'm just curious as to what people do with an Active/Active cluster in regard to the max memory option.

    I'm building an Active/Active cluster with one SQL instance on each node, set up so that they're able to fail over to each other.

    Both servers have 64GB of memory, and I plan on setting the max server memory value to 55GB for each instance. I can see a memory issue occurring, though, if one instance fails over to the other active node with both instances capped at 55GB.

    How do you all handle this? An obvious solution is to assign 28GB of RAM to each instance, but that wastes most of the RAM in each server to begin with, and I don't want to do that.

  • Increase the RAM gradually. First, check the memory requirement of the instance with this script:

    select * from sys.dm_os_performance_counters
    where counter_name like 'page life%'

    If this value (Page Life Expectancy, measured in seconds) is less than 3000, then you should increase the RAM; otherwise, increasing the RAM will not give any performance benefit.
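    A slightly tighter version of that check, assuming you only want the instance-wide counter (the Buffer Manager object reports a single Page Life Expectancy value; on NUMA machines the Buffer Node object also reports one value per node):

    ```sql
    -- Instance-wide Page Life Expectancy, in seconds.
    SELECT [object_name],
           cntr_value AS page_life_expectancy_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND [object_name] LIKE '%Buffer Manager%';
    ```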

    How large are the databases on each instance?

    I think you should start with a 30GB max memory setting on each node; you can assign more RAM later as required.

    Regards,
    Syed Jahanzaib Bin Hassan
    BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA

    My Blog
    www.aureus-salah.com

  • Thanks Syed, but how would I solve this if I end up assigning more than 32GB per instance?

  • PhilipC, you should leave 4GB of RAM for OS operations, so you have 60GB of usable RAM on each machine. Note that you are defining the MAX memory setting, not the minimum, so an instance's memory will grow if more is available on a node.

    Example

    =====

    If you have defined a 30GB max memory setting on each machine, and one node goes down so its instance moves to the other node, the instance will allocate RAM as needed, if available:

    Total RAM 64GB = 4GB for the OS + 30GB for instance 1 + 30GB for instance 2

    You could also define 32GB on each node, but then no RAM would be left for the OS, paging would occur, and performance would degrade.
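    For reference, that 30GB cap could be applied on each instance with sp_configure (30720MB is just the 30GB figure from the example above; 'max server memory (MB)' is an advanced option, so it has to be made visible first):

    ```sql
    -- Expose advanced options, then cap this instance at 30GB (30720 MB).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 30720;
    RECONFIGURE;
    ```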

    Regards,
    Syed Jahanzaib Bin Hassan
    BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA

    My Blog
    www.aureus-salah.com

  • PhilipC (5/29/2011)


    but that just wastes the amount of the RAM in the server to begin with and I don't want to do that.

    That's the whole point of clustering. Nodes are over-specced to be able to handle the workloads that may be passed to them.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Syed/Perry - surely it makes more sense to use the bulk of the memory available, and on failover adjust the max memory value to suit? Otherwise you're just wasting perfectly good RAM, is how I see it.

  • That is unfortunately a downside of clusters: wasted RAM and possibly even CPU. If you catch the system upon failover and trim the memory back, fine, but what happens if it fails over while you're on holiday? 😉

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • That was where I was hoping there was a way of triggering an update on failover to automatically adjust the max memory value(s).

  • Say Node 1's max memory setting is 32GB and Node 2's max memory setting is also 32GB.

    If Node 2 goes down and its instance moves to Node 1, then, if the first instance has memory available, SQL Server will release it for the second instance. If the first instance has already allocated all of the memory and none is available, the second instance may have problems starting.

    As I said in the previous post, first check the RAM usage and increase gradually. What is the size of the databases on each instance?

    You can use the -f startup parameter to start the second instance with the default (minimal) configuration, which includes the default max memory setting.

    Regards,
    Syed Jahanzaib Bin Hassan
    BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA

    My Blog
    www.aureus-salah.com

  • Well, I have never bothered with this myself, but if you query

    SERVERPROPERTY('ComputerNamePhysicalNetBIOS')

    this returns the actual node name the instance is active on.

    Apply some logic: check this for both instances, and if the result matches (i.e. both instances are active on the same node), set the RAM back via sp_configure.
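    A minimal sketch of that logic, assuming a linked server named OTHER_INSTANCE pointing at the second instance (the linked-server name and the 28GB trim value are illustrative, not part of the original post):

    ```sql
    -- Physical node this instance is currently running on.
    DECLARE @LocalNode sysname =
        CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS'));
    DECLARE @RemoteNode sysname;

    -- OTHER_INSTANCE is a hypothetical linked server to the second instance.
    SELECT @RemoteNode = node
    FROM OPENQUERY(OTHER_INSTANCE,
        'SELECT CONVERT(sysname,
             SERVERPROPERTY(''ComputerNamePhysicalNetBIOS'')) AS node');

    IF @LocalNode = @RemoteNode
    BEGIN
        -- Both instances are on the same node: trim this one back to 28GB.
        EXEC sp_configure 'max server memory (MB)', 28672;
        RECONFIGURE;
    END
    ```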

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (5/30/2011)


    well, I have never bothered with this myself, but if you query

    SERVERPROPERTY('ComputerNamePhysicalNetBIOS')

    this returns the actual node name the instance is active on.

    Apply some logic: check this for both instances, and if the result matches (i.e. both instances are active on the same node), set the RAM back via sp_configure.

    This is a good idea, as the min and max memory settings are dynamic in SQL Server.

    M&M

  • Hi PhilipC

    You set max memory based on leaving sufficient memory for the OS (e.g. if the OS needs 8GB, then set max memory = 56GB).

    You then set min memory to allow for the case where both instances are running on the same node.

    If you have found that both instances normally require more than 28GB, then you would set min memory = 28GB.

    This way memory is not wasted while each instance is running on its own node, but in the case of a failover there will not be an over-commitment of memory.
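    A sketch of that configuration, run on each instance (the 56GB/28GB figures assume the 64GB nodes and 8GB OS reserve from the example above):

    ```sql
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Let the instance grow to 56GB while it has a node to itself...
    EXEC sp_configure 'max server memory (MB)', 57344;
    -- ...but reserve it at least 28GB for when both instances share a node.
    EXEC sp_configure 'min server memory (MB)', 28672;
    RECONFIGURE;
    ```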

  • That's great, thanks very much Patrick. I knew there had to be something you could do to handle occasions like this.
