February 11, 2011 at 7:40 am
Can I set up an active, active cluster such that if there is a failover to a single server one instance of SQL Server gets significantly more resources (memory in particular, but even CPUs) than the other instance?
But this should apply only when both instances are running on the same node; when they are running on different nodes, each can use whatever it wants. One instance is more mission-critical in terms of performance than the other (the other just really needs to be up, and can take more of a performance hit after a failover has happened).
I can think of how to configure the 'loser' so that it will always use less (even when running on its own node), but configuring this conditionally is what's confusing me.
If you can let me know if I am nuts and/or if this is possible, I would appreciate it.
Thanks
February 13, 2011 at 2:35 pm
Henry Treftz (2/11/2011)
Can I set up an active, active cluster such that if there is a failover to a single server one instance of SQL Server gets significantly more resources (memory in particular, but even CPUs) than the other instance? If you can let me know if I am nuts and/or if this is possible, I would appreciate it.
It's certainly at least possible with memory, although you would need to run a few tests and fiddle a bit with the code.
One way to do this is with a SQL Agent job set to run on server start-up. Since a cluster node failover is technically a SQL Agent start, the job would run. It can then check what other instances are on the node and set the memory allocation accordingly. You may need a second job on the other instance to manage its memory allocation, possibly started from the first job. I don't have the resources or time at the moment to test, but I don't see why it can't be done.
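A minimal sketch of what that start-up job step might look like. Everything here is an assumption for illustration: the linked server name [OTHERINSTANCE], and the memory figures (which follow the 16 GB node example discussed later in this thread), would all need to match your own environment.

```sql
-- Hedged sketch of the start-up job step described above.
-- Assumes a linked server [OTHERINSTANCE] pointing at the second
-- clustered instance (the name is illustrative only).

DECLARE @MyNode sysname =
    CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS'));
DECLARE @OtherNode sysname;

-- Ask the other instance which physical node it is currently running on.
SELECT @OtherNode = OtherNode
FROM OPENQUERY([OTHERINSTANCE],
    'SELECT CONVERT(sysname,
        SERVERPROPERTY(''ComputerNamePhysicalNetBIOS'')) AS OtherNode');

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

IF @OtherNode = @MyNode
BEGIN
    -- Both instances share one node: take the smaller allocation (MB).
    EXEC sp_configure 'max server memory (MB)', 9216;
    RECONFIGURE;
END
ELSE
BEGIN
    -- Alone on this node: allow the larger allocation (MB).
    EXEC sp_configure 'max server memory (MB)', 14336;
    RECONFIGURE;
END
```

`max server memory` can be changed on a running instance without a restart, which is what makes this job-based approach workable; the buffer pool will shrink or grow toward the new target over time rather than instantly.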
Cheers
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.
February 14, 2011 at 3:15 am
I don't know if anything is possible for the CPUs, but for memory, isn't this simply a case of specifically setting the "minimum server memory" while leaving the "maximum server memory" at the default for each instance?
E.g. if both your nodes have 16 GB of memory, set the minimum for instance 1 to, say, 9 GB, and the minimum for instance 2 to 5 GB (leaving 2 GB for the OS). When they are running on separate nodes they may get more, because you aren't limiting the maximum; but when they are running on the same node, you are guaranteeing instance 1 more memory than instance 2.
My only reservation with this is: would the currently running instance release memory (in excess of its minimum) quickly enough for the other instance to receive its minimum allocation?
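The 16 GB example above can be expressed with sp_configure; the values here (in MB) simply restate that example and are not a recommendation:

```sql
-- Run on instance 1 (higher priority): guarantee 9 GB,
-- leaving 'max server memory (MB)' at its default.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 9216;
RECONFIGURE;

-- On instance 2 (lower priority) the equivalent would be:
--   EXEC sp_configure 'min server memory (MB)', 5120;
--   RECONFIGURE;
```

Note that `min server memory` is a floor the instance will not release memory below once it has been reached, not an amount reserved at start-up, which is exactly why the release-speed reservation above matters.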
February 15, 2011 at 5:38 am
I would also set the max server memory for each instance, to cover the event that both instances end up running on the same node: you could set the "lower priority" instance's max lower than the "higher priority" instance's.
I have seen a thread on this site where someone does this with scripts run on failover, enabling them to change the memory dynamically. Since that is the case, you could also dynamically configure the CPUs via the "affinity mask" option, if that is something you were looking to do.
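A sketch of the affinity mask approach mentioned above. The mask is a bitmap with one bit per CPU; the value 0x0F here is purely illustrative and pins the instance to the first four CPUs:

```sql
-- Hedged sketch: restrict this instance to CPUs 0-3.
-- 'affinity mask' is a bitmask, one bit per CPU (0x0F = first four).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 0x0F;
RECONFIGURE;

-- Setting the mask back to 0 returns the instance to all CPUs:
--   EXEC sp_configure 'affinity mask', 0;
--   RECONFIGURE;
```

Like max server memory, this can be changed on a running instance, so the same failover-triggered job could apply it.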
Steve
February 15, 2011 at 7:01 am
I've previously set up a powershell task on a Windows 2008 cluster (as this has clustering PS) and this was able to tell if both instances were running on the same node and adjust memory allocation accordingly.
I've also done this on an Active/Active/Passive node where the passive node was of a reduced specification.
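For the same-node check itself, there is also a T-SQL equivalent of what the PowerShell task above determines (the PowerShell route would use the Windows 2008 FailoverClusters cmdlets instead); this is a sketch, and sys.dm_os_cluster_nodes requires VIEW SERVER STATE permission:

```sql
-- Which physical node is this instance currently running on,
-- and which nodes are possible owners in the cluster?
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode,
       SERVERPROPERTY('IsClustered')                 AS IsClustered;

SELECT NodeName
FROM sys.dm_os_cluster_nodes;
```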