October 9, 2015 at 8:36 am
Hello all,
I have the following SQL FCI configuration:
NODE1 - 256GB
INST1 - 64GB min / 64GB max
INST2 - 64GB min / 64GB max
NODE2 - 256GB
INST3 - 64GB min / 64GB max
INST4 - 64GB min / 64GB max
With this configuration, even if all instances end up running on the same node, there will be enough memory for them to run.
Knowing that normally I'll have only 2 instances on each node, wouldn't the following config be better?
NODE1 - 256GB
INST1 - 64GB min / 128GB max
INST2 - 64GB min / 128GB max
NODE2 - 256GB
INST3 - 64GB min / 128GB max
INST4 - 64GB min / 128GB max
With this configuration, in case all the instances start running on only one node (due to a failure), will SQL adjust all instances to just use the min memory specified?
Regards,
Arestas
October 9, 2015 at 8:43 am
Those new values are way too high: you're leaving no memory for the OS under normal conditions and all but guaranteeing major memory contention if there's ever a failover. Even the old settings had min server memory set way too high. The OS needs memory.
Maybe something more like
Min 40GB, max 110GB
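If it helps, both settings are configured in MB through sp_configure, so a rough sketch of applying those figures (assuming the 40GB min / 110GB max above, run separately against each instance) would be:
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'min server memory (MB)', 40960;   -- 40GB
EXEC sys.sp_configure N'max server memory (MB)', 112640;  -- 110GB
RECONFIGURE;
You can confirm the values afterwards with sp_configure (no parameters) or by querying sys.configurations.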
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 9, 2015 at 8:48 am
Yeah,
In the implementation I've made the calcs with 250GB. My doubt is whether the SUM of max memory (if all instances are running on the same server) can be more than the RAM I in fact have.
Will SQL rearrange memory so all instances can run on the same server with no memory problems?
Regards,
Arestas
October 9, 2015 at 8:51 am
If one instance needs 128GB then it will use 128GB, and the other 3 instances will be shrunk down to their minimum levels to give that one instance the memory it needs.
If you want to ensure all the instances can run on 1 node without the OS forcing SQL to give up RAM to another instance, you will need to set them all to around 57GB max, which will leave around 26GB free for the OS to do what it needs to do and for SQL to manage itself.
October 9, 2015 at 8:53 am
Probably still too high; the OS needs more than 6GB of memory to manage 256GB. I'd suggest no higher than 110GB for each instance to start with, or you're going to have memory contention between SQL and the OS.
If everything fails over, there will be bad memory contention. The existing instances won't balance their memory usage out with the new ones; they don't know about the new instances. You'd probably end up with the two existing instances on the node having over 100GB, the two that failed over to it having 10-15GB each, and the OS struggling for memory.
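A quick way to see how that plays out in practice (just a sketch, nothing specific to your setup) is to query sys.dm_os_process_memory on each instance after a failover and compare how much physical memory each one is actually holding:
-- How much physical memory this instance currently holds, and whether it is under pressure.
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;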
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 9, 2015 at 9:05 am
Do you guys know any good documentation about this?
So the final config would be,
NODE1 - 256GB
INST1 - 32GB min / 60GB max
INST2 - 32GB min / 60GB max
NODE2 - 256GB
INST3 - 32GB min / 60GB max
INST4 - 32GB min / 60GB max
OR
NODE1 - 256GB
INST1 - 32GB min / 100GB max
INST2 - 32GB min / 100GB max
NODE2 - 256GB
INST3 - 32GB min / 100GB max
INST4 - 32GB min / 100GB max
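Just to sanity-check the worst case (all four instances on one 256GB node), the max totals for the two options would be:
-- Back-of-envelope totals if all four instances end up on one 256GB node.
SELECT 4 * 60  AS option1_total_max_gb,  -- 240GB: fits in 256GB, roughly 16GB left over for the OS
       4 * 100 AS option2_total_max_gb;  -- 400GB: oversubscribed, relies on instances shrinking towards min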
Regards,
Arestas
October 9, 2015 at 9:31 am
Which one you go for depends on your requirements.
If node 2 fails and instances 3 and 4 fail over to node 1, are the applications which use them required to perform just as well as if node 2 hadn't failed, or do they just have to be working, even with very degraded performance?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability