October 23, 2013 at 9:38 pm
Comments posted to this topic are about the item Managing MaxMemory Values on an Active-Active Two Node Cluster
October 23, 2013 at 10:14 pm
That's a good article 🙂
You could add one more line - set the schedule for jobs 1, 2, and 3 to one time (run on demand).
October 24, 2013 at 6:05 am
Prudhviraj (10/23/2013)
That's a good article 🙂 You could add one more line - set the schedule for jobs 1, 2, and 3 to one time (run on demand).
Hi Prudhviraj!
Glad to hear you liked the article.
You certainly could create a job schedule (on demand) for those jobs, but I don't think it is necessary. They run when called by the "Alert and set MaxMemory on Restart" job. Would you please elaborate and state why you want to add an "on demand" schedule for those jobs?
Mike
October 24, 2013 at 8:15 am
Interesting article. I'm questioning the comment regarding "wasting memory" when the max memory setting is split between the two instances. Technically you would NOT want to split that memory evenly, as you could choke the OS. Agreed that if the second instance is not running on the other node, it could be construed that the memory is being wasted; however, SQL will only pull the memory it needs. It does not automatically assign that max memory to the instance and reserve it. In fact, if the OS needs some, SQL will release memory back to the OS. I have seen an instance fail to start because there was insufficient memory, but that is not easily reproducible. The min memory setting, however, does have the ability to cause the kind of problem you are referring to.
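For anyone who wants to see that behaviour for themselves, comparing the configured ceiling with what the engine has actually taken is just a couple of standard system views - nothing here is specific to the article's jobs, it's only a sketch:

-- Configured ceiling vs. what the engine has actually taken, plus what the OS has left
SELECT CAST(value_in_use AS int) AS max_server_memory_mb
FROM sys.configurations
WHERE name = N'max server memory (MB)';

SELECT pm.physical_memory_in_use_kb / 1024 AS sql_memory_in_use_mb,
       sm.available_physical_memory_kb / 1024 AS os_available_mb,
       sm.system_memory_state_desc
FROM sys.dm_os_process_memory AS pm
CROSS JOIN sys.dm_os_sys_memory AS sm;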
October 24, 2013 at 9:32 am
Excellent article
October 24, 2013 at 10:03 am
I would expect most instances to reach the MAXMEM setting at some point, unless the databases are smaller than the memory available, and SQL Server will not always give memory back to the OS quickly enough, which causes paging. Being cautious, I would be concerned about expecting two instances to run in half the memory they normally use, and would probably prefer them to run at the smaller setting permanently, with the option to increase it manually as a special case when doing something memory intensive.
October 24, 2013 at 10:25 am
Very interesting and great idea to assign max memory dynamically! Thanks for sharing.
We currently have a cluster failover alert job that runs every minute to check the current physical node of each instance. A simple record is kept in a DBA database table. This way we do not have to deal with enabling an agent start-up job or using an xp_ procedure to reach the file system. I could easily expand our routine to incorporate this procedure and assign max memory settings upon failover.
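If anyone wants to roll something similar, a minimal sketch of that per-minute check might look like the following - the table name dbo.NodeHistory is just an example, not anything from the article:

-- One-time setup: a hypothetical history table in a DBA utility database
CREATE TABLE dbo.NodeHistory
(
    instance_name sysname      NOT NULL,
    physical_node sysname      NOT NULL,
    checked_at    datetime2(0) NOT NULL DEFAULT SYSDATETIME()
);

-- Run every minute by the agent job: record which node the instance is on right now
INSERT INTO dbo.NodeHistory (instance_name, physical_node, checked_at)
SELECT @@SERVERNAME,
       CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname),
       SYSDATETIME();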
October 24, 2013 at 11:09 am
A couple of things caught my eye:
1) The jobs would run if the cluster was actually running active/active with the instances running on the opposite nodes.
2) After the failback, the job has to be manually run. This would add an extra thing to keep track of during routine maintenance such as applying security hotfixes.
3) Keeping everything up-to-date on more than a few servers is a pain, and not likely to happen 100% of the time in a medium-to-large shop.
In an ideal world, these jobs would be scripts that run off of alerts from the monitoring tool. The node and memory information could be kept in a Configuration Management Database (CMDB), with blackouts in effect during maintenance windows. Of course, we don't always live in the ideal world.
October 24, 2013 at 11:20 am
lptech (10/24/2013)
A couple of things caught my eye: 1) The jobs would run if the cluster was actually running active/active with the instances running on the opposite nodes.
2) After the failback, the job has to be manually run. This would add an extra thing to keep track of during routine maintenance such as applying security hotfixes.
3) Keeping everything up-to-date on more than a few servers is a pain, and not likely to happen 100% of the time in a medium-to-large shop.
In an ideal world, these jobs would be scripts that run off of alerts from the monitoring tool. The node and memory information could be kept in a Configuration Management Database (CMDB), with blackouts in effect during maintenance windows. Of course, we don't always live in the ideal world.
The job “Alert and Set MaxMemory on Restart” should be scheduled to “Start automatically whenever the SQL Server Agent starts”. That will fire off the jobs to set MaxMem whenever a fail-over occurs, or if you manually move an instance to a node. You should not have to manually run any of the jobs.
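For reference, that kind of schedule can be attached to an existing job with sp_add_jobschedule using freq_type = 64 (start when the SQL Server Agent service starts). The job name below is the one from the article; the schedule name is just a sketch:

-- Attach an "Agent start" schedule to the existing job
EXEC msdb.dbo.sp_add_jobschedule
    @job_name  = N'Alert and Set MaxMemory on Restart',
    @name      = N'Run when SQL Server Agent starts',
    @enabled   = 1,
    @freq_type = 64;   -- 64 = start automatically when the SQL Server Agent service starts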
October 24, 2013 at 1:13 pm
There should be logic in the job to check whether the failover actually happened; otherwise, it will keep resetting the MaxMemory setting to the same value.
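Something like the following would add that guard - only a sketch: the 122880 MB target is a placeholder for whatever value the job intends to set, and it assumes 'show advanced options' has already been enabled:

DECLARE @target_mb int = 122880;   -- placeholder: the max memory value the job intends to set
DECLARE @current_mb int;

SELECT @current_mb = CAST(value_in_use AS int)
FROM sys.configurations
WHERE name = N'max server memory (MB)';

IF @current_mb <> @target_mb       -- only touch the setting when it actually needs to change
BEGIN
    EXEC sys.sp_configure N'max server memory (MB)', @target_mb;
    RECONFIGURE;
END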
October 24, 2013 at 1:25 pm
Andrew L. Smith-213145 (10/24/2013)
What about the memory adjustments that each instance is supposed to make in the case of a fail-over to the same node? It is my understanding that the instances cooperate with each other and with the OS to scale their memory usage to an acceptable level for each.
I want to be able to control my memory to give more to my ProdDBs and less to my ReptDBs if they are both on the same node.
October 24, 2013 at 1:30 pm
qiyuef (10/24/2013)
There should be logic in the job to check whether the failover actually happened; otherwise, it will keep resetting the MaxMemory setting to the same value.
The code to set the MaxMem will only run when the SQL Server Agent is started. Usually that doesn't happen too often.
October 25, 2013 at 11:34 am
This was a great article. I implemented this on a cluster we are putting in production this weekend. I thoroughly tested it and it worked like a charm. This is a big help.
October 26, 2013 at 10:18 am
Good idea - but you might want to adjust the max memory setting back to the default if one of the two instances leaves the node. If you add an hourly schedule that resets max memory to full capacity when an instance is running alone on its node, it will look almost perfect.
October 26, 2013 at 10:30 am
Henning Peter Jensen (10/26/2013)
Good idea - but you might want to adjust the max memory setting back to the default if one of the two instances leaves the node. If you add an hourly schedule that resets max memory to full capacity when an instance is running alone on its node, it will look almost perfect.
As soon as an instance is failed back, so that each instance is running on a separate node, these scripts/jobs will reset max memory to the defined high value - in my examples, 245GB.
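For completeness, the reset on fail-back is just the same sp_configure call with the high value - 245GB works out to 250880 MB (a sketch only, again assuming 'show advanced options' is already enabled):

-- Restore the instance's normal ceiling once it is alone on its node
-- 245 GB x 1024 = 250880 MB
EXEC sys.sp_configure N'max server memory (MB)', 250880;
RECONFIGURE;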