February 23, 2010 at 12:23 pm
I have a concern ("concern" may or may not be too strong a word). We are running 11 independent 64-bit instances of SQL Server on one big clustered server, and we may have to do a bit more hands-on management of the memory. Because each instance acts as its own decision maker, each will manage its own memory and probably won't consider that there are 10 other instances on that box. We want each instance to be as efficient as possible and have access to all the memory it needs, but we should be aware that there are a lot of "cooks in the kitchen," and each will consider itself the chief decision maker as far as what it needs. We have a pretty robust installation, and I'll bet not many have installed this many instances on one server in a clustered 64-bit environment.
I'm interested to see whether, in our circumstance, Microsoft would still recommend that we allow SQL Server to manage its own memory, as is widely recommended for almost all installations. We don't really want to limit SQL memory, since it will dynamically allocate and deallocate for each instance based on need. The server is not fully in production yet, and I have left the memory allocation set to the default. Some of the instances do not have any databases installed yet; they will come online over the next 18 months. We have had no memory issues for the past two months.
Anyone have any insights or warnings about what might lie ahead? Since SQL Server can allocate up to 512 GB of memory per instance, and the server has nowhere near that much memory, will multiple instances play nicely together, or will they be constantly battling for memory? Is there something I should be doing, or planning on doing, to keep the larger memory-intensive instances from taking over the entire server? It seems like this will eventually become an issue as we move more databases off older servers and those databases start demanding memory. We are nowhere near any problems at this point. I don't have experience with this many instances on one server. Certainly it can be done, but I would sure appreciate some thoughts. Oh, and dismantling it and building virtual servers is not an option. Thank you!
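For reference, this is how I confirmed that each instance is still at the default (effectively unlimited) memory cap; it is just standard sp_configure, run against each instance in turn:

-- Check the current memory cap on one instance; run this against each instance.
-- 'max server memory (MB)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';  -- default run_value is 2147483647, i.e. no real cap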
February 23, 2010 at 12:50 pm
No probably about it: each will try to take whatever memory it wants and needs, even if it thrashes the hard disk or chokes out another instance.
You might watch how each instance actually uses memory (a rough query for that is sketched below). I used to have a process that, regardless of how much memory was available, never used more than about 540 MB. I am amazed that they still say to let it manage itself; I don't think I would be willing to do that if I got a note from Bill Gates himself telling me to. I try to never commit more memory than I have, but you might be able to for spike cases and hope that the spike doesn't hit all instances at once.
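Something along these lines shows what one instance has actually committed versus what it wants; run it against each instance (for a named instance the object_name starts with MSSQL$<instancename> rather than SQLServer, so the filter below is deliberately loose):

-- Rough sketch: committed vs. desired memory for the instance you run it on.
SELECT counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');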
I think the biggest I have personally seen was 6 instances. I'm a little curious why you would set the server up this way. Is this an active/passive or active/active cluster?
CEWII
February 23, 2010 at 2:03 pm
Hi - Thanks for the reply. It's an active/passive two-node cluster. The reason it's set up this way is that most of the instances are replacing fairly small boxes, and the thought (the suggestion was made by Microsoft to the company) was that most of the instances slated for this machine are simply not all that busy and would have a small impact. The client (I am a lowly consultant; I had minimal input on the hardware setup) came up with this as an alternative to running lots of smaller servers. This will replace about 20 old, smaller servers running various off-the-shelf applications. The bigger DB apps run on separate servers. So this is mostly an effort to consolidate servers and SAN space and to use cluster technology to improve reliability (as Microsoft promised). Some of these databases are considered critical, though not that large in size. My understanding is that some of them are being moved from servers that are five or six years old, and they should save a bundle on replacing that hardware and on operational costs. They decided on multiple instances for the ability to isolate certain databases by functional area of the company. The people who have to deal with the overall picture are just a few of us; everyone else will pretty much be working within a single instance. I was a little surprised when they told me they were going to set up that many instances. The 64-bit memory allocation issue wasn't considered - until now. We aren't having any issues, and I would like to head off, or at least be aware of, what I may encounter in the future.
When I spoke to Microsoft about an issue we were having during the installation (after getting to the higher-level support team), they didn't seem to flinch or have any concerns about the install, so I didn't worry. Everything up to this point has been good. We probably have about 30 percent of the databases migrated to this server.
The two clustered servers are very hefty: dual quad-core, with 28 GB of memory on each node.
So, tell me about this 6-instance server - is it 64-bit? If so, have they had to do anything about limiting memory allocation? If 6 instances can play nice on a server, I'd hope (really hope!) that 11 will do likewise.
I should also add that they have the same 11 instances in their 64-bit test environment, but non-clustered. I can play around with memory allocations there if I need to.
Just so you know, if you are chastising me for the setup, consider it done and now we can move on. But really, thank you for your input.
Cheers!
February 23, 2010 at 2:27 pm
It is a 6-instance box with 32 GB of RAM, 64-bit 2005 Enterprise - I can't remember exactly, but I seem to recall 8 cores. Each instance has its own portion of that 32 GB set aside, and there is no overcommitting of memory (the settings involved are sketched below). They all started overcommitted, but after a month or so we got a handle on how each instance used memory. Some were allowed to have a lot; others were scaled WAY back. We held out some of the memory for the OS. There was a LOT of SAN tweaking to separate the disks as much as possible, especially for the heavier-hitting instances. We did not use processor affinity, and we very rarely ran into CPU bottlenecks given the nature of the processing. I tend to steer people away from processor affinity tweaks and have only rarely used them myself, but I don't flat out take them off the table.
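The mechanics of carving the memory up are just the standard min/max server memory settings on each instance, roughly like this (4096 and 2048 are placeholder numbers, not a recommendation; you would size each instance from what you observe):

-- Sketch of capping one instance; repeat on each instance with its own numbers.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- ceiling for this instance
EXEC sp_configure 'min server memory (MB)', 2048;  -- floor it won't shrink below once reached
RECONFIGURE;

The max values across all instances, plus what you hold back for the OS, should add up to no more than the physical memory on the box if you want to avoid overcommitting.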
Also, not chastising, just concerned. Given my experiences with SQL and memory management, it goes more like:
SQL: Tell me how much memory you have, OS.
OS: I have 32 GB.
SQL: Give it to me.
I have seen overcommitted memory cause severe thrashing of the disk and performance to tank, far worse than just limiting memory on an instance. As for the test box, the one thing I would do with it is build what's called a one-node cluster. It looks more like prod, but there is nothing to fail over to. It also lets you see how the tools act in that environment. It is technically a cluster, but it is one box. It is handy for environments that don't have the money for a second box just yet but KNOW they want to cluster later; it is a lot harder to cluster it after the fact, so setting it up as a cluster to start makes adding additional nodes VERY easy.
Clustering is its own beast. I have also recently gotten some experience with PolyServe, and that is kind of an interesting beast too.
CEWII