SQL 2008 - Min and Max server memory

  • As a side note, when I was running the SQL Server Certified Master program at Microsoft Learning, I proposed adding Jonathan's book to the official recommended reading list for Master candidates. So don't let the title mislead you. The book is for everyone, not just newbies.


    My blog: SQL Soldier
    SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management

  • Pages 144-145 of the book Gail recommended provide a general initial configuration recommendation that is designed to be safe for dedicated SQL Servers, but the book also makes specific recommendations for how to monitor and determine whether the value for max server memory can be increased, based on your server workload, to take advantage of available memory and maximize the size of the buffer pool for caching. It's not rocket science, and it doesn't require any real experience to configure SQL Server memory settings properly. It's all explained in Chapter 4, along with a lot of internal detail about how SQL Server and SQLOS memory management works.
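
    For reference, the setting itself is changed with sp_configure; here's a minimal sketch, where the 51200 MB value is purely illustrative and should come from the kind of monitoring described above rather than being copied as-is:

    ```sql
    -- View the current memory settings (values are in MB).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)';
    EXEC sp_configure 'max server memory (MB)';

    -- Set max server memory; 51200 MB (50 GB) is an illustrative value only.
    EXEC sp_configure 'max server memory (MB)', 51200;
    RECONFIGURE;
    ```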

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • I prefer to leave a minimum of 1 or 2 GB dedicated to the OS or any small applications running on the server by setting max server memory to an appropriate value. If max server memory is left at its default, the problem I used to face is that the OS becomes very unresponsive; you can't do any operation on that server. I always set max server memory based on the total memory available and the number of instances running on that server.

    For more detail along the same lines:

    http://www.practicalsqldba.com/2012/03/memory-configuration-in-sql-server-and.html

    Thanks

    Nelson John A

    (www.PracticalSqlDba.com)

  • nelsonaloor (3/17/2013)


    I prefer to leave a minimum of 1 or 2 GB dedicated to the OS or any small applications running on the server by setting max server memory to an appropriate value. If max server memory is left at its default, the problem I used to face is that the OS becomes very unresponsive; you can't do any operation on that server. I always set max server memory based on the total memory available and the number of instances running on that server.


    That only works for small servers. If you try to set 'max server memory' on a server with 512GB of RAM to 510GB, you are guaranteed to have problems on the system. The point Gail was making earlier in the thread, with regard to the previous mention of 1-2GB for the OS, is that it takes memory in the Windows kernel to map memory, so on larger systems 1-2GB for the OS is ludicrous. You have to account for the kernel overhead of managing all of that memory, which is what the calculation in my book is geared toward.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • OK, let me give you a little background into our environment and why I manage it the way I do:

    The server is a 16-core machine with 64GB of memory. We have a lot of databases on it (around 150) spread across multiple instances. The more idle databases sit in one instance whereas the more active ones sit isolated in their own instances with their own memory allocated. The most transactional database we have requires around 15GB at peak usage.

    I have calculated that, in total, around 40GB is in use by the instances at peak time. That naturally means 24GB is still available for the OS to do its work, which in this case is more than sufficient. OS and I/O statistics are gathered via Performance Monitor and have shown that even at peak usage the OS itself requires no more than 1.5 to 1.8 GB. That means around 10GB is available as a buffer for the unforeseen.
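
    As a cross-check alongside Performance Monitor, the same per-instance figures can be pulled from inside each instance; a minimal sketch (for a named instance the counter object is 'MSSQL$<InstanceName>:Memory Manager' rather than 'SQLServer:Memory Manager'):

    ```sql
    -- What this instance is actually using vs. what it is aiming for.
    SELECT counter_name,
           cntr_value / 1024 AS value_mb
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Memory Manager%'
      AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');

    -- The OS-level picture as seen from the instance (SQL Server 2008 and later).
    SELECT total_physical_memory_kb / 1024     AS total_physical_mb,
           available_physical_memory_kb / 1024 AS available_physical_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;
    ```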

    You have gathered by now that here we are not talking about a mammoth database machine but rather a somewhat smaller (but nonetheless important) server.

    What happens when I need a new instance?

    Obviously the first thing I do is check the resources available to find out whether the new database can reside in an existing instance or whether it needs a new one. Tests previously conducted give me a guideline as to how much memory this new instance will need. I use my guideline of leaving 2GB available for the OS and determine how much memory to allocate to the new instance. A period of time is chosen to monitor database activity and resource usage, and this figure is adjusted accordingly to ensure that the instance isn't starved or, just as importantly, that too much memory isn't allocated and wasted. Keeping an eye on Performance Monitor and memory usage helps me gain an insight into exactly what the machine is doing at any one time.

    Now I haven't seen the formula you have mentioned (and obviously I don't expect to see it posted here!) but I find it a little difficult to believe it isn't a rule-of-thumb formula. There are quite simply far too many factors to consider where database optimisation is involved, and I believe (and have always believed) that watching the database work is better than "fire and forget", something we have all seen too often.

    Fortunately I am in a position to add memory to the server should it be required (4 banks are still available) and I see this machine being in use for a good while yet, a position not many are fortunate enough to be in.

    So in summary, and very much imho, a little legwork, an understanding of the database environment and good past statistics make for a better starting point in the creation of a new instance and memory management than a generic formula.

    I am not trying to create a hostile thread in saying all this, but I am trying to understand why some people make the choices they do. It makes for interesting reading!

    Nonetheless, the book has piqued my interest.....now I'm off to Amazon...... 😀

  • How is your guideline of leaving 2GB for the OS and monitoring accordingly different from Jonathan's guideline of leaving an amount of memory dependent on the total server memory and then monitoring accordingly, in terms of one being a bad rule of thumb and the other not?

    It's far from a secret formula; variations of it appear in a variety of blog posts, including Jonathan's:

    http://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/

    http://www.sqlskills.com/blogs/jonathan/wow-an-online-calculator-to-misconfigure-your-sql-server-memory/

    http://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

    http://www.brentozar.com/archive/2012/11/how-to-set-sql-server-max-memory-for-vmware/

    Yes, they're all slightly different, all just starting points from which you monitor memory and adjust accordingly.
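
    To make it concrete, here's roughly what one of those starting-point variations works out to on a 64GB box; a sketch only, not a substitute for monitoring:

    ```sql
    -- One commonly cited starting point (a variation of those in the links above):
    -- reserve 1GB for the OS, plus 1GB per 4GB of RAM from 4GB to 16GB,
    -- plus 1GB per 8GB of RAM above 16GB, then monitor and adjust.
    DECLARE @total_gb int = 64;  -- physical RAM in the server

    DECLARE @os_reserve_gb int =
          1
        + CASE WHEN @total_gb > 4
               THEN (CASE WHEN @total_gb > 16 THEN 12 ELSE @total_gb - 4 END) / 4
               ELSE 0 END
        + CASE WHEN @total_gb > 16 THEN (@total_gb - 16) / 8 ELSE 0 END;

    -- On 64GB: 1 + 3 + 6 = 10GB reserved, suggesting ~54GB as the starting max server memory.
    SELECT @total_gb - @os_reserve_gb AS starting_max_server_memory_gb;
    ```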

    btw, leaving just 2GB of memory for the OS on a 64GB server is risking memory pressure, potential paging of SQL Server to the swap file and possible server instability, depending on what the min server memory settings are and whether locked pages is enabled.
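
    A quick way to see where an instance stands on that front is the process-memory DMV; a minimal sketch (SQL Server 2008 and later):

    ```sql
    -- Is this instance using locked pages, and is it signalling memory pressure?
    SELECT physical_memory_in_use_kb / 1024  AS physical_memory_in_use_mb,
           locked_page_allocations_kb / 1024 AS locked_pages_mb,  -- 0 = locked pages not in use
           memory_utilization_percentage,
           process_physical_memory_low,
           process_virtual_memory_low
    FROM sys.dm_os_process_memory;
    ```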

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • kevaburg (3/18/2013)


    Nonetheless, the book has piqued my interest.....now I'm off to Amazon...... 😀

    You can get a free copy in PDF format from Redgate's website if you want to save money. I don't get any profits from the PDF; it's just a free resource for the community.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Jonathan says to assign memory dependent on the total amount of memory present in the server. In my environment that would lead to an immense amount of wastage if the relevant calculation dictates I require 12GB from my 64GB of physical memory.

    I know you mentioned memory pressure with my settings, but that is why I size the way I do, and that is the biggest difference between our approaches.

    I start with a server with SQL Server on it. I determine that 2GB will be reserved for the OS, which with 64GB installed leaves me with 62GB. After load testing each database in the test and development phase, I decide that 10GB will suffice for the instance in which the database resides. That leaves a very comfortable 52GB of unallocated memory just in case the OS requires it. I check Performance Monitor to see whether the OS needs more than this 2GB.

    I also maintain an environment with multiple instances on a single box due to varying collations, options and authentication requirements. The next instance is then installed with (for argument's sake) 15GB allocated. That still leaves 37GB of unallocated memory for the OS to use if necessary. By keeping an eye on the OS's requirements I can adjust the initial 2GB allocation upward and use memory as required, rather than allocating it at the beginning and seeing if it gets filled.

    Bearing in mind I had to design this system based on databases used maybe once or twice a year, or others constantly at 90% utilisation, memory became something that needed to be babysat and conserved until required which is where monitoring became so important.

    That was the reason I used the term "fire and forget" in my description of allocating memory based on what is physically installed. Using my approach there is always room on top for applications, drivers or additional databases/instances. I don't suffer from memory pressure because there is enough unallocated memory to absorb the excess, since I don't specifically allocate memory to the OS. SCOM tells me when memory requirements exceed my allocated limit for the OS, at which point I reduce on paper the amount of memory available for other instances/databases.

    Up until now I have had a great deal of success with this approach and perhaps the best utilisation of resources I have experienced. It requires a much more watchful eye from myself as an administrator but RoI has been better than I had expected and there are no complaints from the end-users.

    I think it really does depend on the environment and how comfortable one feels making such adjustments but I feel vindicated in my point of view purely by the success of the particular installation I am using here as an example.

    Now to find that download....... 😛

  • If you suspect that you may be cutting it too close memory-wise, it might be a good idea to capture some performance counters regularly (say, every few minutes at least) and check afterwards how they vary over a 24-hour period. Memory\Available MBytes might be a good one here.

    You don't want the OS to run out of RAM and be paging under certain conditions at night (for example).

    Also, if you happen to use Fusion-io-style PCIe cards, it's best to allow them free memory on top of everything else. :cool:
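
    If you'd rather keep the capture inside SQL Server, the same baseline can be gathered with a small logging table and a scheduled Agent job; a minimal sketch (the table name and interval are just examples):

    ```sql
    -- A tiny memory baseline: log available OS memory every few minutes
    -- (schedule the INSERT as a SQL Agent job) and review it over 24 hours.
    CREATE TABLE dbo.MemoryBaseline
    (
        capture_time          datetime2(0)  NOT NULL DEFAULT SYSDATETIME(),
        available_physical_mb bigint        NOT NULL,
        system_memory_state   nvarchar(256) NOT NULL
    );
    GO

    INSERT INTO dbo.MemoryBaseline (available_physical_mb, system_memory_state)
    SELECT available_physical_memory_kb / 1024,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;
    ```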

    Cheers,

    JohnA

    MCM: SQL2008

  • But that has been my point from the start: at which point are we cutting it too close? Is it when we regularly peak at 99%, or at 85%?

    Regardless of which RDBMS I am using (and I work extensively with Oracle as well), my aim is 99% memory utilisation for the instance.
