August 4, 2010 at 8:44 pm
I was pointed at KB article 918483 (http://support.microsoft.com/kb/918483/en-us) today, which mentions that setting Max Server Memory is unnecessary when running SQL Server x64 on Windows Server 2008 with "Lock Pages In Memory" set.
Windows Server 2008 improves the contiguous memory allocation mechanism. This improvement lets Windows Server 2008 reduce the side effects of paging out the working set of applications when new memory requests arrive. If you are running SQL Server on a Windows Server 2008-based computer, you do not have to manually configure the max server memory property as specified in the "Important considerations before you assign the 'Lock pages in memory' user right for an instance of a 64-bit edition of SQL Server" section.
My initial reaction was skepticism: SQL Server may not release memory quickly enough to support other processes, causing paging to the swap file.
On a server running in this configuration, I have about 5-200 MB of physical memory free, and total committed bytes exceed the amount of physical memory in the machine (by about 600 MB - not much on a 32 GB machine). This is a situation I generally try to avoid, and one that's relatively easy to avoid by setting Max Server Memory to leave a bit of room.
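For reference, the cap I'm talking about is just the standard sp_configure setting. A sketch on a 32 GB box like this one (the 30000 MB figure is only an illustration, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Leave roughly 2 GB for the OS and other processes on a 32 GB machine
EXEC sp_configure 'max server memory (MB)', 30000;
RECONFIGURE;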
In November last year, Glenn Berry made the following comment (http://www.sqlservercentral.com/blogs/glennberry/archive/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008.aspx):
I know that Microsoft sometimes says that you don't need to set MaxServerMemory when you are running on Windows Server 2008, but most DBAs that I know still do it. If nothing else, it is a "belt and suspenders" kind of thing.
In your experience, is this still the case? I'm still leaning towards leaving a few GB of memory free. Sure, SQL Server's memory may not be aggressively trimmed, causing performance issues, but there will be an additional hit while the server scrambles for memory to run another process (such as opening Perfmon, SSMS, etc.), and it may be the non-buffer-pool part of SQL Server that suffers.
All comments welcomed!
August 6, 2010 at 1:11 am
When you enable Lock Pages in Memory, SQL Server will not release memory back to the OS, and at startup it will allocate almost all of the physical memory. To avoid this, we need to cap SQL Server's memory with the Max and Min Server Memory settings; this keeps memory utilization under control.
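A minimal example of capping both ends, assuming a 32 GB machine (the values are illustrative only):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the top end so SQL Server cannot grab nearly all physical memory
EXEC sp_configure 'max server memory (MB)', 28672;
-- Optionally set a floor so SQL Server keeps a working amount once acquired
EXEC sp_configure 'min server memory (MB)', 8192;
RECONFIGURE;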
"More Green More Oxygen !! Plant a tree today"
August 6, 2010 at 6:31 am
I got a more conservative, informal recommendation from a former coworker who now works for Microsoft and does scalability testing for SQL Server. For 64-bit SQL Server she recommended leaving 2 GB free for the OS for every 16 GB of physical memory.
I thought this seemed a tad excessive for servers with a higher amount of physical memory, but a recent situation proved me wrong. This was a PeopleSoft implementation, with only SQL Server and typical OS services running on the DB server hardware, and 64 GB of physical memory. SQL Server was initially allocated 62 GB of the 64 GB via the Max Server Memory setting. Multiple people got called together for some performance issues, and an OS guy said the OS appeared to be memory-constrained. So we reduced SQL Server's Max Server Memory to 56 GB. The next day we still had some problems to resolve, but a lot of things were generally better. Moreover, memory use for services not related to SQL Server had jumped to almost 6 GB.
Some may howl at this conservative rule of thumb. I'm just calling out a real-world situation that I recently observed. It may be better to start a little lower with Max Server Memory until you know you are not impeding the OS and other services.
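If it helps, here is a rough sketch of that calculation in T-SQL, using the sys.dm_os_sys_memory DMV (SQL Server 2008 and later); treat it as an illustration of the arithmetic, not a tuning tool:

-- Leave 2 GB free for every 16 GB of physical memory
SELECT
    total_physical_memory_kb / 1048576 AS total_gb,
    (total_physical_memory_kb / 1048576 / 16) * 2 AS leave_free_gb,
    total_physical_memory_kb / 1048576
        - (total_physical_memory_kb / 1048576 / 16) * 2 AS suggested_max_gb
FROM sys.dm_os_sys_memory;

On the 64 GB server above this comes out to 8 GB free and a 56 GB Max Server Memory setting, which is where we ended up.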
Peter
http://seattleworks.com including my blog The SQL Janitor
August 9, 2010 at 9:47 am
As a rule of thumb, on a dedicated SQL Server machine we allocate 75% of the total memory to SQL Server and leave the remaining 25% for the OS. On a 32 GB server, for example, that works out to 24 GB for SQL Server and 8 GB left for the OS.
August 9, 2010 at 8:29 pm
Thanks for all the answers, both here and privately.
From initial testing, it does appear that SQL Server on Windows 2008 manages memory pretty well - I fired up SSMS on the server and ran SELECT * FROM BIGTABLE, and it handled itself well. Interestingly, after about 500 MB, the buffer pool didn't shrink, but the sqlservr.exe process and the antivirus each trimmed down by about 150 MB.
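For anyone repeating this kind of test, a quick way to watch SQL Server's own footprint alongside Perfmon is the sys.dm_os_process_memory DMV (SQL Server 2008 and later); a minimal sketch:

-- How much memory the SQL Server process is actually holding,
-- and how much of it is locked pages
SELECT
    physical_memory_in_use_kb / 1024  AS in_use_mb,
    locked_page_allocations_kb / 1024 AS locked_pages_mb,
    memory_utilization_percentage
FROM sys.dm_os_process_memory;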
That said, however, as a conservative DBA, I feel it's a lot safer to ensure that there's sufficient memory left over and available by setting Max Server Memory. I'd rather potentially waste 1-2 GB of memory for a more stable system. Perhaps I'll change my tune in a couple of years, once there's more proof of stability. Of course SSDs may well be the norm by then, and it will hardly matter if the page file is on an SSD.