memory problems

  • Hello, any help is really appreciated with my problem....

    My Idera diagnostics software says that there are critical memory problems (total used 99.2%) with two of my instances on a cluster.

    I have a 2-node cluster; both nodes have 128 GB of memory. Currently there's an instance of SQL Server 2008 and an instance of SQL Server 2005 SP4 both running on the same node (both Enterprise Edition). Max server memory is set to 90 GB on the 2008 instance and 30 GB on the 2005 instance. Everything is 64-bit. Lock pages in memory is not set.
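
    In case it matters, this is roughly how I read those settings on each instance (a sketch, not the exact statement I ran; it has to be run separately against each instance):

        -- Sketch: confirm the configured memory limits on an instance.
        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name IN ('max server memory (MB)', 'min server memory (MB)');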

    I've used the DMV sys.dm_exec_cached_plans to check the plan cache size of the 2008 instance and it's 8.2 GB, of which 8.1 GB is taken up by plans larger than 8 KB (multi-page allocations). My understanding is that this memory is taken from the OS rather than from SQL Server's buffer pool. Adding this to the max server memory values would explain why Idera is raising critical memory alerts, but....
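
    Roughly the sort of query I used to get those figures (a sketch; I'm treating any plan larger than 8 KB as a multi-page allocation, which is how it works on 2005/2008):

        -- Sketch: total plan cache size, with plans larger than 8 KB split out
        -- (multi-page allocations on SQL Server 2005/2008).
        SELECT
            SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS total_plan_cache_mb,
            SUM(CASE WHEN size_in_bytes > 8192
                     THEN CAST(size_in_bytes AS bigint) ELSE 0 END) / 1048576 AS multipage_mb
        FROM sys.dm_exec_cached_plans;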

    I can't find any sign of memory problems on the server or the instances. There are no messages in the server's Application or System event logs, or in the SQL Server error log. The plan cache is not being trimmed and the buffer pool remains at a constant, high value. Can anyone help explain why this might be?
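
    By "not being trimmed" I mean the memory manager counters stay flat; roughly this sort of check (again a sketch, not the exact query):

        -- Sketch: buffer pool target vs. total from the memory manager counters.
        -- Total staying steady and close to Target suggests no trimming is happening.
        SELECT counter_name, cntr_value / 1024 AS value_mb
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');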

    I realise that failing over one of the instances or reducing max server memory would solve the problem, but that isn't possible at the moment.

  • If SQL is using too much memory, reduce the max server memory setting a little.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gail. Unfortunately, management *boo hiss* want to know why before solving the problem.

  • iainthepitman (2/16/2012)


    I can't find any sign of memory problems on the server or the instances.

    Then, apparently, you don't have memory issues. How did you check memory pressure? Which counters did you take into account?
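
    For instance, one thing worth checking is the resource monitor ring buffer; something along these lines (just a sketch) would show whether any low-memory notifications have actually been raised:

        -- Sketch: memory notifications recorded by the resource monitor.
        -- No RESOURCE_MEMPHYSICAL_LOW entries suggests no real memory pressure.
        SELECT
            x.record_xml.value('(//Notification)[1]', 'varchar(60)') AS notification,
            rb.[timestamp]
        FROM sys.dm_os_ring_buffers AS rb
        CROSS APPLY (SELECT CAST(rb.record AS xml) AS record_xml) AS x
        WHERE rb.ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR';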

    Could Diagnostic Manager possibly be wrong?

    -- Gianluca Sartori

  • High memory usage (which is what Idera is picking up) does not automatically mean memory pressure. It just means that a lot of memory is in use. That's generally a good thing; free memory is resources going to waste.

    That said, if the usage is too high (less than 300 MB on the Memory\Available MBytes perfmon counter), it may be a good idea to reduce SQL's buffer pool size to ensure that there is enough free for the OS.
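
    On the 2008 instance you can also read that figure from inside SQL Server; roughly this (a sketch, 2008 and later only, and it needs VIEW SERVER STATE):

        -- Sketch: OS memory as SQL Server 2008 sees it.
        -- available_physical_memory_kb corresponds roughly to Memory\Available MBytes.
        SELECT
            total_physical_memory_kb / 1024     AS total_mb,
            available_physical_memory_kb / 1024 AS available_mb,
            system_memory_state_desc
        FROM sys.dm_os_sys_memory;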

    The sum of the max server memory for the two instances is 120 GB. On a 128 GB server, that's a little high. I'd leave at least 12-16 GB (maybe even as much as 20 GB) for the OS and other memory allocations on a server that size.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Diagnostic Manager could be wrong, I suppose.

    I have checked quite a lot of counters. To start with, just adding the max server memory for the two instances to the multi-page allocations of the plan cache adds up to 128 GB. I've also checked the working sets for both instances, and the Buffer Manager perfmon counters for each instance, which indicate the buffer pool is pretty much full. I've checked sys.dm_os_buffer_descriptors for the size of the buffer pool and sys.dm_exec_cached_plans for single-page and multi-page allocations in the plan cache.

    For the 2008 instance I've checked sys.dm_os_memory_nodes, which reports a much lower multi-page value than I get from summing the plan cache (memory_nodes = 400 MB, plan cache = 8.1 GB). But the Available MBytes perfmon counter is saying there is 1 GB available; I'm not sure how this is calculated.
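
    A cross-check against the memory clerks might show where that discrepancy sits; something like this (a sketch; the single/multi page columns only exist up to 2008 R2):

        -- Sketch: single-page vs multi-page allocations per memory clerk (2005/2008).
        -- CACHESTORE_SQLCP and CACHESTORE_OBJCP are the plan cache clerks.
        SELECT
            type,
            SUM(single_pages_kb) / 1024 AS single_pages_mb,
            SUM(multi_pages_kb)  / 1024 AS multi_pages_mb
        FROM sys.dm_os_memory_clerks
        GROUP BY type
        ORDER BY SUM(multi_pages_kb) DESC;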
