PLE drops but cache is not full, why?

  • Our server has 132GB of RAM and runs 64-bit SQL Server Standard, with min and max server memory both set to 64000MB.

    We have spotted at times (during the day) that the PLE counter drops below 300 (which I know is an old recommendation).

    The main DB is over 64GB in size, but querying sys.dm_os_buffer_descriptors, as in:

    SELECT CAST(COUNT(*) * 8 / 1024.0 AS NUMERIC(10, 2)) AS CachedDataMB,
           CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS DatabaseName
    FROM sys.dm_os_buffer_descriptors
    GROUP BY DB_NAME(database_id), database_id
    ORDER BY 1 DESC;

    This shows that only 15GB of data is cached for that DB, and about 22GB in total

    So how can the PLE be low when there is so little data in the cache?

    I would have thought that a low PLE meant the cache was full, hence pages being swapped out.

    We still have 40GB of RAM available. I ran a few SELECT * FROM queries against large tables, which increased the sys.dm_os_buffer_descriptors count as expected, and PLE was still increasing - also as expected, since we have lots of RAM left.

    We now have an alert defined to email us when PLE drops to 300, so we can get a trend for when this happens.
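
    For reference, a minimal sketch of how such an alert can be set up as a SQL Agent performance-condition alert (the operator name is a placeholder that must already exist in msdb, and a named instance would use an MSSQL$<instance> object prefix instead of SQLServer):

    -- Fire when Page life expectancy falls below 300 seconds.
    EXEC msdb.dbo.sp_add_alert
        @name = N'Low PLE',
        @enabled = 1,
        @delay_between_responses = 600, -- throttle to avoid an email flood
        @performance_condition = N'SQLServer:Buffer Manager|Page life expectancy||<|300';

    -- Email the operator when the alert fires (notification_method 1 = email).
    EXEC msdb.dbo.sp_add_notification
        @alert_name = N'Low PLE',
        @operator_name = N'DBA Team', -- placeholder operator
        @notification_method = 1;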

  • You are mistaken that only the buffer pool takes up space in SQL Server's memory. There are MANY other things that could be making 22GB the "cap" currently allowed for the buffer pool.

    Run the semi-documented DBCC MEMORYSTATUS and review the output against the online reference for it. Plan cache bloat is a likely culprit here.
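
    For a quicker first pass, here is a sketch of summarising the memory clerks (assuming SQL Server 2012, where sys.dm_os_memory_clerks exposes pages_kb; the plan cache shows up as the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks):

    -- Top memory consumers by clerk, in MB.
    SELECT TOP (10)
           [type],
           SUM(pages_kb) / 1024 AS MemoryMB
    FROM sys.dm_os_memory_clerks
    GROUP BY [type]
    ORDER BY SUM(pages_kb) DESC;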

    I also note that something else on the box could be starving SQL Server of even the half of available memory you are limiting it to. What else is running there?

    Oh, and depending on the build of SQL Server 2012 you have, there were several memory leaks fixed in service packs and CUs. Check for those too.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • You should not set min and max memory to the same value - that can cause additional problems, such as the instance not being able to start at all.
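
    For illustration, a sketch of changing these with sp_configure (the values shown are just illustrative):

    -- Set min and max server memory to different values.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'min server memory (MB)', 4096;
    EXEC sys.sp_configure 'max server memory (MB)', 61440;
    RECONFIGURE;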

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Thanks for the info. I actually inherited this server recently, so I have started examining it and spotted this curious PLE behaviour, and had also spotted the min/max being the same - I've changed it now to min 4096, max 61440.

    The Windows server has 132GB and is a dedicated machine for SQL, but as SQL Standard 11.0.3128 can only use 64GB, we do have 64GB not doing much right now, which I guess is why they thought setting it to 64000 was a good idea.

    PLE right now is 21307, so once an alert fires to let me know it has dropped to 300 or so, I'll run DBCC MEMORYSTATUS and publish the results if I cannot tell the reason from them.

    Although there are so many results in that output, I'm not sure which section(s) are of the most interest.

    thanks

  • Oh, in addition to the memory leaks, there was a VERY nasty bug that flushed memory. It was related to NUMA. It was not patched until SP1 CU5, IIRC, and you are on SP1+.
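
    To confirm the exact build, service pack level and edition you are actually on (a standard check, nothing specific to this bug):

    -- Report build, SP level and edition of the running instance.
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')   AS ProductLevel,
           SERVERPROPERTY('Edition')        AS Edition;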

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • PLE is only accurate for what you want to use it for once the cache is full. It returns the average age of the pages in memory, so when you have a lot of new data coming in, the average age drops.
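
    For what it's worth, a sketch of reading the counter directly; on NUMA hardware each memory node also exposes its own Buffer Node value, which is worth checking alongside the server-wide Buffer Manager figure:

    -- Current Page life expectancy, server-wide and per memory node.
    SELECT [object_name], instance_name, cntr_value AS PLE_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy';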

  • Actually, running my query from the start of this topic and monitoring PLE, I did several SELECT * FROM queries, which added another 5GB to the result, and the PLE count kept increasing - which I would have expected, as there is lots of space.

    I shall investigate the CU5 update.

    Now we have the alert set, we can see if any trends exist and try to understand the reason for this.

    Thanks
