Page Life Expectancy Extremely High

  • Hi,

    I just checked the page life expectancy on a server and the value I got was 45772!!!

    The server only has 17 GB of RAM... According to Paul Randal's "recommendation", the PLE should be at least (buffer pool memory in GB / 4) * 300... (a query to check the current value against that threshold is sketched after this post).

    Why is the server keeping the pages so long in memory?

    There are 160 active connections working on different databases and processing data, so it's not that pages are simply never being replaced...

    However, the machine has some IO issues (slow writes) and an average of 14 pending IO tasks...

    Thanks,

    Pedro



    If you need to work better, try working less...
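
    A minimal sketch, assuming SQL Server 2005 or later and VIEW SERVER STATE permission, of how the current overall PLE and the rule-of-thumb minimum could be checked from the DMVs. Using the 'Database pages' counter as an approximation of the buffer pool size, and the variable and column names, are illustrative choices rather than an official formula:

        -- Current overall Page Life Expectancy, in seconds.
        DECLARE @ple BIGINT, @buffer_gb DECIMAL(10, 2);

        SELECT @ple = cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Page life expectancy'
          AND object_name LIKE '%Buffer Manager%';

        -- Approximate buffer pool size: data pages currently cached (8 KB each).
        SELECT @buffer_gb = cntr_value * 8.0 / 1024 / 1024
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Database pages'
          AND object_name LIKE '%Buffer Manager%';

        -- Compare the observed PLE against (buffer pool GB / 4) * 300.
        SELECT @ple                            AS current_ple_seconds,
               @buffer_gb                      AS buffer_pool_gb,
               CEILING(@buffer_gb / 4.0 * 300) AS suggested_minimum_ple;

    On a 17 GB server with, say, a 14 GB buffer pool, the suggested minimum works out to roughly (14 / 4) * 300 ≈ 1050 seconds, far below the observed 45772, which is exactly the point the replies below make.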

  • Why do you think a long page life expectancy is bad 😀?

    This just means that your pages reside in memory for almost 13 hours... wow... great value!

    Greetz
    Query Shepherd

  • The value is changing all the time...

    I take snapshots of some metrics, and PLE has had values from 423423 down to 5240.

    Now it's at 5625, and 30 minutes ago it was 45230...

    Is this kind of change usual?! Could it mean there's a problem with the configuration or something else?

    Thanks,

    Pedro



    If you need to work better, try working less...

  • Hm... I would say: if you have a lot of activity, PLE gets lower, and if you have less activity, PLE increases because the pages stay in cache longer.

    Greetz
    Query Shepherd

  • Is there a way of "forcing" PLE to a maximum value, just like SQL Server max memory?!

    Thanks,

    Pedro



    If you need to work better, try working less...

  • PiMané (2/7/2013)


    Is there a way of "forcing" PLE to a maximum value, just like SQL Server max memory?!

    Thanks,

    Pedro

    Why would you ever want to force pages out of cache in order to meet an arbitrary Maximum PLE value?

    Paul Randal's suggestion is a target minimum value.

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • That calculation is for a minimum healthy value but even that is to be measured against your server over time. But the biggest key word there is: minimum. A higher value is great. The higher the better. It means your pages are staying in cache and most of your data access is from memory. That's GREAT! Don't try to make it lower.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • If PLE is "high" then it means that there's no memory pressure on the system, right?

    I'm analyzing the server and there's a lot of CPU and IO pressure, but if PLE is high then there's no memory pressure...

    Thanks,

    Pedro



    If you need to work better, try working less...

  • Exactly! To sum it up: an extremely high Page Life Expectancy is not a problem... it's great!

    Greetz
    Query Shepherd

  • Grant Fritchey (2/7/2013)


    That calculation is for a minimum healthy value but even that is to be measured against your server over time. But the biggest key word there is: minimum. A higher value is great. The higher the better. It means your pages are staying in cache and most of your data access is from memory. That's GREAT! Don't try to make it lower.

    Thanks Grant.

    But is it usual for PLE to change so much from minute to minute?! Shouldn't it at least stay "stable" for some time?!

    Pedro



    If you need to work better, try working less...

  • It's an average value. If you load large amounts of data into the cache, the value can be unsteady.

    How about your indexes? (A fragmentation check is sketched after this post.)

    Greetz
    Query Shepherd
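
    A hedged sketch of one way to check index fragmentation in the current database, since badly fragmented (or missing) indexes can force extra page reads that churn the buffer pool; the page-count and fragmentation thresholds are common rules of thumb, not values taken from this thread:

        -- LIMITED mode keeps the scan cheap; switch databases with USE first if needed.
        SELECT OBJECT_NAME(ips.object_id)        AS table_name,
               i.name                            AS index_name,
               ips.index_type_desc,
               ips.page_count,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id  = ips.index_id
        WHERE ips.page_count > 1000                   -- ignore tiny indexes
          AND ips.avg_fragmentation_in_percent > 30   -- common rebuild threshold
        ORDER BY ips.avg_fragmentation_in_percent DESC;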

  • PiMané (2/7/2013)


    Grant Fritchey (2/7/2013)


    That calculation is for a minimum healthy value but even that is to be measured against your server over time. But the biggest key word there is: minimum. A higher value is great. The higher the better. It means your pages are staying in cache and most of your data access is from memory. That's GREAT! Don't try to make it lower.

    Thanks Grant.

    But is it usual for PLE to change so much from minute to minute?! Shouldn't it at least stay "stable" for some time?!

    Pedro

    No, because it's a measure of how long your pages have been in memory. If you take it now it might be 42. If you take it in another 10 seconds and everything has stayed in memory, then it's adding ten seconds to that value. Take it again in 10 seconds and, if nothing has changed, it's 10 seconds higher again, etc., etc. This is not a measure that you look at once; it's a measure that you look at over time, comparing one day to a previous one or something like that. (A simple sampling script is sketched after this post.)

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
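
    A rough sketch, assuming SQL Server 2008 or later, of sampling PLE on a schedule so the trend can be judged over time rather than from a single reading; the sample count, interval, and temp table name are arbitrary illustrations:

        IF OBJECT_ID('tempdb..#ple_history') IS NOT NULL
            DROP TABLE #ple_history;

        CREATE TABLE #ple_history
        (
            sample_time DATETIME2(0) NOT NULL,
            ple_seconds BIGINT       NOT NULL
        );

        DECLARE @i INT = 0;
        WHILE @i < 60   -- 60 samples, one every 30 seconds (~30 minutes)
        BEGIN
            INSERT INTO #ple_history (sample_time, ple_seconds)
            SELECT SYSDATETIME(), cntr_value
            FROM sys.dm_os_performance_counters
            WHERE counter_name = 'Page life expectancy'
              AND object_name LIKE '%Buffer Manager%';

            WAITFOR DELAY '00:00:30';
            SET @i += 1;
        END

        SELECT * FROM #ple_history ORDER BY sample_time;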

  • Paul Randal:

    Most new systems today use NUMA, and so the buffer pool is split up and managed per NUMA node, with each NUMA node getting its own lazy writer thread, managing its own buffer free list, and dealing with node-local memory allocations. Think of each of these as a mini buffer pool.

    The Buffer Manager:Page Life Expectancy counter is calculated by adding the PLE of each mini buffer pool and then getting the average.

    What does this mean? It means that the overall PLE is not giving you a true sense of what is happening on your machine as one NUMA node could be under memory pressure but the *overall* PLE would only dip slightly. One of my friends who’s a Premier Field Engineer and MCM just had this situation today, which prompted this blog post. The conundrum was how can there be 100+ lazy writes/sec occurring when overall PLE is relatively static – and this was the issue.

    For instance, for a machine with 8 NUMA nodes, with PLE of each being 4000, the overall PLE is 4000. If one of them drops to 1000, the overall PLE only drops to 3625, which likely wouldn’t trigger your alerting as it hasn’t even dropped 10%.

    On NUMA machines, you need to be looking at the Buffer Node:Page Life Expectancy counters for all NUMA nodes otherwise you’re not getting an accurate view of buffer pool memory pressure and so could be missing performance issues. And adjust Jonathan’s threshold value according to the number of NUMA nodes you have.

    You can see the lazywriter activity for each NUMA node by looking for the lazywriter threads in sys.dm_exec_requests.

    I think the topic can be closed! (A per-node PLE query is sketched after this post.)

    Greetz
    Query Shepherd
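
    A small sketch of comparing PLE per NUMA node (the Buffer Node counters the quoted post recommends) against the overall Buffer Manager value; the column aliases are illustrative:

        SELECT RTRIM(pc.object_name) AS counter_object,
               pc.instance_name      AS numa_node,    -- node number for Buffer Node rows
               pc.cntr_value         AS ple_seconds
        FROM sys.dm_os_performance_counters AS pc
        WHERE pc.counter_name = 'Page life expectancy'
          AND (pc.object_name LIKE '%Buffer Node%' OR pc.object_name LIKE '%Buffer Manager%')
        ORDER BY counter_object, numa_node;

    On non-NUMA hardware there is only a single buffer node (node 000), so the two values should simply match.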

  • PiMané (2/7/2013)


    Grant Fritchey (2/7/2013)


    That calculation is for a minimum healthy value but even that is to be measured against your server over time. But the biggest key word there is: minimum. A higher value is great. The higher the better. It means your pages are staying in cache and most of your data access is from memory. That's GREAT! Don't try to make it lower.

    Thanks Grant.

    But is it usual to have a PLE with lots of changes from minute to minute?! shouldn't it at least stay "stable" for some time?!?

    Pedro

    Just to add to what has been said already, with an eye towards your question about the stability of the number: NUMA hardware aside (because you'll be adjusting your monitoring accordingly), sudden large-scale drops in PLE are not a good thing. It means SQL Server either naturally had to replace pages in the buffer pool with new pages from disk to satisfy a query, or Windows forced SQL Server to give some memory back. In the former case you should be looking at queries that do a lot of I/O to see if they can be tuned to be less aggressive with what they are reading (a query to find the heaviest readers is sketched after this post). In the latter you should look at your max server memory setting and explore enabling LPIM (lock pages in memory). In both cases you would be experiencing memory pressure, so unless you can tune those aggressive queries, adding memory or eliminating programs Windows is hosting besides SQL Server is the next alternative.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato
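
    A rough sketch of finding the queries that read the most pages (the "aggressive queries" mentioned above); the TOP count is arbitrary, and the SUBSTRING arithmetic is just the usual way of pulling the individual statement out of sys.dm_exec_sql_text:

        SELECT TOP (20)
               qs.total_logical_reads,
               qs.execution_count,
               qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
               SUBSTRING(st.text,
                         (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                                WHEN -1 THEN DATALENGTH(st.text)
                                ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;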

