Big swings in page life expectancy

  • Hello,

    I have read that Page Life Expectancy (PLE) is an important counter to watch, and that ideally the average value should be at least 300 seconds. But I'm not quite sure how to interpret it beyond that.

    I've noticed some big swings in the PLE counter when I monitor one of my servers. For example, in one 5-second span, the value dropped from 11244 to 4084. The counter was at 11244 for about an hour, and now it has been at 4084 for about an hour.

    This particular couple of hours falls around the start of business (8:30 AM-10:30 AM), so there are no heavy maintenance jobs running, but it's possible that the increase in traffic is what caused the drop in PLE - I'm just not sure enough about interpreting the counter to know for certain.

    At other times the PLE counter has dropped as low as 2, but it has also climbed as high as 14000.

    Is this typical or atypical of this counter? Does it indicate any problem with the server?
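
    In case anyone wants to reproduce what I'm seeing, the current value can also be pulled with a query along these lines against sys.dm_os_performance_counters (the LIKE on the object name is just my attempt to cover both default and named instances):

        -- Spot-check Page Life Expectancy from the DMV (value is in seconds)
        SELECT [object_name],
               RTRIM(counter_name) AS counter_name,
               cntr_value          AS ple_seconds
        FROM sys.dm_os_performance_counters
        WHERE [object_name] LIKE '%Buffer Manager%'       -- default or named instance
          AND counter_name LIKE 'Page life expectancy%';  -- counter names can carry trailing spaces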

    Thanks for any help,

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

    As the database begins filling up pages, yes, your PLE will drop. That is not a bad thing; going below 300 is. One thing to look for is distributed queries that return large data sets from remote servers. These will fill up the buffers with data that could not be found locally. Then, as local data is read in, it will (hopefully) reside in memory long enough for PLE to increase. But each successive distributed query will flush that back out, and PLE will drop again. This is common on OLAP and DSS systems. Other things to look at that can indicate memory pressure are SQLServer:Buffer Manager\Buffer Cache Hit Ratio, Memory\Available MBytes, etc. Take a look at this site http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=64044 and check these counters for memory pressure.
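
    If it helps, the hit ratio can be pulled out of sys.dm_os_performance_counters with something along these lines - it has to be divided by its base counter to get a percentage. Memory\Available MBytes is an OS-level counter, so that one still has to come from Perfmon:

        -- Buffer cache hit ratio as a percentage (raw counter divided by its base counter)
        SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
        FROM sys.dm_os_performance_counters AS r
        JOIN sys.dm_os_performance_counters AS b
            ON  b.[object_name] = r.[object_name]
            AND RTRIM(b.counter_name) = 'Buffer cache hit ratio base'
        WHERE r.[object_name] LIKE '%Buffer Manager%'
          AND RTRIM(r.counter_name) = 'Buffer cache hit ratio';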

    DAB

  • SQLServerLifer (3/20/2008)


    As the database begins filling up pages, yes, your PLE will drop. That is not a bad thing; going below 300 is. One thing to look for is distributed queries that return large data sets from remote servers. These will fill up the buffers with data that could not be found locally. Then, as local data is read in, it will (hopefully) reside in memory long enough for PLE to increase. But each successive distributed query will flush that back out, and PLE will drop again. This is common on OLAP and DSS systems. Other things to look at that can indicate memory pressure are SQLServer:Buffer Manager\Buffer Cache Hit Ratio, Memory\Available MBytes, etc. Take a look at this site http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=64044 and check these counters for memory pressure.

    DAB

    Thanks for the info! So far, the buffer cache hit ratio is pretty good - it averages over 99%, with only rare drops as low as 93%. And available MBytes also looks good. One thing, though: target server memory and total server memory are equal, which I think is a sign that the server could use more RAM, even though the other stats look OK.
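
    For anyone checking the same thing, the two values can be compared straight from the DMV with something like this (the LIKE patterns are only there in case the counter names differ slightly between versions):

        -- Compare memory actually committed (Total) against the target SQL Server is aiming for
        SELECT RTRIM(counter_name) AS counter_name,
               cntr_value / 1024   AS value_mb
        FROM sys.dm_os_performance_counters
        WHERE [object_name] LIKE '%Memory Manager%'
          AND (counter_name LIKE 'Target Server Memory%'
               OR counter_name LIKE 'Total Server Memory%');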

    And I'm sure there are queries that can be optimized. I will run some profiling to see if there are any big offenders that might be putting a strain on an otherwise well-performing system.
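
    Before setting up a full trace, I'm thinking a pass over sys.dm_exec_query_stats along these lines should surface the heaviest readers (the offset arithmetic is the usual pattern for pulling the individual statement out of the cached batch text):

        -- Top cached statements by total logical reads since the plan was cached
        SELECT TOP (20)
               qs.total_logical_reads,
               qs.execution_count,
               qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;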

    Thanks again,

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html
