June 17, 2013 at 3:53 pm
I'd like to get an idea of what is normal when it comes to page life expectancy. I was reading that on an average server the PLE is about 300, or 5 minutes. Some of my database servers have PLEs of 2244, 74252 and 6707. Should I worry about these high numbers?
I'm using the query below to pull this information.
SELECT [object_name],
       [counter_name],
       [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
  AND [counter_name] = 'Page life expectancy'
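A side note, in case it helps: on SQL Server 2008 and later the same counter is also exposed per NUMA node under the Buffer Node object, and a single starved node can hide behind a healthy instance-level average. A sketch of a slightly wider query, assuming a default instance (named instances prefix the object name with MSSQL$instancename):

```sql
-- PLE at instance level (Buffer Manager) and per NUMA node (Buffer Node)
SELECT [object_name],
       [counter_name],
       [instance_name],   -- node number for Buffer Node rows
       [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [counter_name] = 'Page life expectancy'
  AND ([object_name] LIKE '%Buffer Manager%'
       OR [object_name] LIKE '%Buffer Node%');
```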
June 17, 2013 at 4:16 pm
The lower limit on that number is the subject of much speculation; I particularly like 300 as a floor for many servers.
Understanding what that number means is important. Basically, it is an estimate of how many seconds a page will stay in memory. The higher the number, the less likely a page will need to be flushed from the cache to make room for another page. This is good.
So the bigger the number, the better. But don't stress if the number suddenly drops and then starts climbing back up. You could have just issued a VERY BIG query that didn't need much of the data already cached; there are lots of possible reasons.
CEWII
June 18, 2013 at 2:53 am
There are some stories about 300 seconds. I've been told it should be 300 seconds for every 4 GB of physical memory. Is that true?
June 18, 2013 at 3:02 am
HildaJ (6/17/2013)
I was reading that on an average server the PLE is about 300 or 5 minutes.
Nope. Completely wrong. Anyone who says PLE should be 300 doesn't understand what PLE is.
300, or 5 minutes, means you're, on average, replacing the entire buffer pool every 5 minutes, and that is incredibly high buffer pool churn for any DB. The more memory you have, the worse that churn is.
Take the amount of memory, divide it by the PLE in seconds, and you get the average MB/sec you'll be driving through the IO subsystem. Then see how much your IO subsystem can actually handle.
You want that counter as high as possible. If you want a limit, then 300*(memory/4GB) is kind of an acceptable minimum, since anything below that will be driving the IO load to incredibly high values.
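To make that arithmetic concrete, here's a sketch that pulls both numbers from the same DMV and works out the implied churn and the 300*(memory/4GB) floor. Assumptions: a default instance (object names contain 'Buffer Manager'), and the 'Database pages' counter used as the buffer pool size (it's reported in 8 KB pages, hence the divide by 128):

```sql
-- Hedged sketch: estimate buffer-pool churn implied by the current PLE
DECLARE @ple      bigint,   -- Page life expectancy, in seconds
        @bpool_mb bigint;   -- Buffer pool size, in MB

SELECT @ple = cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND [counter_name] = 'Page life expectancy';

SELECT @bpool_mb = cntr_value / 128   -- counter is in 8 KB pages
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND [counter_name] = 'Database pages';

SELECT @ple                               AS ple_seconds,
       @bpool_mb                          AS buffer_pool_mb,
       @bpool_mb * 1.0 / NULLIF(@ple, 0) AS avg_churn_mb_per_sec, -- implied IO rate
       300 * (@bpool_mb / 4096.0)        AS suggested_minimum_ple; -- 300 * (memory/4GB)
```

For example, with a 64 GB buffer pool, a PLE of 300 implies roughly 64*1024/300 ≈ 218 MB/sec of sustained reads just to keep the cache turning over, and the 300*(memory/4GB) floor works out to 300*16 = 4800 seconds.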
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 18, 2013 at 7:48 am
OK, that sounds good. Putting it in my own words to make sure I'm understanding: if my servers have a high PLE it's OK, because their memory space is not being churned too frequently and there's no need to flush it; new pages aren't being read in, the cache is being used. Now, if I see that number drop and stay low, what kind of actions should I be taking into consideration? If it's low, that means there's a lot of I/O activity. So should I be looking at programs/queries that are eating up the memory?
June 18, 2013 at 9:25 am
Just a minor clarification: your memory could be being used frequently, BUT the pages needed to satisfy the queries are already in memory, so SQL Server doesn't have to flush pages out to make space for what it needs.
CEWII
June 18, 2013 at 9:32 am
HildaJ (6/18/2013)
because their memory space is not being utilized too frequently and there's no need to flush it because no new pages are being created, the cache is being used.
Not quite. It means your cache is fairly stable, so pages read into cache are staying there and being used for a while.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability