February 15, 2012 at 7:43 am
Hi Guys,
I have read through a lot of articles on PLE but am still not able to fully understand it. Hope you guys can help me with this.
I understand that SQL Server will cache as many data pages as possible in the available memory.
I have read that SQL Server uses a Least Recently Used (LRU) policy for its cache. For example, if there is no more available physical memory, SQL Server will evict the least recently used cached data to make way for new data (that is not yet cached in memory). If my PLE is 300 seconds, what does this mean?
Does it mean that every 300 seconds it will flush out all the data?
Is 300 seconds the estimated life expectancy of every page in the cache? I thought LRU would only flush out the least recently used data to make way for new data? So, for example, if the new data is about 5 pages, shouldn't only 5 of the least recently used pages be flushed out?
I think I'm confused by this PLE.
thanks
February 15, 2012 at 7:51 am
If PLE is 300 (5 min) it essentially means that the entire data cache is getting replaced in 5 minutes. Ok if you have maybe 1 GB memory, not OK with larger amounts of memory.
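If you want to check the current value on your instance, a query along these lines against sys.dm_os_performance_counters should do it (on a named instance the object name starts with MSSQL$InstanceName rather than SQLServer, hence the LIKE):

-- Current page life expectancy, in seconds, straight from the DMV
SELECT [object_name], cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';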
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 15, 2012 at 7:01 pm
Hi,
I still don't quite understand this.
==
If PLE is 300 (5 min) it essentially means that the entire data cache is getting replaced in 5 minutes.
==
Why does the whole data cache need to be replaced? I thought only the least recently used data needs to be pushed out when there is no available memory and new data needs to be cached?
Wouldn't it also hurt the cache hit ratio if the whole cache gets replaced, whether the PLE is 300 or greater?
thanks
February 15, 2012 at 7:44 pm
Something that drives PLE down is the need to bring data into memory when queries scan large tables.
If you have several tables that are larger than available memory and you run queries often that cause them to be scanned, then the pages will have to be swapped in and out of memory rapidly, and the PLE will fall very low and stay there. This causes a real slowdown in server performance because queries have to wait till data is read from disk, and it tends to impact every query, not just the ones that scan large tables.
In many situations creating carefully chosen indexes can bring relief. When that is not possible, a fairly modest increase in memory can often make a real improvement.
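If you want to see which databases are actually filling the cache, something along these lines against sys.dm_os_buffer_descriptors gives a rough picture (it can be slow on a server with a lot of memory, so run it off-peak):

-- Rough breakdown of the buffer pool by database, 8 KB per page
SELECT
    CASE WHEN database_id = 32767 THEN 'ResourceDb'
         ELSE DB_NAME(database_id) END AS database_name,
    COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;

That usually makes it fairly obvious whether one or two large, frequently scanned tables are responsible for the churn and whether an index could keep them from being scanned.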
February 15, 2012 at 11:37 pm
Hi,
thanks for the information.
I'm wondering how PLE is derived.
For example, I have 12 GB of cache. If the PLE is 300, will the whole cache be flushed? Meaning nothing will be left in the cache? Why would I need to flush the whole cache instead of only flushing out as many old pages as I need for the new data?
I have so many pages cached in memory, why the need to flush them all out at once?
thanks
February 16, 2012 at 1:52 am
Nothing's flushed. Flushing the cache every 5 minutes would be silly.
PLE is the average time that a page will remain in cache before being removed for whatever reason. So if it's 300, then pages are only likely to remain in cache for 5 minutes, and hence you're essentially replacing the entire data cache, one page at a time, in 5 minutes.
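To put rough numbers to your 12 GB example: the same DMV also exposes the number of database pages currently in cache, so you can estimate how much data has to be read in per second to turn the cache over at the current PLE. It's only a back-of-the-envelope figure and assumes a fairly steady workload:

-- Approximate cache churn implied by the current PLE (cache size / PLE)
SELECT
    pages.cntr_value * 8 / 1024.0 AS cache_mb,
    ple.cntr_value AS ple_seconds,
    (pages.cntr_value * 8 / 1024.0) / NULLIF(ple.cntr_value, 0) AS approx_mb_read_per_second
FROM sys.dm_os_performance_counters AS ple
JOIN sys.dm_os_performance_counters AS pages
    ON pages.[object_name] = ple.[object_name]
WHERE ple.counter_name = N'Page life expectancy'
  AND pages.counter_name = N'Database pages'
  AND ple.[object_name] LIKE N'%Buffer Manager%';

With 12 GB of cache and a PLE of 300, that works out to roughly 40 MB being read into cache every second.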
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 16, 2012 at 3:21 am
Thanks, Gila, for the information.
Kindly correct me if my understanding is wrong.
Just an example: say my cache is very small and can only hold up to 10 pages. Below are the pages:
M = most recently used
L = least recently used
MMMLLMMLLL
For example, my PLE is 300. Does it mean that one page of the above will be flushed out every 5 minutes to make space for a new page?
And if the new cache request is for 5 pages, will the 5 least recently used pages be flushed out for the new ones?
thanks
February 16, 2012 at 3:43 am
chewychewy (2/16/2012)
For example, my PLE is 300. Does it mean that one page of the above will be flushed out every 5 minutes to make space for a new page?
No, it means that on average each of those 10 pages will only remain in cache for 5 minutes before being evicted to make space for another. So, on average, every 5 minutes 10 pages will have been evicted and replaced.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 16, 2012 at 6:27 am
Thanks for your help, appreciate it!