August 25, 2010 at 10:50 am
Hi,
For the past 10 days, one of our servers, which hosts many application databases, has been having a problem with low Page Life Expectancy. We have the Idera tool, which is reporting values between 50 and 200. I did some analysis and found that when a resource-intensive operation runs, like a backup, index maintenance, or some batch process, the PLE falls from tens of thousands to a few hundred. We have been running these processes before and never had an issue with PLE this frequently.
Say the indexing process runs at 3 am and the PLE falls to a couple of hundred. I expect it to bounce back into the thousands during the day, but this is not happening.
I know this issue has been discussed a few times here, and I tried to look at options like disabling the antivirus, etc., but I am not able to come up with a proper conclusion.
Our config is: Windows Server 2008, SQL Server 2008, 8 GB of memory (6.5 GB allocated to SQL Server).
Asking for more memory is an option I am considering, but I am not sure it will solve the issue. One interesting thing I found is that even though SQL Server has 6.5 GB allocated, the total memory used on the server is 7.8 GB, which I feel is very high for the OS.
I would like to know what I should check to solve the problem.
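For reference, here is the sort of query I have been using to spot-check PLE alongside the Idera tool (just a rough sketch against sys.dm_os_performance_counters; the object_name filter assumes a default instance):

-- Current Page Life Expectancy, in seconds, from the buffer manager counters
SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';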
Regards,
Apurva
August 25, 2010 at 11:59 am
There is no way to ever really prevent the cache from being bulldozed when you do a large operation like a reindex/huge report/etc, so you basically just have to live with it. SQL will cache that new data being pulled no matter what.
As to why it's not going back up... PLE should drop significantly when you do a large process like a backup or reindex, and then slowly climb back up. You said it is regularly in the tens of thousands, so if we go with 20,000 (PLE is measured in seconds), that is a bit over 5 hours. You can't realistically expect it to climb back to that number until 5 hours AFTER all the common data has been fetched for the day in the morning, which can take a few hours in itself. As long as your cache isn't being plowed out multiple times a day during business/traffic hours, you're probably fine. PLE > 600 typically means the server isn't stressed for memory.
Looking at your stats though, you should get more memory for that box. Remember the max memory you set for SQL Server only covers the cache. You have MTL (MemToLeave) on top of that, plus overhead from services, users logged in, system processes, extended SPs, paging... 1.5 GB left for the OS is simply not enough. I leave 4-6 GB for the OS on a dedicated SQL Server box.
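If you do add RAM and want to adjust the setting afterwards, it's just sp_configure (sketch below; the 6144 value is only illustrative, size it so the OS keeps headroom):

-- Example only: set max server memory (MB); the value shown is illustrative, not a recommendation
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;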
August 26, 2010 at 11:41 am
Thanks Derrick.
I made an observation: whenever SQL Server memory usage reaches 100%, the page life expectancy drops to a couple of hundred and stays there for a long time. Is this normal to expect when SQL Server reaches 100% memory utilization, or is there some setting that can help me control this behavior?
August 26, 2010 at 12:54 pm
I would leave the max memory setting right where you have it. For the amount of RAM you have, you are at the recommended setting... if you had 32 GB of RAM, I would say give the OS 4 GB as suggested before.
You say 100% utilization; which counter are you looking at to determine that?
Total vs. target memory, maybe?
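If that's the pair you mean, they are easy to compare directly (rough sketch; both counters report in KB):

-- Compare how much memory SQL Server is currently using vs. how much it wants
SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');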
I would check your error log as well. If this is occurring after indexes are rebuilt, you might see an error message saying the SQL Server working set has been paged out. That is an indication that the OS is in need of memory and has asked SQL Server to give some back. In 2008 this is not as much of a problem as in 2005, but it's possible. If you see this, then YES, you need to give the OS a little more memory.
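One quick way to scan for that message without opening the log in SSMS (a sketch using the undocumented xp_readerrorlog; the search string is just an example):

-- Search the current SQL Server error log (file 0, log type 1) for paged-out working set messages
EXEC xp_readerrorlog 0, 1, N'paged out';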
August 26, 2010 at 12:58 pm
Here are the recommendations for memory settings...
August 26, 2010 at 1:02 pm
Thanks 456789psw.