April 6, 2017 at 4:34 am
Hi Folks,
we have a database that experiences a great deal of INSERT, UPDATE and DELETE activity. The database I am talking about has also been written about in this post: https://www.sqlservercentral.com/Forums/1869313/SQL-Server-Disk-Latency
What I am experiencing with this database can only be described as a sawtooth effect in PLE values, at relatively regular intervals. Over the course of around 1-2 hours the PLE value increases until it reaches a critical value, then it drops instantly to zero before gradually increasing again.
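For reference, a minimal way to watch this happen (just a sketch, not how our monitoring actually captures it) is to poll the Buffer Manager counter on a schedule, e.g. every minute:

-- Poll Page Life Expectancy; logging this over time reproduces the sawtooth chart.
-- The object name varies with the instance name, hence the LIKE.
SELECT  GETDATE()    AS sample_time,
        [object_name],
        cntr_value   AS ple_seconds
FROM    sys.dm_os_performance_counters
WHERE   counter_name = 'Page life expectancy'
        AND [object_name] LIKE '%Buffer Manager%';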
What I don't understand is what causes this dramatic drop.
My theory is this: because of the amount of DML activity in the database, the setting "Automatically Update Statistics = true" triggers an update of the statistics, albeit sampled. Because of the number of active indexes in the database this becomes a very memory- and CPU-intensive operation that eats all the available memory until the process is completed. After that the PLE rises again until the next statistics update is triggered.
Does that sound feasible?
We have checked for jobs running at these times, long-running queries, sub-optimally written queries (there are a lot of them!) and anything else that could be very memory-intensive at the time of the drop. Does anyone else have any ideas?
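One way we could test the statistics theory (a sketch, assuming we're on a build that exposes sys.dm_db_stats_properties) is to snapshot when statistics were last updated shortly after a PLE drop and see whether the timestamps line up:

-- When were statistics last updated, and how many modifications since?
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        s.name                    AS stats_name,
        sp.last_updated,
        sp.modification_counter,
        sp.rows,
        sp.rows_sampled
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.last_updated DESC;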
Many thanks and I look forward to your comments!
Regards,
Kev
April 6, 2017 at 6:10 am
From what's in your other post, I'd reiterate what I said there - sort your indexing out first and see how things change. Particularly the heaps.
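If it helps, a quick way to list the heaps and how big they are is something along these lines (a rough sketch, adjust as needed):

-- Heaps are tables with index_id = 0 in sys.indexes.
SELECT  SCHEMA_NAME(t.schema_id) AS schema_name,
        t.name                   AS table_name,
        SUM(p.rows)              AS row_count
FROM    sys.tables     AS t
JOIN    sys.indexes    AS i ON i.object_id = t.object_id AND i.index_id = 0
JOIN    sys.partitions AS p ON p.object_id = t.object_id AND p.index_id = 0
GROUP BY t.schema_id, t.name
ORDER BY SUM(p.rows) DESC;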
Things to check:
Appropriate fill factors on indexes.
Server has enough RAM and the max server memory setting is configured to a suitable value. Is this a single instance or are there multiple instances?
Any applications other than SQL running on the server?
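For the memory and fill factor points above, the checks are quick (column names in the second query assume SQL Server 2012 or later):

-- Current min/max server memory settings
SELECT  name, value_in_use
FROM    sys.configurations
WHERE   name IN ('min server memory (MB)', 'max server memory (MB)');

-- How much memory the box has vs. what the instance has committed / is targeting
SELECT  physical_memory_kb / 1024   AS physical_memory_mb,
        committed_kb / 1024         AS sql_committed_mb,
        committed_target_kb / 1024  AS sql_target_mb
FROM    sys.dm_os_sys_info;

-- Indexes with a non-default fill factor set
SELECT  OBJECT_NAME(object_id) AS table_name,
        name                   AS index_name,
        fill_factor
FROM    sys.indexes
WHERE   fill_factor <> 0;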
April 6, 2017 at 6:20 am
There are no other applications running on the server and the storage space is dedicated to the SQL Server.
RAM is not the issue. There is around 20GB available to the operating system so no starvation there either.
The problem is that the solution will be the equivalent of open heart surgery, so a very clear, provable and 5-sigma result set is important before we implement.
April 6, 2017 at 6:49 am
That's possible, but are your stats really auto-updated so frequently? I think it's more likely that you have a badly written query that's reading the whole of a large table from disk. If you happen to have a 20GB table, that's everything in your cache flushed out in one pop.
John
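One way to see what's actually sitting in the buffer pool (and what displaced everything else after a drop) is roughly this, run in the database in question just after PLE falls:

-- Which objects currently occupy the buffer pool, in MB.
-- (LOB allocation units are skipped here for brevity.)
SELECT  OBJECT_NAME(p.object_id)  AS object_name,
        COUNT(*) * 8 / 1024       AS buffer_mb
FROM    sys.dm_os_buffer_descriptors AS bd
JOIN    sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN    sys.partitions       AS p  ON au.type IN (1, 3) AND au.container_id = p.hobt_id
WHERE   bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY buffer_mb DESC;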
April 6, 2017 at 6:56 am
That is a really good point. I have noticed a couple of full table scans in the execution plans, and it's definitely a starting point!
Many thanks John. I'll post back when I have a few more answers.
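If anyone's interested, one quick way to see which tables get scanned most (since the last restart) before digging through the plans properly is roughly:

-- Tables/indexes with the most scans since the instance last started.
SELECT  OBJECT_NAME(us.object_id)  AS object_name,
        i.name                     AS index_name,
        us.user_scans,
        us.user_seeks,
        us.user_lookups
FROM    sys.dm_db_index_usage_stats AS us
JOIN    sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE   us.database_id = DB_ID()
ORDER BY us.user_scans DESC;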