After running sp_BlitzCache (over time) I have found that every day my plan cache is compiling new execution plans: 100% of my plan cache is built within the last 24 hours, each day.
I think these index rebuilds/reorganizes are causing this. I have found this job running every single night (I will switch it to once a week).
My question is: with these variables being commented out and not defined in this script, won't this run on every index without any parameters? I think the previous DBA's intention was to run reorganizes when fragmentation was over 5% and rebuilds when it was over 30%, but I think this is just running without parameters. Is that the case?
--sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[IndexOptimize] @databases = 'USER_DATABASES', @FragmentationLow = NULL,
--@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE', @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
--@FragmentationLevel1 = 5,@FragmentationLevel2 = 30,@LogToTable = 'Y'" -b
USE master
GO
EXECUTE dbo.IndexOptimize
@databases = 'USER_DATABASES', @FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 5,@FragmentationLevel2 = 30,@LogToTable = 'Y'
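For reference, a quick way to see which indexes those 5% / 30% thresholds would actually touch is to query sys.dm_db_index_physical_stats. A minimal sketch (run it in each user database; the 1000-page floor is a common rule of thumb, not part of the script above):

```sql
-- Check current fragmentation to see which indexes the 5% / 30%
-- thresholds in IndexOptimize would actually act on.
-- LIMITED mode is the cheapest scan level.
SELECT
    DB_NAME(ips.database_id)                    AS database_name,
    OBJECT_NAME(ips.object_id, ips.database_id) AS table_name,
    i.name                                      AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent >= 5
  AND ips.page_count >= 1000   -- very small indexes are rarely worth maintaining
ORDER BY ips.avg_fragmentation_in_percent DESC;
```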
September 3, 2021 at 9:46 pm
This is executing Ola Hallengren's scripts. The commented-out code was for version 2008 or lower; the uncommented version can be used for higher versions.
It is set up to use nearly the default values, which is not ideal and should be changed according to your databases' needs; they are rarely all the same.
Before disabling it or moving it to weekly, make sure it is not required on some tables. You may get a nasty surprise if a set of tables requires that rebuild to happen daily.
You can find a full explanation of that proc at https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html, and do look at the script itself to see its defaults. I always change them when I put them on any server, and as I said, some databases are special and may require particular settings on a per-index basis.
September 4, 2021 at 12:07 am
@fred Thanks, I will look into the details of the indexes that need it. I guess this would be due to heavy inserts causing daily fragmentation to exceed 30% and trigger a rebuild.
My main concern, however, is my plan cache rebuilding itself (100% within the last 24 hours, every day). I'm thinking this script running on all tables in all databases could be updating statistics on the indexes, so the execution plans are being recompiled?
September 7, 2021 at 5:52 am
Any index that is rebuilt also rebuilds statistics which will cause recompiles for any stored procedures that use the indexes.
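One way to see when statistics were last updated (and therefore roughly when dependent plans would have been invalidated) is STATS_DATE. A minimal sketch, run in the affected database:

```sql
-- When were statistics last updated? A stats update invalidates
-- cached plans for queries that depend on them, forcing recompiles.
SELECT
    OBJECT_NAME(s.object_id)            AS table_name,
    s.name                              AS stats_name,
    STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY STATS_DATE(s.object_id, s.stats_id) DESC;
```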
Also, you should stop using REORGANIZE. It doesn't work the way most people think it does, and it has the really nasty "feature" of removing all free space from an index without making any room above the fill factor, which perpetuates fragmentation.
Also, if you're doing any index maintenance on fragmented indexes that have a 0 or 100 fill factor, you're guaranteeing rapid fragmentation whether you're using REBUILD or REORGANIZE (and you REALLY DO need to stop using REORGANIZE). Contrary to popular belief, the almost universally accepted "Best Practice" of using REORGANIZE between 5 and 30% logical fragmentation and REBUILD above 30% is actually a "WORST PRACTICE", perpetuated by some unfortunate wording in the old version of Books Online prior to the changes MS made in the documentation on 20 April 2021.
It's better to do no index maintenance than it is to do it wrong, and if you're using the 5-30% rule to decide on REORGANIZE, you're doing it wrong... REAL WRONG!
Here's a link to a 'tube video to get you started on that subject. Ignore the fact that it seems to revolve mostly around the use of random GUIDs. It applies to almost all indexes... especially where the speaker proves what REORGANIZE actually does and why it shouldn't be used in such a panacea fashion.
And, yeah... I personally know the speaker and can vouch for him. 😉
https://www.youtube.com/watch?v=qfQtY17bPQ4
--Jeff Moden
Change is inevitable... Change for the better is not.
September 7, 2021 at 7:47 pm
Thanks Jeff -- I appreciate your insight. I am re-evaluating a lot of things the former DBA setup, and I have heard this from more than one good source that it is not healthy.
My main concern (at the moment) is trying to figure out what is refreshing ALL stats on my SQL instance. This past weekend I disabled the job that runs the rebuilds/reorganizes to see if it was the culprit -- the problem is still there. When I run sp_BlitzCache I get this every single day:
You have 14182 total plans in your cache, with 100.00% plans created in the past 24 hours, 98.00% created in the past 4 hours, and 98.00% created in the past 1 hour.
I also get, right underneath that, Forced Serialization, Compilation Timeout, and Plan Warnings which might point to bad queries by developers.
I have Brent Ozar's list of possibilities, but not quite sure how to check for them, any thoughts?
Someone restarted the server recently NO
Someone ran DBCC FREEPROCCACHE NO
Somebody’s addicted to rebuilding indexes and updating stats (which invalidates plans for all affected objects) NO
The server’s under extreme memory pressure
The developers aren’t parameterizing their queries
The app has an old version of NHibernate with the parameterization bug
The .NET app calls .Parameters.Add without setting the parameter size
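If you want to reproduce the plan-cache age numbers without sp_BlitzCache while working through that list, sys.dm_exec_query_stats exposes each cached plan's creation_time. A rough sketch:

```sql
-- Age distribution of the plan cache, similar to what sp_BlitzCache
-- reports: plans created in the last 1 / 4 / 24 hours vs. the total.
SELECT
    COUNT(*) AS total_plans,
    SUM(CASE WHEN qs.creation_time >= DATEADD(HOUR, -1,  GETDATE()) THEN 1 ELSE 0 END) AS last_1_hour,
    SUM(CASE WHEN qs.creation_time >= DATEADD(HOUR, -4,  GETDATE()) THEN 1 ELSE 0 END) AS last_4_hours,
    SUM(CASE WHEN qs.creation_time >= DATEADD(HOUR, -24, GETDATE()) THEN 1 ELSE 0 END) AS last_24_hours
FROM sys.dm_exec_query_stats AS qs;
```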
September 7, 2021 at 9:54 pm
Ola's utility - and many procedures that were built to do the same thing - will have a post-step after an index reorganize that updates the statistics. So either a rebuild or reorganize will also update the statistics.
If the statistics have been updated - the first time a query is executed against that table a new plan will be generated.
The real question you need to ask: Is this causing an issue? Normally it takes little time to generate a plan and has little to no effect on overall performance. Instead of focusing on this as the issue, you should be focused on what indexes need to be rebuilt - and how often. If you have indexes being selected every day - it is possible the reason for that is an incorrect fill factor or incorrect clustered index. You really should be looking at that instead of trying to fix the plan cache.
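To see which indexes are actually being selected for maintenance every day, and whether writes dominate reads on them, sys.dm_db_index_usage_stats is one starting point. A sketch for the current database (counters reset on instance restart):

```sql
-- How is each index actually used? Heavy user_updates relative to
-- seeks/scans can point at a fill factor or clustered key problem.
SELECT
    OBJECT_NAME(us.object_id) AS table_name,
    i.name                    AS index_name,
    us.user_seeks,
    us.user_scans,
    us.user_lookups,
    us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id
 AND i.index_id  = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_updates DESC;
```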
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
September 8, 2021 at 3:27 am
That's an interesting point. Sp_BlitzCache told me a top priority was plans being built daily, but our server is not really taxed. I will check the indexes as well. Do you normally leave your fill factor lower, maybe 70%, for tables that have a lot of inserts going on? (I realize this is a huge topic in itself).
September 8, 2021 at 5:49 am
That's an interesting point. Sp_BlitzCache told me a top priority was plans being built daily, but our server is not really taxed. I will check the indexes as well. Do you normally leave your fill factor lower, maybe 70%, for tables that have a lot of inserts going on? (I realize this is a huge topic in itself).
If the key on the clustered index is ever-increasing, the answer is a resounding "No".
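For completeness, lowering the fill factor is a per-index decision rather than a server-wide one. A hypothetical example (the index and table names are made up; ONLINE = ON requires Enterprise Edition):

```sql
-- Lower the fill factor only on an index that takes "mid-page"
-- inserts; leave indexes with ever-increasing keys at the default.
ALTER INDEX IX_Orders_CustomerName
    ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);
```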
--Jeff Moden
Change is inevitable... Change for the better is not.
You have 14182 total plans in your cache, with 100.00% plans created in the past 24 hours, 98.00% created in the past 4 hours, and 98.00% created in the past 1 hour.
That sounds like a couple of serious problems. The thing I'd be looking for is what is causing the plan cache to hurl in such a brief time and then I'd evaluate the number of queries.
Just to ask the question, what's the largest seriously used index, how much RAM is allocated to the server (VM I assume) and how many SQL Server instances are on the box? Also, any monitoring services that might be clearing the cache for "measuring purposes"?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 8, 2021 at 1:24 pm
It is a single instance on the VM, but I have 180 small databases totaling 2 TB of data files. Only 5 or 6 databases should be very busy (my estimate, I haven't measured this yet), and the rest not so busy. I have perfmon taking 5-minute samples twice a day. Perhaps I will disable the perfmon samples (baselining) and see if that resolves it. Today I'll learn how to measure index usage, but I'm leaning away from a few nasty indexes being the cause: the plans are 100% being flushed, so a handful of indexes alone shouldn't cause the whole instance to have statistics amnesia (unless, of course, those nasty ones are hogging all the memory). It has a good amount of RAM: 32 GB, and on a busy day usage only reaches 70%; max server memory is set to 24 GB. Since developers have had free rein on this server, I'm thinking it could be queries without parameters causing this(?). However, that still shouldn't dump 100% of the cache, so I think this might be an instance-level issue.
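One quick check for the memory-pressure possibility on Brent's list: a persistently low Page Life Expectancy suggests the buffer pool (and with it the plan cache) is under pressure. A sketch:

```sql
-- Page Life Expectancy in seconds. A sustained low value can mean
-- SQL Server is under memory pressure and evicting cached plans.
SELECT object_name, counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```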
September 8, 2021 at 1:55 pm
So, to ask the question, have you stopped the nightly index runs, even for just one night, to help you make an assessment?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 8, 2021 at 3:42 pm
Yes, stopped the Index maintenance job for 2 days with no change in results.
September 8, 2021 at 3:58 pm
I did find an sp_updatestats command that runs every night -- but it is pointed at a single database (I thought I had found the issue for a second). After tonight (perfmon disabled for tonight; we'll see how that goes) I will disable this too, just to be sure it's only affecting the one database.
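To confirm exactly which maintenance jobs actually ran overnight before disabling anything else, msdb's job history can be queried directly. A sketch covering the last two days:

```sql
-- What did SQL Agent actually run recently? sysjobhistory stores
-- run_date as an integer in YYYYMMDD form.
SELECT
    j.name AS job_name,
    h.step_name,
    h.run_date,
    h.run_time,
    h.run_status   -- 1 = succeeded, 0 = failed
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
  ON j.job_id = h.job_id
WHERE h.run_date >= CONVERT(INT, CONVERT(CHAR(8), DATEADD(DAY, -2, GETDATE()), 112))
ORDER BY h.run_date DESC, h.run_time DESC;
```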
May 2, 2024 at 12:36 am
Hi Steve: I am facing a similar issue on one of our databases. 100 % new plans in the last hour. I was wondering if you ever found the cause of your issue.
We do index maintenance once a week generally (with an exception for a few big indexes, which get it 2x per week).
We do have a monitoring service in place that measures CPU and disk space, and also performance metrics such as which procedures get called most often and which have the longest average execution time (for example).
May 2, 2024 at 3:32 am
Hello Orange - yes I did. There were tons of queries that were not parameterized. For years, developers had structured their apps to embed literals into the query text rather than capture user input in a parameter. As you might know, queries with different literals don't reuse cached plans; each distinct query text compiles a new plan, much like using OPTION (RECOMPILE), whereas a parameterized query will reuse a plan. I discovered this by getting familiar with the sp_Blitz package -- it is one of the basic stats from those sprocs. The developers started to parameterize their queries (going forward, anyway) and the problem slowly began to fade.
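The literal-vs-parameter difference can be sketched in T-SQL itself (dbo.Orders and @CustomerId are illustrative names, not from the thread):

```sql
-- One plan per distinct literal value (each text compiles separately):
-- SELECT * FROM dbo.Orders WHERE CustomerId = 42;
-- SELECT * FROM dbo.Orders WHERE CustomerId = 43;  -- compiles again

-- One reusable plan, regardless of the value passed:
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId INT',
    @CustomerId = 42;
```

Client code that binds parameters (e.g. .NET Parameters.Add with an explicit size) ends up sending the sp_executesql form, which is why the plan cache stabilizes once apps are parameterized.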