exec dbcc freeproccache only solution?

  • Daily we have periods of very poor performance. We tracked it down to some stored procedures (we are in the process of optimizing them) that use a lot of our CPU when users execute them from time to time. The processor runs between 60-100%.

    Then we execute the commands to clear the cache: DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS. This seems to resolve the performance degradation; CPU usage drops to 30-40%. (This is in our production environment.)

    I read that it is not good to run these commands in a production environment, but this seems to be the only solution for our recurring performance issues. Why is it bad to run this in a prod environment? Also, are there any alternatives?
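    For reference, the workaround boils down to the following (a sketch of the commands described above; CHECKPOINT is added because DBCC DROPCLEANBUFFERS only removes clean pages, so dirty pages should be written to disk first):

    -- Clear the buffer pool and the plan cache (the workaround described above).
    CHECKPOINT;              -- write dirty pages so DROPCLEANBUFFERS can remove them
    DBCC DROPCLEANBUFFERS;   -- empty the buffer pool
    DBCC FREEPROCCACHE;      -- empty the plan cache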

  • I have a similar problem, posted in more detail at . We've tried a few different DBCCs like this, and were concerned about the negative performance impact of clearing out the cache. So far we've not been able to quantify any negative impact; it has been absolutely negligible.

  • These DBCC commands remove all plans from the cache, and chances are this is simply a single (or maybe a few) bad plan(s). If you know the name of the stored procedure, you can simply mark the proc(s) in question for recompilation (e.g. EXEC sp_recompile 'usp_proc_name'), or evict just the offending plan, as in the sketch below.
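    Alternatively, a single plan can be evicted by its handle rather than flushing the whole cache (a minimal sketch, assuming SQL Server 2008 or later; 'dbo.usp_proc_name' is a placeholder):

    -- Find the cached plan for one procedure and evict only that plan.
    DECLARE @plan_handle varbinary(64);

    SELECT @plan_handle = cp.plan_handle
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.objtype = 'Proc'
      AND st.objectid = OBJECT_ID('dbo.usp_proc_name');  -- placeholder name

    IF @plan_handle IS NOT NULL
        DBCC FREEPROCCACHE(@plan_handle);  -- removes just this plan, not the whole cache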

  • Just a suggestion 🙂

    If we use DBCC FREEPROCCACHE, it clears execution plans from the cache globally, i.e. removes all elements from the procedure cache.

    We can see the contents of the procedure cache using the DMV query below:

    SELECT bucketid,
           (SELECT text FROM sys.dm_exec_sql_text(plan_handle)) AS SQLStatement,
           usecounts, size_in_bytes, refcounts
    FROM sys.dm_exec_cached_plans
    WHERE cacheobjtype = 'Compiled Plan'
      AND objtype = 'Proc'

    To flush the procedure cache for a particular database only, we need to use the command below:

    DBCC FLUSHPROCINDB(<db_id>)

    After running this command, we can query the DMV again and see that the row count has dropped, as the plans for that particular database have been removed.
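    As a concrete example of the FLUSHPROCINDB step (a sketch only; the command is undocumented, and 'YourDatabase' is a placeholder name):

    -- Flush cached plans for a single database only.
    DECLARE @db_id int;
    SET @db_id = DB_ID(N'YourDatabase');  -- placeholder database name
    DBCC FLUSHPROCINDB(@db_id);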

    It is better not to run DBCC FREEPROCCACHE, as it clears the entire cache (i.e. even for databases whose cached query plans are efficient).

    M&M

  • I hear you guys, but it seems like you're making recommendations from an ivory tower. Are you speaking from experience with this precise problem? Or just general knowledge?

    When your system is effectively down because of this problem, and users are unhappy, there is a lot of pressure to resolve the issue rather than investigate it. Add to this the fact that the bigger DBCC appears to have no discernible negative impact, and I cannot agree with your recommendation (unless someone already knows precisely how to scope down their DBCC to address the problem). If the DBCCs caused any pain, it would be another story.

  • Hey, you are right. It is tough to wade through this one. Even experienced folks are wary of signing off on whether DBCC FREEPROCCACHE is absolutely harmless or not (i.e. in a production environment). On some instances there are a lot of user databases, and it would be better not to clean up the procedure cache completely. Yes/no?

    It is always better to get to the root cause of the problem, i.e. identify the offending queries and do some tuning, rather than freeing up the cache.

    M&M

  • Hey Mike, I was speaking more toward the original post, where they stated that they knew the procs that were causing the issue. In your case, I agree that at this point, if clearing the entire cache isn't causing noticeable impact, then by all means flush everything. I know I do it when the server is unresponsive and things point to a bad plan, and I am sure most every DBA has as well.

    However, I would either get some sort of 3rd-party monitoring tool in place (like Ignite, etc.) or start setting up alerts triggered on an increased number of reads for a proc: query sys.dm_exec_query_stats and store the values for the procs found throughout the day; then, if you query it and find that reads are out of line (say, 10x the average of the last 24 hours), throw an alert. At the very least you would have some historical data to refer back to when trying to track down which procs are getting the bad plans, which you can then use to see why (bad stats, mass data changes, etc.).
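    For example, a minimal baselining sketch along those lines (dbo.ProcReadHistory is a hypothetical table; sys.dm_exec_query_stats holds cumulative counters that reset on restart or plan eviction, so compare deltas between snapshots):

    -- Hypothetical history table for per-procedure read baselines.
    CREATE TABLE dbo.ProcReadHistory (
        capture_time    datetime NOT NULL,
        database_id     int      NOT NULL,
        object_id       int      NOT NULL,
        execution_count bigint   NOT NULL,
        logical_reads   bigint   NOT NULL
    );

    -- Snapshot cumulative stats for statements that belong to modules (procs).
    INSERT INTO dbo.ProcReadHistory
        (capture_time, database_id, object_id, execution_count, logical_reads)
    SELECT GETDATE(), st.dbid, st.objectid,
           SUM(qs.execution_count), SUM(qs.total_logical_reads)
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.objectid IS NOT NULL   -- ad hoc batches have a NULL objectid
    GROUP BY st.dbid, st.objectid;

    Run on a schedule, the deltas between snapshots give the rolling average to compare against when deciding whether a proc's reads are out of line (e.g. the 10x threshold mentioned above).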

  • Well spoken, good points. I appreciate your input. But I think you're assuming that the cause of the problem is a "bad plan" in the classical sense of that term. I'm not convinced that's the problem.

    We're used to dealing with poor query plans, and I'd like to think that if that were the problem, we wouldn't be posting here. Our data distribution doesn't change radically over time, and we rarely see plans change from good to bad on their own. When it does happen, these cases have proven to be readily identifiable, and in my experience they never cause such catastrophic symptoms.

    Anyway... initially, that is how we attempted to debug this problem. Freeing the cache was a last resort when our normal methodology failed to find a specific problem.

    When we resort to freeing the cache, I think we're probably treating the symptom of a SQL Server bug. If I had had Ignite in place when this occurred, it might have helped prove it one way or the other, but I don't. Hopefully soon.

