May 18, 2010 at 1:34 pm
I had a junior DBA run DBCC DROPCLEANBUFFERS and FREEPROCCACHE on a production environment, bringing the application and queries down to a crawl. I knew this is strongly discouraged on production, since it flushes the data out of the buffer pool and forces plans for all queries to be recreated (I believe the first one impacted us the most). My question is: when I restart the server, typically after applying patches or some such thing, I do not see a similar performance lag. I am assuming the data and procedure caches get flushed during a restart too, so why would the system not suffer a lag after a reboot? I am just curious to know.
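For reference, this is essentially what was run (my understanding of what each command does is in the comments; I believe that on newer versions FREEPROCCACHE can also take a specific plan handle, which would be far less disruptive):

-- Removes all clean (unmodified) pages from the buffer pool,
-- so subsequent queries have to read their data back from disk.
DBCC DROPCLEANBUFFERS;

-- Clears the entire plan cache, forcing every query and stored
-- procedure to be compiled again on its next execution.
DBCC FREEPROCCACHE;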
Thank you.
May 18, 2010 at 4:19 pm
Well, I know that in my environment we only restart mission essential SQL instances either during unplanned outages to fix something completely broken or during approved downtimes for maintenance. In both instances user activity and therefore interruption is minimal, so when the server comes back up there are not many requests hitting the server.
Were DROPCLEANBUFFERS and FREEPROCCACHE run during peak usage times? If so, I would imagine the server had to contend with many requests that now had to wait for new query plans to be built and other operations to take effect before results could be handed back. Multiply those operations by however many people are trying to use the database at once, and I would imagine the server is going to be slower while it tries to manage all of those requests.
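If you want to watch that happening, a rough sketch along these lines should show how empty the plan cache is and how much compiling is going on right after the flush (the performance counters are cumulative, so you would sample them twice and take the difference):

-- How many plans are currently cached, by type
SELECT objtype, COUNT(*) AS cached_plans
FROM sys.dm_exec_cached_plans
GROUP BY objtype;

-- Cumulative compile counters; sample twice to get a rate
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');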
Joie Andrew
"Since 1982"
May 18, 2010 at 9:14 pm
I would be interested as well to learn how SQL Server handles this differently, because there are apparently many cases when the whole procedure cache is cleared (http://support.microsoft.com/kb/917828) and it doesn't affect performance as much as when you run the DBCC commands manually.
May 19, 2010 at 7:09 am
Thank you Joie. Yes, of course our downtimes are only when there is low user activity, so we may not have experienced the same volume of users as we have during peak times (when the DBA ran the DBCC commands). But I tested one specific procedure on which we had issues, both after manually clearing the cache and after a reboot, and there was a significant performance difference; the latter was faster. Unfortunately I did not capture execution plans and such to see what the difference was, but it was clearly different. Yes, I would like to know how SQL Server handles it internally too.
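If I get the chance to test it again, something along these lines is probably what I should capture in both scenarios (the procedure name below is just a placeholder for the one we had issues with):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Placeholder name; substitute the actual procedure
EXEC dbo.usp_MyProblemProc;

-- Then look at the plan that ended up in the cache for it
SELECT qs.execution_count, qs.total_worker_time, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.objectid = OBJECT_ID('dbo.usp_MyProblemProc');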
June 6, 2011 at 3:50 am
Hi All,
Currently I am also facing the same problem in my application. The scenario is as follows.
The application is in the TMS domain. It contains shipments, purchase orders, and the like. A shipment contains a number of shipment orders.
When I open a shipment, it needs to show around 600 shipment orders related to that shipment. For that I wrote a stored procedure which returns 30 result sets to my Java class. Each result set contains many rows; for example, for a shipment with 600 shipment orders, the stored procedure returns around 11,800 records across the 30 result sets.
After fetching those records, I display them in the UI.
This sometimes works and sometimes does not. The error reported says something like the SQL state is null or 0. If I run the DBCC DROPCLEANBUFFERS command on the database side, it fetches and displays the records properly.
Can you please let me know why it shows proper results after clearing the buffers in the database, but otherwise throws an error?
The same procedure is called around 1,000 times daily. I suspect it is a performance effect, but why does it return the records only after the buffers are cleared?
Thanks for your help.
October 10, 2011 at 2:22 pm
I would also like to know how/why rebooting doesn't seem to flush the cache. Thanks to the OP. 🙂
October 11, 2011 at 7:58 pm
If clearing the procedure cache actually makes the script run faster, it is very likely that your database statistics are out of date, so the execution plan SQL Server produced is not the best one.
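If that is the case, refreshing the statistics directly is usually a better fix than flushing the cache. A minimal sketch (the table name is just a placeholder):

-- Update statistics for one table; FULLSCAN is heavier but more accurate
UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;

-- Or refresh statistics across the whole database
EXEC sp_updatestats;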
October 12, 2011 at 12:34 am
I flush the cache before running performance reports when I'm interested in a particular period of the day (two hours in the morning, for instance). The reports show object execution statistics, total CPU, etc. since a stored proc was first placed into the cache. Very rarely, a stored proc's execution plan goes bad due to stale statistics, as was mentioned. Check your object execution statistics to see if there is now a rogue query. Sometimes it only takes one that is being called frequently to bring a server to its knees; an index scan instead of a seek on a large table, for instance.
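Something like this is the kind of check I mean; it lists the most expensive cached statements by total CPU since they entered the cache (a rough sketch, the column choices are just my preference):

SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time,   -- total CPU time, in microseconds
       qs.total_logical_reads,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;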
October 12, 2011 at 9:10 pm
Execution plans go bad due to stale statistics. The statistics are then updated (if auto-update is enabled: http://www.sqlskills.com/blogs/kimberly/post/Auto-update-statistics-and-auto-create-statistics-should-you-leave-them-on-andor-turn-them-on.aspx) at certain points as table records change (inserts, updates, or deletes); the plan is then invalidated and a recompile is issued to create a new plan based on the object's new statistics.
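A quick way to check whether those options are turned on for your databases (just a sketch):

SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases;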
You can also use some trace flags to look into some of the recompiles (http://support.microsoft.com/kb/195565). That might be of help if you have a server to play on.
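If you would rather not use trace flags, an alternative (not what that KB article describes) is to look at plan_generation_num in the query stats DMV, which increments each time a statement's plan is recompiled:

-- Statements whose plans have been recompiled most often
SELECT TOP (20)
       qs.plan_generation_num,
       qs.execution_count,
       st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.plan_generation_num > 1
ORDER BY qs.plan_generation_num DESC;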