May 19, 2011 at 10:57 am
We've run into a few spots lately where we see really lousy performance on environments after they're restored to non-prod from prod, migrated between servers, or otherwise moved around.
We can see a query, isolate the tables/procs involved, and run sp_recompile against all of them with no benefit.
But DBCC FREEPROCCACHE has an instant and huge impact.
It seems that if we've forced a recompile on the related objects, a specific query should improve, and hitting the whole instance shouldn't be necessary.
any thoughts?
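For reference, these are the two approaches being compared (the table name is a placeholder):

```sql
-- Per-object: marks all cached plans that reference the object
-- for recompilation on their next execution
EXEC sp_recompile N'dbo.SomeTable';  -- dbo.SomeTable is a placeholder

-- Instance-wide: discards the entire plan cache,
-- affecting every database on the server
DBCC FREEPROCCACHE;
```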
May 19, 2011 at 1:19 pm
DBCC FREEPROCCACHE is OK to use in dev, but it's not going to help you in production.
You need to figure out what's being populated in your cache.
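A quick way to look at what's in the cache is to query the execution DMVs, for example (a sketch; adjust the TOP and ORDER BY to taste):

```sql
-- Most expensive cached statements by total CPU time,
-- with the associated statement text
SELECT TOP (20)
    qs.total_worker_time,
    qs.execution_count,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```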
May 19, 2011 at 1:42 pm
Alex,
thx for the response and link.
As I understand it, forcing a recompile on a table or proc should flush the query plans associated with that object. The scenario we've dealt with is newly cached objects for a database that has 'just arrived' on a server (by restore or attach) and queries that run horribly up front.
Had a situation today where a particular query ran 14 minutes. The plans didn't look terribly different before and after (though I didn't save them and wish I had). Statistics should persist with the database as it's moved; regardless, I updated them and forced a recompile on all involved tables, and the query still sucked.
Ran DBCC FREEPROCCACHE and it went from 14 minutes to < 2 seconds.
We've seen this scenario a few times in recent months. The first time, it was solved with a server restart (which was convenient in that scenario, as we were in a maintenance period).
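As an aside, on SQL Server 2008 and later there's a middle ground between sp_recompile and flushing the whole cache: evicting just the one offending plan. A sketch (the search string is a placeholder):

```sql
-- Find the plan handle for the slow statement
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%distinctive fragment of the slow query%';

-- Then evict only that plan, leaving the rest of the cache intact:
-- DBCC FREEPROCCACHE(<plan_handle from above>);
```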
May 19, 2011 at 2:41 pm
That's probably because there's no data in cache and the server must read it from disk. So the first run is awful, and then it gets far faster once the data is accessed from RAM.
Try re-running the query without a restore or reboot, but after running FREEPROCCACHE and DROPCLEANBUFFERS. That should replicate the behavior.
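That repro would look something like this (only on a test box, since both commands hit the whole instance):

```sql
CHECKPOINT;              -- flush dirty pages first so DROPCLEANBUFFERS clears everything
DBCC FREEPROCCACHE;      -- empty the plan cache
DBCC DROPCLEANBUFFERS;   -- empty the buffer pool, forcing reads from disk
-- now run the suspect query and compare against a warm-cache run
```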
May 19, 2011 at 2:43 pm
Ninja, thx for the reply. We can see the query eventually complete, which should cache it (though I haven't verified that).
It's not just the first run of a query that sucks. It's every run of a query or proc in these instances, with no difference in supplied parameters in the case of SPs.
May 19, 2011 at 2:51 pm
Then please post both plans (the correct prod one and the incorrect dev one).
Are prod and dev at exactly the same patch level (service packs and CUs)?
May 20, 2011 at 6:28 am
But the first time you move a database onto a new server, there's nothing of that database in cache. At best you're looking at memory pressure that is exposed when you start running a new set of procedures and they have to compete for resources with existing stuff in cache. That's a guess, but nothing else makes sense.
Is it possible that you're getting bad parameter sniffing? Are both runs of the query, before and after the cache flush, using the same parameter? Are the execution plans the same?
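One quick way to test for sniffing is to force a fresh compile for the current parameter values and compare against the cached-plan run (proc and parameter names below are placeholders):

```sql
-- Compile a throwaway plan for exactly these parameter values
EXEC dbo.SomeProc @SomeParam = 42 WITH RECOMPILE;

-- For an ad hoc statement, the equivalent is a query-level hint:
-- SELECT ... FROM ... WHERE col = @p OPTION (RECOMPILE);
```

If the WITH RECOMPILE run is fast while the normal run is slow, a sniffed plan is the likely culprit.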
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 20, 2011 at 9:56 am
In the two scenarios I recall most closely, neither case could have been parameter sniffing.
In the first, the proc doesn't take any parameters. It's used by the product to identify the next widget to process.
In the case yesterday, we extracted an example of the sp_executesql command and ran it interactively: once very poorly and once very quickly, with a DBCC FREEPROCCACHE in between.
thanks for the feedback folks!