August 24, 2007 at 1:34 pm
Hey everyone,
Last night I upgraded one of our production SQL 2000 servers to SQL 2005 and patched it to SP2a. After the upgrade and patching, I rebuilt all indexes and updated all statistics for every database on the server without issue. Fast forward to this morning: one of the applications with its back end on this server started experiencing timeouts, which I tracked back to a stored procedure using a different execution plan than it had been using previously. As a temporary fix I am forcing it to use the old execution plan via OPTION (USE PLAN N'[Your Plan Here]'), but I don't consider this a permanent solution. Ideally, the end goal is to remove the forced execution plan and have the query pick up an optimized plan (either the one it's being forced to use now or an even more efficient one) automatically. Has anyone experienced this issue before? Any ideas on how to fix it so I can remove the forced execution plan? And any suggestions on how to prevent this from happening again with a different stored procedure?
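For reference, here's roughly what the temporary workaround looks like (the table and column names are just placeholders, and the N'...' literal has to be the complete XML showplan captured from before the upgrade, e.g. via SET SHOWPLAN_XML ON or Profiler's Showplan XML event -- truncated here):

-- Hypothetical sketch of the forced-plan hint; the string must
-- contain the full XML showplan, elided below for brevity.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (USE PLAN N'<ShowPlanXML ...>...</ShowPlanXML>');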
Thanks,
John
August 24, 2007 at 2:37 pm
When we upgraded our PeopleSoft Financial System database from SQL 2000 to SQL 2005, we had similar issues. We initially used DTA to identify new indexes, and ended up adding 37 new indexes to the database. A "fix" has since been released, which the PeopleSoft administrator is testing in development, but the indexes we created earlier are still in production at this time.
August 27, 2007 at 6:57 am
I've run the query through DTA and it gave me an estimated performance increase of 27% when the query was running around 1:18 (which puts the estimated completion time around 56 seconds). I then forced the query to use the old execution plan, and it now runs in the 5-second range (a 94% performance increase). Because of this, in my opinion, indexes are not the issue here. It's the optimizer coming up with an inefficient plan, and my goal is to get it to use the old plan (the one the query is forced to use) or a more efficient one automatically.
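In case it helps anyone reproduce this, the plan I'm forcing was pulled out of the plan cache with something like the following (the LIKE filter is a placeholder for the actual procedure name):

-- Sketch: grab the XML plan currently cached for the procedure
-- so it can be saved off, compared, or reused in a USE PLAN hint.
SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%MyProblemProc%';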
John
August 27, 2007 at 7:56 am
I have what could be a related problem. We have a frequently used stored procedure which used to be slow but which I have since rewritten. Profiler shows reads of over 600,000 and an execution time of about 5 seconds for the old query, and reads of about 10,000 and a 500 ms execution time for the new version. For some reason SQL Server decides to go back to its old habits from time to time, and Profiler shows numbers that suggest it's using an old execution plan.
I can also get a situation where the query executed by the application takes 6 seconds, while the same query run from SQL Server Management Studio with the same parameters (copied and pasted from Profiler) and the same username takes 500 ms.
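One thing I still need to rule out is whether the application and Management Studio are running with different SET options and therefore getting separate cached plans; something like this should show it (the LIKE filter is a placeholder):

-- Sketch: list the set_options value for each cached copy of the
-- statement; different values mean separate plan cache entries.
SELECT cp.usecounts, pa.value AS set_options
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%usp_MyProc%';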
When I issue an execute WITH RECOMPILE it sorts it for the current session, and if I issue DBCC FREEPROCCACHE it sorts it for all users, but not permanently. This was a SQL 2000 32-bit database which was migrated to a SQL 2005 64-bit server.
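Rather than flushing the whole cache, I've also been marking just the one procedure for recompilation (the procedure name is a placeholder):

-- Invalidates only this procedure's cached plan; it will be
-- recompiled on its next execution.
EXEC sp_recompile N'dbo.usp_MyProc';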
The procedure is called by many different usernames (using SQL authentication) and the application business layer (.NET C#) is not using the dbo. prefix, so I'm looking into whether there is locking, and perhaps timeouts on compile locks, forcing the use of an older inefficient plan. http://support.microsoft.com/kb/271509/
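If it does turn out to be the missing prefix, the fix on the application side should just be to schema-qualify the call, e.g.:

-- Unqualified: SQL Server first looks for a procedure owned by the
-- caller, taking a compile lock for each distinct user along the way.
EXEC usp_MyProc;
-- Qualified: resolves straight to the dbo-owned procedure.
EXEC dbo.usp_MyProc;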
August 27, 2007 at 2:12 pm
Roger, thanks for the reply, but I don't believe that's the case in my situation, as my stored procedure isn't called a great deal. Your issue may already have a resolution, though; check out this KB article and see if it helps.
August 28, 2007 at 7:35 am
Well, I still don't know why the SQL 2005 upgrade caused such a drastic change in execution plan generation, but to solve the problem (so we can again rely on automatic plan generation) we added some new indexes and statistics to the database. We had done this before, but due to the nuances of DTA, the first round of indexes wasn't cutting it in terms of speed. After some more analysis we got the system back to its old speed. Again, rather odd, but for anyone else who deals with this in the future: look into adding additional indexes and statistics, and if that still doesn't work, you can always force a more optimized execution plan (so long as you know how to generate it).
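For anyone curious, the additions were just ordinary index and statistics statements along these lines (object names here are made-up examples, not our actual schema):

-- Hypothetical examples of the kind of objects we added.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate);
CREATE STATISTICS ST_Orders_ShipRegion
    ON dbo.Orders (ShipRegion) WITH FULLSCAN;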
John
August 29, 2007 at 5:09 am
If your procedure uses table variables in complex joins, that can do it.
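Table variables don't carry statistics, so the optimizer tends to assume they hold very few rows, which can wreck the join strategy. A temp table is the usual workaround, roughly:

-- Table variable: no statistics, the optimizer guesses the row count.
DECLARE @Ids TABLE (OrderID INT PRIMARY KEY);
-- Temp table: gets real statistics, so joins are costed properly.
CREATE TABLE #Ids (OrderID INT PRIMARY KEY);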
August 29, 2007 at 7:35 am
No, it's not using any table variables. Complex joins, sub-queries, simple parameters, yes, but no table variables or anything outside the normal bounds of a stored procedure.