Have to use force recompile to get the SP to use indexes

  • Hi,

    Occasionally we discover "heavy" (lots of reads, long duration) stored procedures in our production environment. Upon closer inspection we see that if the same SP is run directly from Management Studio it typically uses 5-12 reads, while when it is run by the application it uses maybe 1,000,000 reads - so it is obviously doing a table scan and not using the indexes as it is supposed to.

    If we modify the SP and include WITH RECOMPILE it starts using the index again.
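
    Roughly like this, just to show where the option sits (the proc and table names are made up):

    -- Hypothetical sketch: the only change is the WITH RECOMPILE option,
    -- which makes SQL Server build a fresh plan on every execution
    -- instead of reusing whatever plan happens to be in cache.
    ALTER PROCEDURE dbo.usp_GetTransactions
        @FromDate datetime,
        @ToDate   datetime
    WITH RECOMPILE
    AS
    BEGIN
        SELECT t.TransactionDate, t.Amount, t.StatusCode
        FROM   dbo.Transactions AS t
        WHERE  t.TransactionDate BETWEEN @FromDate AND @ToDate;
    END;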

    Does anyone have any idea or suggestion as to why this happens, and perhaps how it can be avoided?

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • My guess, and that is about all I can give since you haven't really provided a lot of info, would be one of two things: parameter sniffing or stale statistics.

    If you would like better answers, please read the second article I reference below in my signature block regarding performance problems.

    If you follow the instructions in that article, you will most likely get much better answers.

  • Thank you, a good article!

    Not a lot of it applies to my issue, I think (I may be wrong, as I don't know the exact cause of my problem) 🙂

    The table size and execution speed of the SP aren't very relevant, as it isn't a performance problem per se - it works fine when it's actually using the index. And it has happened with three or four different SPs with no apparent connection over the last year or two. Including an execution plan is impossible, as it only runs slow when executed by the application.

    I'll include some info, in case it helps in understanding the issue: The SP does a fairly straightforward select on a date and two other columns, filtered by two subselects from another table. The main table contains maybe 14 million rows, while the subselects work on indexed keys on a table with maybe a million rows. When the indexes are in use it consumes about 60-70 reads in total and takes no time at all to execute. It always uses the indexes if it is run manually from the query analyzer.

    When the application executes the SP with the exact same parameters, it uses in excess of 1 million reads and maybe 10 seconds to complete.

    I would assume that you may be right in suggesting stale statistics, but I'm pretty sure we rebuild indexes and statistics periodically. I'm tempted to suggest a bug in the SQL Server optimizer; it would be interesting to hear if anyone else has ever encountered a similar issue.

    I will check with our tech guys what we are doing in the index/statistics rebuild department and update this thread with that information. 🙂

    EDIT: You also mention parameter sniffing. You are suggesting that the optimizer may be incorrectly judging the incoming parameters, choosing a table scan instead of using the indexes, and has saved this as the compiled plan for the query?

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • I'm with Lynn on this one...

    In times when queries were running slower than usual, I've found it to be either the stats or the proc cache.

    Simply adding the recompile (as you said) or altering the SP clears the plan from the proc cache.
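
    For example (hypothetical proc name), marking it for recompilation throws the cached plan away without touching the definition:

    -- sp_recompile discards the cached plan; a new one is compiled
    -- the next time the procedure is executed.
    EXEC sp_recompile N'dbo.usp_GetTransactions';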

    Also make sure that your stats job runs after the index job... I had a company that ran the stats before...

    JL

  • I'm with Lynn too. It's either stats or parameter sniffing or both.

    To get the execution plans, you can query the dynamic management objects to pull it out of cache directly, even on the production system, with no issues. If you really need to, you can also use a server side trace to capture the plans, but you need to be very cautious with that approach since it will move lots of data around on your server.
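
    Something along these lines should do it (SQL 2005 and later; the proc name is just a placeholder):

    -- Grab the cached plan and cumulative stats for one procedure
    -- directly from the plan cache.
    SELECT qs.execution_count,
           qs.total_logical_reads,
           qs.total_elapsed_time,
           qp.query_plan
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE  st.dbid     = DB_ID()
      AND  st.objectid = OBJECT_ID('dbo.usp_GetTransactions');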

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Table definitions, index definitions and execution plans please.

    Does sound like parameter sniffing. There's a 3-part series on my blog (http://sqlinthewild.co.za/index.php/2007/11/27/parameter-sniffing/) on parameter sniffing. It may be worth a read.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/14/2009)
    Table definitions, index definitions and execution plans please. Refer to the 2nd article in Lynn's sig (How to post performance problems) for details on how to post them.

    I'd be happy to share them - but again, how is that relevant when the indexes are working just fine when the SP is run manually, and were also working fine in production until recently? The execution plan will no doubt show that a table scan is being performed.

    I will investigate our maintenance plans in regard to statistics and see if there can be anything fishy there. Thanks for all the replies so far 🙂

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • sql_lock (12/14/2009)
    Also make sure that your stats job runs after the index job... I had a company that ran the stats before...

    Doesn't matter if it's before or after, unless it's doing a sampled update. If it's a sampled update then doing it after the index rebuild is a very bad idea, as it reduces the accuracy of the stats. Doing a sampled update before a rebuild is just a waste of time (you update the stats sampled, then the rebuild updates the index stats with full scan anyway).

    My preference (and what I recommend) is that if you do have an explicit stats job, run it either before or after the index rebuild (completely immaterial which), but only update the stats that are not associated with the indexes that have been/will be rebuilt, and do the update with full scan.
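
    As a rough sketch (table and statistic names are placeholders), that means finding the statistics that don't belong to an index and updating those explicitly:

    -- Column statistics whose names don't match an index are the ones
    -- an index rebuild will not refresh.
    SELECT s.name
    FROM   sys.stats AS s
    WHERE  s.object_id = OBJECT_ID('dbo.Transactions')
      AND  NOT EXISTS (SELECT 1
                       FROM   sys.indexes AS i
                       WHERE  i.object_id = s.object_id
                         AND  i.name = s.name);

    -- Then update each of those with a full scan, e.g.:
    UPDATE STATISTICS dbo.Transactions (stat_TransactionDate) WITH FULLSCAN;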

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • one-1016367 (12/14/2009)
    I'd be happy to share them - but again, how is that relevant when the indexes are working just fine when the SP is run manually, and were also working fine in production until recently? The execution plan will no doubt show that a table scan is being performed.

    It may indicate that there's an opportunity for better indexes or a change in the query, or it might not be useful at all. But the best thing to do is give all the info, even if it later turns out that some of it wasn't necessary. The alternative, giving only what's required and having people ask for a bit more, a bit more, a bit more, etc., just drags things out.

    I'd like to see both execution plans, the one where it's slow and the one where it's fast. It's what's different between them that's interesting and should give a good hint as to exactly what's wrong.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/14/2009)
    Doesn't matter if it's before or after, unless it's doing a sampled update. If it's a sampled update then doing it after the index rebuild is a very bad idea, as it reduces the accuracy of the stats. Doing a sampled update before a rebuild is just a waste of time (you update the stats sampled, then the rebuild updates the index stats with full scan anyway).

    My preference (and what I recommend) is that if you do have an explicit stats job, run it either before or after the index rebuild (completely immaterial which), but only update the stats that are not associated with the indexes that have been/will be rebuilt, and do the update with full scan.

    Could it be that the connection settings for the application have changed? You said that the SP runs ok when executed manually....

    If you have different connection settings (SET options) in SSMS and your application, this could force SQL Server to use a different execution plan. One way to check this is to profile the SP as it is executed and extract the SET commands.
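
    One way to see it without a full trace is to compare the set_options attribute of the cached plans for that proc (hypothetical name below); two plans with different set_options values means the application and SSMS are compiling under different session settings:

    -- Each distinct combination of SET options gets its own cached plan.
    SELECT pa.value AS set_options,
           cp.usecounts,
           cp.plan_handle
    FROM   sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)        AS st
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE  st.objectid  = OBJECT_ID('dbo.usp_GetTransactions')
      AND  pa.attribute = 'set_options';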

  • Hi,

    I got a reply from our tech guys, who said that auto update statistics was set to true, and in addition to that they were running sp_updatestats once a day.

    Shouldn't that eliminate stale statistics? I read briefly about the sp_updatestats procedure, and I see that it has a parameter called @resample. Should that be on or off?
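
    For reference, the two forms I found are:

    -- Default: sampled update of statistics that have had row modifications
    EXEC sp_updatestats;

    -- Resample: each statistic is updated with its most recent sample rate,
    -- so stats last gathered with a full scan stay full scan
    EXEC sp_updatestats @resample = 'resample';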

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

    Today the SP runs fast and is yet again using the indexes, so it appears that the sp_updatestats run last night did the trick.

    So I guess it was indeed a case of stale statistics... But isn't it a bit weird that this happened, since we run sp_updatestats every night? The table has maybe 13 million rows and only grows by about 50,000 rows per day, so there isn't much change in the table data.

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • I don't think this is a stats problem. If it were, the proc would have run slow everywhere. Slow from some places and fast from others is not a symptom of stale statistics. It is, as Grant, Lynn and I initially suggested, parameter sniffing.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • So, should I do something along the lines of this?

    http://elegantcode.com/2008/05/17/sql-parameter-sniffing-and-what-to-do-about-it/
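
    That is, something roughly along these lines (names made up; this is the local-variable masking approach)?

    -- Copying the parameters into local variables hides their values from
    -- the optimizer, so it estimates from average density rather than the
    -- sniffed values. Statement-level OPTION (RECOMPILE) is the other
    -- common alternative.
    ALTER PROCEDURE dbo.usp_GetTransactions
        @FromDate datetime,
        @ToDate   datetime
    AS
    BEGIN
        DECLARE @From datetime, @To datetime;
        SET @From = @FromDate;
        SET @To   = @ToDate;

        SELECT t.TransactionDate, t.Amount, t.StatusCode
        FROM   dbo.Transactions AS t
        WHERE  t.TransactionDate BETWEEN @From AND @To;
    END;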

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • It's one of the solutions. I referred you several posts ago to a 3-part series that I wrote on parameter sniffing - http://sqlinthewild.co.za/index.php/2007/11/27/parameter-sniffing/

    Or, if you'd like some specific and targeted advice, post the query and, if you can get them, execution plans both when it's fast and when it's slow.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
