Performance degradation with a particular stored procedure

  • Hello,

    I'm attempting to make sense of a performance issue I'm seeing with one of our applications.  Basically, there's a screen in the app that displays equipment records, with a search filter that our users can apply.  Throughout the day, several users run searches on this equipment screen.  Some of these searches might return 2 records, some might return 2,002.

    As the day progresses, searches on this screen that performed reasonably well earlier become very sluggish and start timing out.

    The underlying SQL that appears to be performing erratically is a call to the system sp_cursoropen procedure.  The select statement for this equipment screen and the where clause containing the filter values are passed into this sproc call as the statement argument.

    I have noticed that rebuilding indexes on a couple of the tables referenced in that query statement can improve performance temporarily.  Based on my limited SQL knowledge, my current assumption is that the issue is related to parameter sniffing and the caching of bad execution plans, which are then invalidated when the indexes are rebuilt.
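
    One way to sanity-check that assumption (a minimal sketch; the LIKE filter on "Equipment" is just a placeholder for text that actually appears in the statement) is to look for the cursor's statement in the plan cache and see how heavily a single plan is being reused:

    SELECT cp.plan_handle,
           cp.usecounts,   -- heavy reuse of one plan is consistent with sniffing
           cp.objtype,
           st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE N'%Equipment%'                 -- hypothetical filter; adjust to the real statement
      AND st.text NOT LIKE N'%dm_exec_cached_plans%'; -- exclude this query itself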

    At this point, I'm looking for any suggestions on how one might generally resolve such an issue.  The application comes from a third party, and we are limited in our ability to customize it.

    As a short-term band-aid, is sp_recompile, when run against a table, a preferable and less disruptive way to invalidate a bad execution plan than rebuilding indexes during work hours?
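
    For reference, the call I have in mind is just this (sketch; dbo.Equipment stands in for one of the tables the statement actually touches):

    -- Marks every cached plan that references the table for recompile on its
    -- next use; needs only a brief schema-modify lock, unlike an index rebuild.
    EXEC sys.sp_recompile N'dbo.Equipment';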

    Does the behavior I'm describing sound like a bad execution plan that might be attributable to parameter sniffing, or might there be other possibilities I should be looking to explore?

    Do queries with a greater number of joins have a greater likelihood of ending up with a bad execution plan?

    I greatly appreciate the help!

  • tarr94 wrote:

    I have noticed that rebuilding indexes on a couple of the tables referenced in that query statement can improve performance temporarily.

    Rebuilding an index causes a recompile of anything that uses it.  It's likely the recompile that's helping, not the rebuilds themselves.
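
    You can verify that for yourself (sketch; the LIKE filter is a stand-in for text from your actual statement): if creation_time changes right after a rebuild, the rebuild is what forced the recompile.

    SELECT qs.creation_time,       -- when the current plan was compiled
           qs.execution_count,     -- executions since that compile
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
           st.text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE N'%Equipment%';  -- hypothetical filter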

    tarr94 wrote:

    Hello,

    At this point, I'm looking for any suggestions on how one might generally resolve such an issue. The application is by a third-party, and we are limited in our ability to customize it.

    You paid good money for the application, correct?  Get those boogers on the horn and have them fix it or tell you how to fix it.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • tarr94 wrote:

    Do queries with a greater number of joins have a greater likelihood of ending up with a bad execution plan?

    I greatly appreciate the help!

    Steve covered the essentials, but just to clarify the above:

    The answer is yes and no; likelihood is always subjective when it comes to parameter sniffing.  More joins give the optimizer more plan choices, and therefore more ways to settle on a bad one, but whether a sniffed parameter actually hurts depends on how skewed your data is.

    Once, on a system I worked on, that setting was forgotten when we did an upgrade to a newer version.  A simple query (select XXX from table where keyfield = parameter) would work fine the very first time it executed (the first batch), then on the second batch it would misbehave and take hours.  The reason was that on the second batch (running after a restart of the instance), the very first value used in the query (this was a loop-based application) retrieved 40% of the records in the table (millions of rows), so the engine did a table scan for that one... and then reused that plan for EVERY single one of the following 9,000 keys we had to process.
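
    The pattern looked roughly like this (hypothetical names; dbo.SomeTable and keyfield are stand-ins):

    -- First execution compiles and caches a plan for @key = 1, a value matching
    -- ~40% of the table, so the optimizer picks a scan; every later @key value
    -- then reuses that same scan plan.
    DECLARE @key int = 1;

    EXEC sys.sp_executesql
         N'SELECT SomeColumn FROM dbo.SomeTable WHERE keyfield = @key',
         N'@key int',
         @key = @key;

    -- One blunt mitigation: OPTION (RECOMPILE) compiles a fresh plan for each
    -- execution, trading compile CPU for a plan that fits the actual value.
    EXEC sys.sp_executesql
         N'SELECT SomeColumn FROM dbo.SomeTable WHERE keyfield = @key OPTION (RECOMPILE)',
         N'@key int',
         @key = @key;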

  • Thanks for the replies, Jeff and Frederico!

    Unfortunately, we're in sort of a weird spot as far as going back to the vendor.  Since the data that populates the equipment screen is table-driven, our company decided some years ago to update those tables and add our own additional columns to the screen.  This has brought real value to our users, especially since the out-of-the-box equipment screen is pretty limited in the columns it displays.  Unfortunately, as we've added more columns, the screen has gotten slower, and since the vendor doesn't really sanction these sorts of customizations, they will almost certainly brush off our requests to look at performance, since their out-of-the-box version runs noticeably faster.

    Which is unfortunate, since we haven't added any new customizations in about a year.  We did recently upgrade to a newer version of their software, though, and that appears to be when performance took a noticeable turn for the worse.  Hence I'm trying to do everything within my power to improve performance, since the users have come to depend on these customizations so much and the vendor likely won't assist in light of them.

  • That being the case, there is nothing preventing you from changing the stored procedure to behave better.  It may or may not be possible, but I have seen many cases of vendor-provided code that could easily be changed (by someone experienced) to do the same thing in a fraction of the time the original took.

    So if you wish to go that route, that would be an option, although you may not be able to post their code here directly, which lowers the chances of getting help with particular aspects of it.

  • Hey Frederico,

    The slow query is a call to sp_cursoropen, a system sproc I shouldn't be modifying.

    I'm thinking the SQL statement that gets passed into it is what's causing the issues.  It's just a select statement with several joins to different tables and a where clause whose values depend on the user-entered parameter values.  We do have some ability to adjust it, but unfortunately, I haven't had any luck speeding it up so far.  Running it to see the actual execution plan, I'm not getting any missing-index suggestions from SQL Server.
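
    For context, the call shape is roughly the following (illustrative only; the statement text and the scroll/concurrency option values are not the app's actual call):

    DECLARE @cur int, @rows int;

    EXEC sys.sp_cursoropen
         @cur OUTPUT,                                             -- cursor handle
         N'SELECT EquipID FROM dbo.Equipment WHERE CustID = 42',  -- the app's SELECT goes here
         2,             -- scrollopt: DYNAMIC (illustrative)
         1,             -- ccopt: READ_ONLY (illustrative)
         @rows OUTPUT;  -- row count of the result, if known

    EXEC sys.sp_cursorfetch @cur, 2, 0, 100;  -- fetchtype 2 = NEXT; fetch 100 rows
    EXEC sys.sp_cursorclose @cur;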

    One thing I tried was to adjust an inner join that looks something like this:

    SELECT
    (...)
    LocationCustomerName = Location.CustomerName
    (...)
    FROM
    (...)
    INNER JOIN Customer ON Customer.CustID = Equipment.CustID
    INNER JOIN Customer AS Location ON Location.CustID = ISNULL(Equipment.LocID, Equipment.CustID)
    (...)

    And changing it to remove the ISNULL function from the JOIN, using ISNULL in the SELECT instead:

    SELECT
    (...)
    LocationCustomerName = ISNULL(Location.CustomerName, Customer.CustomerName)
    (...)
    FROM
    (...)
    INNER JOIN Customer ON Customer.CustID = Equipment.CustID
    LEFT JOIN Customer AS Location ON Location.CustID = Equipment.LocID
    (...)

    But this appeared to perform worse.  I'd have thought removing the function from the join clause would speed things up a bit, but I was incorrect.  Seems I need to do a bit more research there.
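
    One more variation on my list (a sketch, assuming a minimal Equipment(EquipID, CustID, LocID) / Customer(CustID, CustomerName) shape in the dbo schema) is to remove the conditional from the join entirely with a UNION ALL, so each branch has a plain equality join the optimizer can seek on:

    SELECT
        e.EquipID,
        LocationCustomerName = loc.CustomerName
    FROM dbo.Equipment AS e
    INNER JOIN dbo.Customer AS c   ON c.CustID   = e.CustID
    INNER JOIN dbo.Customer AS loc ON loc.CustID = e.LocID
    WHERE e.LocID IS NOT NULL  -- rows with an explicit location
    UNION ALL
    SELECT
        e.EquipID,
        LocationCustomerName = c.CustomerName  -- fall back to the owning customer
    FROM dbo.Equipment AS e
    INNER JOIN dbo.Customer AS c ON c.CustID = e.CustID
    WHERE e.LocID IS NULL;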
