Slow query - client or server issue

  • leehbi - Tuesday, November 28, 2017 6:00 AM

    Plans attached. I will explore Extended Events. Good tip about SQL Sentry Plan Explorer.

    Those don't actually help. At least the "slow query" version appears to be an "estimated plan" rather than an "actual plan". While there is usually little difference, there are enough tables in this bad boy that there could be a huge number of differences. We need to see the actual plans, please.
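    One way to capture an actual (post-execution) plan from T-SQL is SET STATISTICS XML ON. A minimal sketch, with a placeholder query standing in for the real one:

    -- Capture the ACTUAL plan, including runtime row counts, rather than
    -- the estimated one. The SELECT below is a stand-in for the slow query.
    SET STATISTICS XML ON;

    SELECT COUNT(*) FROM sys.objects;   -- placeholder for the real query

    SET STATISTICS XML OFF;
    -- In SSMS, "Include Actual Execution Plan" (Ctrl+M) does the same thing per session.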

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • The 2 plans came via sp_who - I've run the query to get the actual plan; see attached.
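    For pulling the plan of an in-flight query, the DMVs are one route; note they return the cached plan without actual row counts, so they won't settle the estimated-vs-actual question above. A hedged sketch, with a placeholder session_id:

    -- Grab the plan for a currently executing session (no runtime counts).
    SELECT  r.session_id,
            r.status,
            qp.query_plan
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
    WHERE r.session_id = 52;            -- placeholder session id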

  • Grant Fritchey - Tuesday, November 28, 2017 4:30 AM

    Jason A. Long - Monday, November 27, 2017 11:47 AM

    I'll 2nd that... Plan Explorer really is an awesome tool that provides a ton more information than SSMS, and it presents that information in a far more usable format (IMO).

    I am a little torn on the "far more usable format" bit. It can be. However, since it doesn't, by default, display all properties, we're dependent on them picking the right ones to display. I think they largely nail it, but I'll bet there are misses in there. It's my one critique since we are so dependent on those properties when you really start working with plans.

    No arguments from me on anything you've said here. I suppose I should have said SSMS's execution plan view, rather than SSMS...
    I never meant to imply that you couldn't use SSMS to roll your own solution using T-SQL, DMOs and Extended Events... That said, given the amount of information that PE provides, creating a "roll your own" solution that beats it wouldn't be a trivial undertaking.
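    Since Extended Events keep coming up as the roll-your-own route, here is a minimal sketch of a session that captures actual plans. The session name, file target and database filter are assumptions, and query_post_execution_showplan is expensive, so filter it tightly and stop the session when done:

    CREATE EVENT SESSION CaptureActualPlans ON SERVER
    ADD EVENT sqlserver.query_post_execution_showplan
        (WHERE sqlserver.database_name = N'MyDatabase')      -- hypothetical filter
    ADD TARGET package0.event_file (SET filename = N'CaptureActualPlans.xel');

    ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = START;
    -- ...reproduce the slow query, then:
    ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = STOP;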

  • leehbi - Tuesday, November 28, 2017 7:56 AM

    The 2 plans came via sp_who - I've run the query to get the actual plan; see attached.

    Was that a good, fast run or a slow run?

    --Jeff Moden


  • Very lucky to be able to adjust the underlying data structure (data warehouse) - by reducing the plan complexity, this problem looks to have gone away. Definitely an odd one - I would not like to be a DBA with a fixed OLTP system.

  • Are your statistics up to date?
    I see it estimated 1.5M rows but the actual was 4.2M rows.
    With the memory grant sized for the low estimate, the sort then spilled to tempdb.
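    To check whether those statistics are stale, sys.dm_db_stats_properties shows when they were last updated and how many rows have changed since. A hedged sketch, with dbo.BigTable standing in for the anonymized tables in the plan:

    -- How old are the stats, and how much has the table changed since?
    SELECT  s.name,
            sp.last_updated,
            sp.rows,
            sp.rows_sampled,
            sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'dbo.BigTable');          -- placeholder table

    UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;            -- heavier, but most accurate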

  • I spotted that, but the cardinality issue was consistent in each fast and slow query plan. I didn't expect it to cause the variance in performance, especially as the data is very similar in size. The plans are now simpler and more consistent, which is a good result.

  • leehbi - Thursday, November 30, 2017 2:15 AM

    Very lucky to be able to adjust the underlying data structure (data warehouse) - by reducing the plan complexity, this problem looks to have gone away. Definitely an odd one - I would not like to be a DBA with a fixed OLTP system.

    What adjustments did you actually make?

    --Jeff Moden

  • I denormalised some of the underlying tables - this resulted in a much simpler plan. The lesson I learnt here is that SQL Server is great at handling our data most of the time, but when it comes to big tables we need to give it a hand.

  • leehbi - Tuesday, December 5, 2017 3:51 AM

    I denormalised some of the underlying tables - this resulted in a much simpler plan. The lesson I learnt here is that SQL Server is great at handling our data most of the time, but when it comes to big tables we need to give it a hand.

    Oh, be careful ... I've found that it has nothing to do with big tables... it usually has to do with bad code: missing indexes, non-SARGable code, returning too much data behind the scenes, accidental many-to-many joins covered up by a DISTINCT or GROUP BY, people trying to do it all in a single monster query because they incorrectly think "Set Based" means "all in one query", accidental RBAR in the form of rCTEs, non-iTVF functions, joining to calculated columns in views, etc., etc., but no problems with it handling big tables.

    To be honest, you've probably created a future "world of hurt" by denormalizing tables to make your query work.
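    To illustrate the non-SARGable point above, a hedged example against a hypothetical Orders table: wrapping the column in a function hides it from index seeks, while a range predicate on the bare column does not.

    -- Non-SARGable: the function on the column forces a scan of every row.
    SELECT OrderID
    FROM dbo.Orders                     -- hypothetical table
    WHERE YEAR(OrderDate) = 2017;

    -- SARGable rewrite: the same filter as a range on the bare column,
    -- which can use an index seek on OrderDate.
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate >= '20170101'
      AND OrderDate <  '20180101';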

    --Jeff Moden

  • 3 obvious problems from the plans:

    1) Key lookup on a 240k-row recordset (Object3). Certainly the wrong choice of clustered index.
    2) 4.3 million rows scanned from Object4.Index4 only to be filtered down to the 60k rows relevant to the Hash Join. Smells like an implicit conversion or a function applied to the joined column (see the sketch below).
    3) 11GB is read from Object2.Index6, followed by some kind of data transformation preparing for a join to Object6, which leaves only 3.6MB after the matching. Seems like the same kind of error as in 2).
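    The implicit-conversion shape suspected in 2) and 3) typically looks like the hedged sketch below (tables and columns are hypothetical): when the joined columns' types differ, SQL Server wraps the lower-precedence side in CONVERT_IMPLICIT, which defeats the index seek and scans every row into the hash join.

    -- Problem shape: o.CustomerCode is VARCHAR(20), c.CustomerCode is NVARCHAR(20).
    -- The VARCHAR side is implicitly converted to NVARCHAR, so its index
    -- cannot be seeked and the whole table is scanned.
    SELECT o.OrderID
    FROM dbo.Orders    AS o             -- hypothetical tables
    JOIN dbo.Customers AS c
      ON o.CustomerCode = c.CustomerCode;

    -- Fix: align the column types, or CAST the non-indexed side of the
    -- comparison - never the indexed column itself.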

    BTW, the fast and slow plans look identical.
    I'd suspect that fast runs happen when the cached data from previous runs is still sitting in memory, so there is no need to re-read 12GB of data from disk.
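    One way to test that warm-cache theory is to compare physical vs. logical reads between a "fast" and a "slow" run of the same query. A hedged sketch:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    -- ...run the problem query here...
    -- A slow run with heavy physical reads and a fast run with only
    -- logical reads points at the buffer cache, not the plan.

    -- Test box only: force a cold cache so the next run reads from disk.
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;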

    _____________
    Code for TallyGenerator

  • leehbi - Tuesday, December 5, 2017 3:51 AM

    I denormalised some of the underlying tables - this resulted in a much simpler plan. The lesson I learnt here is that SQL Server is great at handling our data most of the time, but when it comes to big tables we need to give it a hand.

    What do you mean by "big"? This morning I'm working with a table that has over 12,000,000,000 rows. Performance is fine.

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and Using APPLY (I) and (II) - Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden
