Filtered Out Results Still Interfering w/Query

  • rbarryyoung (10/24/2008)


    ...if that might cause a similar data error or change the contents of the result set in any way.

    Couldn't this cause a situation where an execution plan that caused no errors / evaluated correctly one minute would be incorrect (and return incorrect information) a minute later, after more data was added to the table?

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • I don't follow?

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Yeah, I rewrote that like 8 times, I'm not even sure I understood it by the time I finished.

    In any case, you're saying that the optimizer can evaluate things out of order as long as they return the same result, even things such as CASE statements. However, just because they returned the correct result at the time the execution plan was created doesn't mean they're going to return the correct result after rows have been added to the table... at least not until the execution plan gets re-created. Or at least it seems that there would be a loophole there.

    Is it sometimes necessary to drop/recreate execution plans because they return the wrong information? (I know they can become inefficient, but can they become outright wrong?)

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • What transformations are or are not valid has nothing to do with what data is actually in the tables, only with what data could possibly be in the tables. Since that does not change unless the table definition changes (or the query changes), there should be no "loophole" with respect to validity.

    On the other hand, which of those valid possible transformations the optimizer actually uses is based on the data content of the table (specifically the indexes and statistics). Since, as you have pointed out, this can change, any saved query plan may no longer be optimal by the time the query is actually run. It will still be correct, but it could be terribly slow, and this, of course, is where "parameter-sniffing" type problems come from.
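
    A minimal sketch of that performance side (the procedure and table names below are hypothetical, not from this thread): a plan cached for one parameter value stays correct for every other value, but it may be a poor fit for them, and OPTION (RECOMPILE) or sp_recompile can be used to force a fresh plan.

    -- Hypothetical example: the cached plan is always correct, but it was
    -- costed for whichever @Region value was sniffed at compile time.
    create procedure dbo.GetOrdersByRegion @Region varchar(10)
    as
    begin
        select OrderID, OrderDate
        from dbo.Orders              -- hypothetical table
        where Region = @Region
        option (recompile);          -- rebuild the plan on every execution,
                                     -- using statistics for the current value
    end
    go
    -- Or mark existing cached plans that reference the table for recompilation:
    exec sp_recompile N'dbo.Orders';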

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Gotcha. Thanks for the explanation.

    Seth Phelabaum


    Consistency is only a virtue if you're not a screwup. 😉

    Links: How to Post Sample Data :: Running Totals :: Tally Table :: Cross Tabs/Pivots :: String Concatenation

  • rbarryyoung (10/24/2008)


    So, let me check ... yes, the SOP way to address this in SQL Server is to hide it behind a CASE function:

    declare @wh varchar(2)
    set @wh='10';

    Select obitno, oaorsc
    From hsdet inner join hshed on obhssq=oahssq
    Where oainvd>20050100
      And @wh = (Select Case when oainvd>20050100
                             Then convert(int,oaorsc)
                             Else NULL End)

    Hmm, you might be able to improve that some ... 🙂

    The case statement worked fine.
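
    For anyone landing here later, a minimal self-contained sketch of why the guard matters. The table variable and sample values below are made up to mimic the oainvd/oaorsc situation; they are not from the original tables.

    -- Repro of the failure mode: the 'ABC' row is outside the date range, yet
    -- the unguarded query can still fail, because nothing forces the optimizer
    -- to apply the date filter before evaluating the convert.
    declare @t table (oainvd int, oaorsc varchar(10));
    insert @t values (20050201, '10'),   -- in range, numeric
                     (20040101, 'ABC');  -- out of range, non-numeric

    -- May raise "Conversion failed when converting the varchar value 'ABC'...":
    -- select * from @t where oainvd > 20050100 and convert(int, oaorsc) = 10;

    -- The CASE guard from the post above ties the conversion to the filter,
    -- so rows outside the date range never reach the convert:
    select *
    from @t
    where oainvd > 20050100
      and 10 = case when oainvd > 20050100
                    then convert(int, oaorsc) else null end;

    -- On SQL Server 2012 and later, try_convert(int, oaorsc) = 10 is another
    -- option: it returns NULL (no error) for values that don't convert.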
