Hash Match

  • gbritton1 (5/1/2014)


    Before I answered this question, I read the TechNet documentation on hash joins.

    It reads, in part:

    The hash join has two inputs: the build input and probe input. The query optimizer assigns these roles so that the smaller of the two inputs is the build input.

    then later:

    The hash join first scans or computes the entire build input and then builds a hash table in memory.

    I'm having trouble reconciling the official description with the "correct" answer.

    The only correct answer is "the first input" - although there is some room for confusion, based on the interpretation of "first".

    Because of the use of the terms "hash match" (a plan operator, not a query construct) and "input" (not "table" or "data source", terms more associated with the query text), I conclude that "first" relates to the order in which the icons are rendered in the graphical execution plan (assuming that most languages are written and read top to bottom), the order in which operators are specified in both the (outdated) text representation and the XML representation of the plan, and the order in which the inputs are consumed when the plan executes - those orders all align.

    Look at an execution plan: if you see a Hash Match operator performing a join, the top input (the "build input") is consumed first, in order to build the hash table; the bottom input (the "probe input") is then consumed to probe it.

    The hash match algorithm for joins is most efficient when the build input is small and the probe input is large. The optimizer knows this - so if possible, the optimizer will rearrange the order of joins in order to get the smaller input at the top (and the optimizer can do that even in the case of outer or semi joins, because hash match supports all logical join types in both the right and the left variety).

    So the top ("first") input does not always correspond to the first-mentioned data source. Nor is there any guarantee that the smaller input will always be processed first. The optimizer can make a mistake: maybe statistics were out of date or skewed, or maybe one of the sources is itself the result of several joins and filters, which necessarily makes its statistics of lesser quality. Or maybe hints in the query have forced an order, as in the sketch below.
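
    For example (a minimal sketch; dbo.Small and dbo.Large are hypothetical tables, not anything from the question or its explanation), forcing both the physical join type and the written join order should pin the first-listed table as the top input, even when it is the larger one:

    -- Hypothetical tables, used only to illustrate the effect of hints.
    SELECT l.id, s.name
    FROM   dbo.Large AS l                -- deliberately listed first
    JOIN   dbo.Small AS s
      ON   s.id = l.small_id
    OPTION (HASH JOIN, FORCE ORDER);     -- force a hash join AND keep the written join order
    -- With FORCE ORDER, the optimizer cannot swap the inputs, so the
    -- first-listed (larger) table should end up as the build input.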

    The TechNet documentation quoted above describes the ideal behaviour of optimizer + execution engine. It even explicitly says so: "the query optimizer assigns these roles so that the smaller of the two inputs is the build input".

    The QotD focuses only on the execution engine, taking the compiled and optimized plan as a given. I was able to derive that from some wording of the question, but I do agree that this could have been clearer.

    (And for the really nitpicky people - there is one edge case scenario where the second input will actually, eventually, be used to build the hash table, but I have never knowingly witnessed it. When the first input is much larger than estimated, the operator will not have enough memory to store the hash table. In that case, the execution engine will usually decide to spill part of the hash table to tempdb - multiple times if needed. However, there is also an alternative mechanism called "role reversal": at runtime the execution engine decides that the second input may actually be smaller, and at that point the already-built part of the hash table is spilled to tempdb and the operator starts building a new hash table from the second input.

    Again, really edge case, and I have never seen it happen. If anyone has a script that repros this behaviour, I'd love to see it!)
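
    (If it helps anyone hunting for a repro: below is a sketch of an Extended Events session that should at least surface hash recursion/bailout warnings while you experiment. The event and action names are from memory, so verify them against sys.dm_xe_objects before relying on this.)

    -- Sketch only: double-check the event and action names in sys.dm_xe_objects.
    CREATE EVENT SESSION hash_spill_watch ON SERVER
    ADD EVENT sqlserver.hash_warning
        (ACTION (sqlserver.sql_text))
    ADD TARGET package0.ring_buffer;
    GO
    ALTER EVENT SESSION hash_spill_watch ON SERVER STATE = START;
    -- ...run the suspect queries, then inspect the ring buffer target...
    ALTER EVENT SESSION hash_spill_watch ON SERVER STATE = STOP;
    DROP EVENT SESSION hash_spill_watch ON SERVER;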


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (5/1/2014)

    The TechNet documentation quoted above describes the ideal behaviour of optimizer + execution engine. It even explicitly says so: "the query optimizer assigns these roles so that the smaller of the two inputs is the build input".

    The QotD focuses only on the execution engine, taking the compiled and optimized plan as a given. I was able to derive that from some wording of the question, but I do agree that this could have been clearer.

    Thanks for the clarification on this. At least now I have some reference for the different interpretations of the QOTD that we have seen in this thread.

    Brian

  • Given the options: first, larger, smaller, or second input, I still say that 'smaller input' is closest to the correct answer.

    On his MSDN blog, Craig Freedman goes into some detail about the hash join, and he states that the choice of which input is used for building the hash table is cost-based, with a preference for the smaller of the two tables. He was a member of the query execution team that developed the scan, seek, and join operators in SQL Server 2005, so he must know a good deal about the internals of how the algorithm works. It wouldn't be the first time that the official documentation for an application doesn't correspond exactly with the technical implementation, especially when it comes to something like this.

    ...Before a hash join begins execution, SQL Server tries to estimate how much memory it will need to build its hash table. We use the cardinality estimate for the size of the build input along with the expected average row size to estimate the memory requirement. To minimize the memory required by the hash join, we try to choose the smaller of the two tables as the build table...

    http://blogs.msdn.com/b/craigfr/archive/2006/08/10/687630.aspx

    I've experimented using the sample tables T1 (1,000 rows) and T2 (10,000 rows) provided in the blog post above, and regardless of the left or right position of each table in the JOIN clause or ON expression, SQL Server chooses the smaller T1 table for building the hash. The shape of the test is sketched below.
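
    (Rough sketch only - T1 and T2 are from Craig Freedman's post, but the join column name here is a placeholder rather than his exact schema.)

    -- Same join written both ways; only the order in the FROM clause differs.
    SELECT *
    FROM   T2                          -- larger table listed first
    JOIN   T1 ON T1.a = T2.a
    OPTION (HASH JOIN);                -- force a hash join, let the optimizer pick the build side

    SELECT *
    FROM   T1                          -- smaller table listed first
    JOIN   T2 ON T1.a = T2.a
    OPTION (HASH JOIN);

    -- In both actual plans, the smaller table (T1) appears as the top (build) input.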

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • PHYData DBA (5/1/2014)


    jackson.fabros (5/1/2014)


    Still unclear what the correct answer is. My textbook also mentions that "the hash table is built from the smaller input".

    Is this exactly what your textbook says?

    Let's hope not, since in a HASH JOIN the build input should be made from the smaller set and the probe input from the remainder.

    A "hash table" would be something else, but I'm not sure how it would have different inputs. :hehe:

    There are even HASH JOINs where the build and probe inputs are the same thing...

    I may have misquoted that piece of text; I actually don't have the book in front of me at the moment. I'm sure I was thinking of what you just said: "build input should be made from the smaller set".

    thanks for confirming

    From my course notes for SQLSkills IE1, module 10, Indexing Strategies (taught by Kimberly Tripp from sqlskills.com), slide 37, Join Strategies:

    "Hash join

    Two-phase operation (build, then probe): build table (smaller set) and probe table (larger set), allowing SQL to join extremely large sets - in MEMORY (can spill)"

    I just can't make this reconcile with the official answer.

    Simon.

  • Interesting Question,

    Hope this helps...

    Ford Fairlane
    Rock and Roll Detective

  • Good One, thanks.

    ---------------------------------------------------
    "There are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • PHYData DBA (5/1/2014)


    TomThomson (5/1/2014)


    PHYData DBA (5/1/2014)


    TomThomson (5/1/2014)


    gbritton1 (5/1/2014)


    Before I answered this question, I read the TechNet documentation on hash joins.

    It reads, in part:

    The hash join has two inputs: the build input and probe input. The query optimizer assigns these roles so that the smaller of the two inputs is the build input.

    then later:

    The hash join first scans or computes the entire build input and then builds a hash table in memory.

    I'm having trouble reconciling the official description with the "correct" answer.

    Without looking anything up, I answered smallest (i.e. build) input first. When I saw the answer, I constructed a test (using SQL Server 2012) to see what happened. Given one table with 65537 rows and another with 393222 rows (the sizes of the test tables I built - admittedly not very big, but big enough to refute a silly "correct" answer), I found that whether I wrote the join so that the small table was the first or second referenced in the select clause, and whether the small table was the first or second listed in the from clause, the actual (not estimated, but actual) execution plan took the smaller table as the first (i.e. build) table.

    So I don't just have trouble reconciling the official correct answer with the documentation; I have clear and solid experimental evidence that the officially correct answer is just plain wrong.

    Tom - see my earlier post... the answer does not need to be wrong to match with your tests so far (run a test using the HASH JOIN hint). When writing a hash join using the hint, you put the smaller table to the right to make it the build table. Try your experiment with the hash join hint and see what happens.

    Your experiment proves what the Query Optimizer does, but not how it is done.

    Of course, I would not know any of this if I had not had to work, years ago, with an execution plan that for some reason had chosen the larger (2 million record) set as the build input.

    I suppose it depends on what the answer means by "first". If it means "the table which is processed first in order to build the hash table against which rows from the other table will be tested", then "that's first" is a pointless tautology, and it's not at all clear to me that any sane person can take it as meaning that. If, on the other hand, it means that the table used as the build table can be any of the tables involved, regardless of size - which is the only sensible interpretation of the words, given that (a) "is the one that is used first, used first?" is not a sensible question and (b) the explanation states "Size does not matter for this operator" - then it seems clear both that the BOL documentation quoted by gbritton1 indicates that the "correct" answer is wrong for SQL 2008 R2 (not a terribly interesting argument, given how often BOL gets it wrong, but in fact - see below - BOL didn't go wrong here) and that the experiment I undertook after seeing this crazy answer proved conclusively that the "correct" answer is wrong for SQL 2012. It may of course be different for SQL 2014, but there are several releases currently in full support and we can see that the answer is wrong for the majority of them (I ran the experiment for 2008 R2 as well before composing this reply, and this is not one of the places where BOL got it wrong).

    So we have to work out what Jason (author of the referenced document) meant by "first (top)" and "second (bottom)". Jason doesn't usually get things wrong. I think he was referring to the layout of the operators in the standard query plan display provided by SSMS - certainly that's what I thought he meant when I read it a month or two ago. There's no imaginable way that today's question could be interpreted so that "first" meant that, so I think that Steve has misunderstood what Jason was saying and produced a question and answer based on that misunderstanding.

    It seems you are once again ranting and/or trolling for something you are not going to find here.

    Have fun with that.

    I trust you enjoyed demonstrating that you are too arrogant to be capable of rational and polite debate.

    The only way to make this question seem correct is to assume that "first" in the answer means "occurring nearest to the top left in the query plan displayed (as actual query plan) by SSMS". That is true even in the event of role reversal, since the data engine starts by processing the input picked as build by the optimiser and will not switch before a spill to disc, so the optimiser's chosen build input is handled before access to the other input (the actual build input) begins. If you don't understand that, I suggest you read the page again, carefully. The rough shape of the test I described above is sketched below, for anyone who wants to repeat it.
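
    (The table and column names below are placeholders, not my original script. The actual plan's XML also records which keys belong to the build side, so you don't have to rely on the graphical layout alone.)

    -- Placeholder tables: build one small and one large table, then run the
    -- same join with the tables listed in either order in the FROM clause.
    SET STATISTICS XML ON;               -- return the actual plan as XML

    SELECT COUNT(*)
    FROM   dbo.BigTable   AS b           -- larger table listed first
    JOIN   dbo.SmallTable AS s
      ON   s.id = b.small_id
    OPTION (HASH JOIN);

    SELECT COUNT(*)
    FROM   dbo.SmallTable AS s           -- smaller table listed first
    JOIN   dbo.BigTable   AS b
      ON   s.id = b.small_id
    OPTION (HASH JOIN);

    SET STATISTICS XML OFF;
    -- In the plan XML, the Hash Match operator lists its build-side columns
    -- under HashKeysBuild and its probe-side columns under HashKeysProbe.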

    Tom

  • interesting question..

    thanks Steve.

  • PHYData DBA (5/1/2014)


    See lots of discussion about why the right answer is...

    Here is a reference to the right answer from TechNet BOL.

    Looks like the answer to this question has not changed for at least 15 years...

    http://technet.microsoft.com/en-us/library/aa237090(v=SQL.80).aspx

    For any joins, use the first (top) input to build the hash table and the second (bottom) input to probe the hash table. Output matches (or nonmatches) as dictated by the join type. If multiple joins use the same join column, these operations are grouped into a hash team.

    😎

    Thanx 4 the reference.

    Thanks & Best Regards,
    Hany Helmy
    SQL Server Database Consultant

    Never used this one before; it's always good to know something new here.

    Thanks & Best Regards,
    Hany Helmy
    SQL Server Database Consultant

  • I think a much better question would be, "What determines what will be the first (top) table (according to books online) in a hash match join"? 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (5/3/2014)


    I think a much better question would be, "What determines what will be the first (top) table (according to books online) in a hash match join"? 😉

    There's no reliable way to determine this. It should be the smaller input most of the time, but sometimes the optimizer doesn't get that one right, and that usually means the DBA's phone rings with a complaint.

  • Steve Jones - SSC Editor (5/5/2014)


    Jeff Moden (5/3/2014)


    I think a much better question would be, "What determines what will be the first (top) table (according to books online) in a hash match join"? 😉

    There's no reliable way to determine this. It should be the smaller input most of the time, but sometimes the optimizer doesn't get that one right, and that usually means the DBA's phone rings with a complaint.

    Yep... that's my whole point. It would be wonderful to see a QOTD where the correct answer is, "It Depends". 😛

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
