Temp Table Vs Table Variable

  • GilaMonster (12/7/2011)


    Dev (12/7/2011)


    The optimizer can create statistics on columns. It uses the actual row count for generating the execution plan.

    Estimated row count, not actual. The optimiser doesn't go off and count the actual number of rows that will be affected. It estimates.

    The bold statement is applicable to #tbl, not @tbl.
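
    A quick way to see the estimate-versus-actual distinction for yourself (a minimal sketch; the table and column names are invented, and you need to inspect the actual execution plans to see the numbers):

        CREATE TABLE #tbl (val INT);

        -- 100 rows in the table at first compile
        INSERT INTO #tbl SELECT TOP (100) 1 FROM sys.all_objects;

        SELECT val FROM #tbl WHERE val = 1;  -- estimated rows ~100, from stats

        -- add rows, but stay below the recompilation threshold
        INSERT INTO #tbl SELECT TOP (50) 1 FROM sys.all_objects;

        SELECT val FROM #tbl WHERE val = 1;  -- if the cached plan is reused, the
                                             -- estimate is still ~100 while the
                                             -- actual row count is 150
        DROP TABLE #tbl;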

  • Dev (12/7/2011)


    GilaMonster (12/7/2011)


    Dev (12/7/2011)


    The optimizer can create statistics on columns. It uses the actual row count for generating the execution plan.

    Estimated row count, not actual. The optimiser doesn't go off and count the actual number of rows that will be affected. It estimates.

    The bold statement is applicable to #tbl, not @tbl.

    So is mine!

    P.S. Are you interested in learning, or are you intent on proving you're right?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/7/2011)


    Dev (12/7/2011)


    Yes, but it will recompile only if the number of rows crosses the recompilation threshold; otherwise it will not.

    And tell me, what are the recompilation thresholds for a temporary table? Since you appear to know more about this than Ninja and I do...

    pls wait... let me google it... :hehe:

  • Dev (12/7/2011)


    GilaMonster (12/7/2011)


    Dev (12/7/2011)


    Yes, but it will recompile only if the number of rows crosses the recompilation threshold; otherwise it will not.

    And tell me, what are the recompilation thresholds for a temporary table? Since you appear to know more about this than Ninja and I do...

    pls wait... let me google it... :hehe:

    Recompilation threshold (RT)

    The recompilation threshold for a table partly determines the frequency with which queries that refer to the table recompile. RT depends on the table type (permanent versus temporary), and the number of rows in the table (cardinality) when a query plan is compiled. The recompilation thresholds for all of the tables referenced in a batch are stored with the query plans of that batch.

    RT is calculated as follows. (n refers to a table's cardinality when a query plan is compiled.)

    • Permanent table
      • If n <= 500, RT = 500.
      • If n > 500, RT = 500 + 0.20 * n.

    • Temporary table
      • If n < 6, RT = 6.
      • If 6 <= n <= 500, RT = 500.
      • If n > 500, RT = 500 + 0.20 * n.

    • Table variable
      • RT does not exist. Therefore, recompilations do not happen because of changes in the cardinalities of table variables.
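
    A rough sketch of how those thresholds play out for a temp table (names invented; watch the SQL:StmtRecompile event in Profiler or Extended Events to see whether a recompile actually fires):

        CREATE TABLE #t (id INT PRIMARY KEY, val INT);

        -- 600 rows, so at compile time n = 600 and RT = 500 + 0.20 * 600 = 620
        INSERT INTO #t (id, val)
        SELECT TOP (600) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 1
        FROM sys.all_objects;

        SELECT val FROM #t WHERE val = 1;          -- plan compiled with n = 600

        INSERT INTO #t (id, val) VALUES (601, 1);  -- one change, far below 620

        SELECT val FROM #t WHERE val = 1;          -- no cardinality-based recompile

        DROP TABLE #t;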

  • GilaMonster (12/7/2011)


    Dev (12/7/2011)


    GilaMonster (12/7/2011)


    Dev (12/7/2011)


    The optimizer can create statistics on columns. It uses the actual row count for generating the execution plan.

    Estimated row count, not actual. The optimiser doesn't go off and count the actual number of rows that will be affected. It estimates.

    The bold statement is applicable to #tbl, not @tbl.

    So is mine!

    P.S. Are you interested in learning, or are you intent on proving you're right?

    I don't argue over trivial points; proving someone wrong is not my objective. Even if you prove me wrong, I will learn something and remember it forever.

  • That's part of the recompilation thresholds, not the whole story. You missed the case where the table is empty to start (which a temp table would be). Read a couple more paragraphs of that whitepaper.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    SQL Server 2005 detects this special case, and handles it differently. In SQL Server 2005, the recompilation threshold for such a table or indexed view is 1. In other words, even the insertion of one row in T [the table] can cause a recompilation. When such a recompilation happens, S [the statistics object] is updated, and the histogram for S is no longer NULL. After this recompilation, however, the usual rule for the recompilation threshold (500 + 0.20 * n) is followed.

    Based on this explanation, if I declare a #tbl in an SP (where it would obviously start empty) and then populate it, it will always collect stats and recompile. Once the session closes, the table is dropped (or the SP can include an explicit DROP TABLE statement). A #tbl would be optimal only if the session is long and the #tbl is used further within the SP.

    I will certainly count that as a point against #tbl.
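
    A minimal sketch of the scenario Dev describes (the procedure name is invented for illustration):

        CREATE PROCEDURE dbo.DemoTempStats             -- hypothetical name
        AS
        BEGIN
            CREATE TABLE #tbl (id INT, val INT);       -- compiled against 0 rows

            INSERT INTO #tbl (id, val) VALUES (1, 10); -- empty-table RT = 1: even
                                                       -- this single row can force
                                                       -- a recompile

            SELECT id FROM #tbl WHERE val = 10;        -- recompiled with statistics,
                                                       -- so the estimate reflects
                                                       -- the real cardinality
        END;                                           -- #tbl is dropped when the
                                                       -- procedure's scope ends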

  • Dev (12/7/2011)


    And that's precisely my point. Plus, it can cache complete table in memory.

    OH FOR THE LOVE OF . . . . !

    I've had that argument so many times that I don't find it funny any more. . .


    Forever trying to learn
    My blog - http://www.cadavre.co.uk/
    For better, quicker answers on T-SQL questions, click on the following...http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better, quicker answers on SQL Server performance related questions, click on the following...http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • Cadavre (12/7/2011)


    Dev (12/7/2011)


    And that's precisely my point. Plus, it can cache complete table in memory.

    OH FOR THE LOVE OF . . . . !

    I've had that argument so many times that I don't find it funny any more. . .

    Welcome to the discussion!

    But that statement is for a table with 2 rows. 😀

  • Dev (12/7/2011)


    Welcome to the discussion!

    But that statement is for a table with 2 rows. 😀

    The number of rows is irrelevant, the statement is incorrect.


    Forever trying to learn
    My blog - http://www.cadavre.co.uk/
    For better, quicker answers on T-SQL questions, click on the following...http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better, quicker answers on SQL Server performance related questions, click on the following...http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • Cadavre (12/7/2011)


    Dev (12/7/2011)


    Welcome to the discussion!

    But that statement is for a table with 2 rows. 😀

    The number of rows is irrelevant, the statement is incorrect.

    :w00t: Grant confirms the same by adding 'Temp tables can stay in memory too.'

  • Dev (12/7/2011)


    Cadavre (12/7/2011)


    Dev (12/7/2011)


    Welcome to the discussion!

    But that statement is for a table with 2 rows. 😀

    The number of rows is irrelevant, the statement is incorrect.

    :w00t: Grant confirms the same by adding 'Temp tables can stay in memory too.'

    Yes, but, you were differentiating between the two types at the time. I'm saying they both can cache completely to memory. It's not a differentiator. They also both make an entry into tempdb, therefore both are writing to disk, again, not a differentiator.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
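
    One way to see Grant's point that both objects make an entry in tempdb (a sketch; run it as a single batch, and note that the table variable's name is system-generated, e.g. something like #A1B2C3D4):

        DECLARE @tv TABLE (id INT);   -- table variable
        CREATE TABLE #tt (id INT);    -- temp table

        -- both appear as user tables in tempdb; the temp table keeps its
        -- '#tt...' name, the table variable gets a system-generated one
        SELECT name, type_desc, create_date
        FROM tempdb.sys.objects
        WHERE type = 'U'
          AND create_date >= DATEADD(MINUTE, -1, GETDATE());

        DROP TABLE #tt;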

  • Dev (12/7/2011)


    Cadavre (12/7/2011)


    Dev (12/7/2011)


    Welcome to the discussion!

    But that statement is for a table with 2 rows. 😀

    The number of rows is irrelevant, the statement is incorrect.

    :w00t: Grant confirms the same by adding 'Temp tables can stay in memory too.'

    Which is precisely the point. Your statement that a table variable will somehow magically be allowed precedence over a temporary table when they are created is something that a lot of developers seem to believe. The simple fact is that if there is enough memory available, both can use it and if there isn't then on to the disk they go.

    For some reason, people seem unwilling to just test the difference when dealing with temporary data sets. It baffles me that so many blindly follow what someone else has stated. Instead, make it a policy to test anything and everything that you learn; it's the only way to be sure how your data structure will react.


    Forever trying to learn
    My blog - http://www.cadavre.co.uk/
    For better, quicker answers on T-SQL questions, click on the following...http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better, quicker answers on SQL Server performance related questions, click on the following...http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
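
    In that spirit, a short test sketch that makes the difference visible in the actual plans (names invented; on builds of this era a table variable compiles with an estimate of one row):

        CREATE TABLE #t (id INT);
        DECLARE @t TABLE (id INT);

        INSERT INTO #t SELECT TOP (1000) 1 FROM sys.all_objects;
        INSERT INTO @t SELECT TOP (1000) 1 FROM sys.all_objects;

        -- with "Include Actual Execution Plan" switched on:
        SELECT COUNT(*) FROM #t WHERE id = 1;  -- estimate ~1000 (statistics exist)
        SELECT COUNT(*) FROM @t WHERE id = 1;  -- estimate 1 (no statistics)

        DROP TABLE #t;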

    @All: I think it wise to bow out of this thread, because I don't want to displease anybody here. But I am leaving with this: I am still not convinced that a #tbl is better than a @tbl for 2 rows. I agree with your justifications, just not for a small table.

    I am thankful to Gail for pointing out the RT of 1 for #tbl; I was unaware of it.

    Thank you so much for a very healthy discussion!

  • Dev (12/7/2011)


    @All: I think it wise to bow out of this thread, because I don't want to displease anybody here. But I am leaving with this: I am still not convinced that a #tbl is better than a @tbl for 2 rows. I agree with your justifications, just not for a small table.

    I am thankful to Gail for pointing out the RT of 1 for #tbl; I was unaware of it.

    Thank you so much for a very healthy discussion!

    This is where you need to start testing on your own. It takes 5 minutes to set this up:

    This is with 7 rows in #tmp: as expected, auto stats kick in. Check the amazingly precise estimate of the data (that just can't be a random guess).

    This is with 2 rows. Again, stats kicked in.

    Same result with 1 category.

    No stats update with 0 rows.
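
    The screenshots from the original post are not reproduced here, but the repro is roughly this sketch (names invented; vary the row count and re-check tempdb.sys.stats and the plan's estimates):

        CREATE TABLE #tmp (category INT, val INT);

        INSERT INTO #tmp (category, val)
        VALUES (1, 10), (1, 20);                 -- try 7, 2, 1, or 0 rows

        SELECT val FROM #tmp WHERE category = 1; -- the estimate matches exactly

        SELECT s.name, s.auto_created            -- the auto-created stats, if any
        FROM tempdb.sys.stats AS s
        WHERE s.object_id = OBJECT_ID('tempdb..#tmp');

        DROP TABLE #tmp;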
