temp table vs. table variable

  • reto.eggenberger (3/29/2011)


    Hi all

    I was able to dramatically improve the procedure. But I had to rewrite the whole section. I didn't want to do this (since it's not an application we wrote). But the vendor wasn't able to bring up a solution ...

    The original procedure had the above-stated SELECT statement three times (one for the invoice language, one for the fallback language, one for the base language).

    Now I combined those 3 selects into a single one, using ISNULL() to fall back to the higher language level.

    Problem solved.
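    For anyone following along, the fallback pattern might look something like this. (This is only a sketch; the table and column names below are invented for illustration, since the actual schema was never posted.)

    ```sql
    -- Hypothetical sketch of collapsing three per-language lookups into one.
    -- Table and column names are invented; the real schema was not posted.
    SELECT b.internbelnum,
           ISNULL(inv.descr,            -- invoice language, if a row exists
           ISNULL(fb.descr,             -- otherwise the fallback language
                  base.descr)) AS descr -- otherwise the base language
    FROM   #belege b
    LEFT JOIN texts inv  ON inv.belnum  = b.internbelnum AND inv.lang  = b.invoice_lang
    LEFT JOIN texts fb   ON fb.belnum   = b.internbelnum AND fb.lang   = b.fallback_lang
    LEFT JOIN texts base ON base.belnum = b.internbelnum AND base.lang = b.base_lang
    ```

    One pass over the data instead of three, at the cost of two extra outer joins.
    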

    by the way:

    If I used a large amount of data with the above-mentioned query, there was no difference between the temp table and the table variable! The table variable version was only faster with a small set of data.

    I guess that's because the table variable got written down to disk with the large data sets...

    Greetings

    Reto E.

    You can probably get improved performance by adding indexes to those temp tables:

    1. #zeilen - add index on the zeilentyp and zeilennbr columns (in that order).

    2. #belege - add index on the internbelnum column
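    In T-SQL those two suggestions would be something like the following (the index names are arbitrary, chosen here just for illustration):

    ```sql
    -- Index suggestions from above; index names are arbitrary
    CREATE INDEX ix_zeilen_typ_nbr       ON #zeilen (zeilentyp, zeilennbr);
    CREATE INDEX ix_belege_internbelnum  ON #belege (internbelnum);
    ```
    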

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • Just to disagree even more, here's a little log I kept while I was converting @t to #t. It starts with the baseline at 4+ M reads (after I had done all the index tuning I could; my starting baseline was over 52 M reads, which is more data than the db holds).

    The column order is CPU, reads, writes, and duration in milliseconds.

    I had like 15 temp tables in that proc. Each step is ONLY 1 change, except the last one, where I changed every table left. I know it wouldn't matter much, but it made debugging easier, so I went with it.

    #baseline

    212194 107 09582228616

    #fnsplit into #temp tables

    121251 866 31178920470

    #items

    6312 622 13578610276

    #Final

    6109 254 1088259078

    #all else

    8033 254 0978319204

  • WayneS (3/29/2011)


    You can probably get improved performance by adding indexes to those temp tables:

    1. #zeilen - add index on the zeilentyp and zeilennbr columns (in that order).

    2. #belege - add index on the internbelnum column

    They managed to make all the fields nullable in the two temp tables, and I would have to rewrite most of the code to get rid of that.

    Does it make sense (or is it even possible) to create an index on columns that allow NULLs?

    Going to test this later.

  • Null is a "value", so it can be used in an index.

    ("Value" is used very loosely here :w00t:.)

    It makes sense to use an index there if it helps that query and has a generally beneficial effect on the server's load. Whether it's worthwhile has nothing to do with whether the column is nullable or not.
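    A minimal demo of the point, runnable on any SQL Server database:

    ```sql
    -- An index on a nullable column is perfectly legal in SQL Server;
    -- NULLs are stored in the index and sort before non-NULL values.
    CREATE TABLE #t (c int NULL);
    INSERT #t VALUES (1), (NULL), (2);
    CREATE INDEX ix_t_c ON #t (c);          -- succeeds despite the NULL
    SELECT c FROM #t WHERE c IS NULL;       -- the index can satisfy this seek
    DROP TABLE #t;
    ```
    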

  • WOW - quite the thread here!

    One thing I will add is that I can probably count on my fingers and toes the number of times in 15ish years I have seen an index on a temp table be helpful for overall performance. Most of the time temp tables are simply joined to once for output or subsequent processing. And when that happens it is very rarely worth the overhead of creating the index in the first place.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • It's a known fact that, in general, we use temp tables for large data and table variables for small data.

    You can check the following link to clear up your doubts.

    http://www.mssqltips.com/tip.asp?tip=1556

    Best Wishes,

    JP

  • jp.emvia (3/30/2011)


    It's a known fact that, in general, we use temp tables for large data and table variables for small data.

    You can check the following link to clear up your doubts.

    http://www.mssqltips.com/tip.asp?tip=1556

    Best Wishes,

    JP

    I wish you the best as well, especially if you follow the tip at the link you posted, because it has a large amount of bad information in it. For example, the tip says you cannot create constraints on a table variable, and that is false. It's true that you can't create NAMED constraints, but you CAN indeed create constraints at table-creation time, including PRIMARY KEY, UNIQUE, and CHECK constraints. Here's proof of all of that, from Wayne's fine article...

    declare @temp TABLE (
        RowID int NOT NULL,
        ColA  int NOT NULL,
        ColB  char(1) UNIQUE,
        PRIMARY KEY CLUSTERED (RowID, ColA)
    )

    ... and you can bet your best pair of socks that UNIQUE constraint will build a NON-CLUSTERED INDEX behind the scenes, as well. 😉

    There are other major errors in that tip, as well. I strongly recommend you read Wayne Sheffield's article on the subject at the following link:

    http://www.sqlservercentral.com/articles/Temporary+Tables/66720/

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Here is your proof that even a SINGLE ROW TABLE VARIABLE CAN BE HORRIBLE FOR PERFORMANCE:

    create table bigtest (a int not null, b char(500) not null)

    insert bigtest
    select 1, replicate(' ', 500)
    from kgbtools.dbo.bignumbers --100000 rows here

    insert bigtest
    select number, replicate(' ', 500)
    from kgbtools.dbo.bignumbers --9 sequential rows here
    where number between 2 and 10

    create index idx on bigtest (a)

    dbcc freeproccache
    go
    set statistics IO on
    --show actual query execution plan
    go

    declare @a table (a int not null)
    insert @a values (2)

    select *
    from bigtest b inner join @a a on b.a = a.a
    --table scan hash join plan, 6668 IO BAD PLAN!!!

    create table #a (a int not null)
    insert #a values (2)

    select *
    from bigtest b inner join #a a on b.a = a.a
    --index seek nested loop join plan, 3 IO GOOD PLAN!!!

    --now try the other side of the coin
    drop table #a
    go
    dbcc freeproccache
    go
    set statistics IO on
    --show actual query execution plan
    go

    declare @a table (a int not null)
    insert @a values (1) --100K row value

    select *
    from bigtest b inner join @a a on b.a = a.a
    --table scan hash join plan, 6668 IO correct plan

    create table #a (a int not null)
    insert #a values (1)

    select *
    from bigtest b inner join #a a on b.a = a.a
    --table scan hash join plan, 6668 IO correct plan

    drop table bigtest

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Great Stuff Kevin - thanks for posting that.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

Viewing 9 posts - 16 through 23 (of 23 total)
