performance problems when using table datatype

  • Hello,

    Has anybody else experienced performance problems when using the "new" table datatype in SQL Server 2000 (DECLARE @tablename TABLE ...)?

    We had a stored procedure that took about 5 hours to finish. It uses several table datatypes. When we replaced one (rather large, 75,000 records) with a classic local temporary table (CREATE TABLE #tablename ...), the time to completion dropped to 40 minutes. From the Microsoft Books Online we learn that the table datatype should run faster than the local temporary table. Have they forgotten to mention that it is not meant for large tables, is the code still buggy because it is a new concept, or did we do something wrong?
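
    For reference, the two forms being compared look roughly like this (table and column names are placeholders, not our actual schema):

    -- "New" table datatype (table variable)
    DECLARE @work TABLE
    (
        id     INT            NOT NULL,
        amount DECIMAL(10, 2) NOT NULL
    )

    -- Classic local temporary table
    CREATE TABLE #work
    (
        id     INT            NOT NULL,
        amount DECIMAL(10, 2) NOT NULL
    )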

    TIA

    Gunther

  • Hi Gunther,

    When using mem tables with a large number of rows, I suspect that SQL starts dumping them to disk.

    I have tested the difference between normal temp tables and mem tables.

    With small amounts of data, mem tables can be faster. Sometimes.

    I have said this previously and no one commented so here goes again. 🙂

    I am sure I read somewhere that a normal table is held in memory until such time as SQL decides to dump it to disk (Swap file?)

    I have often had normal tables with small amounts of data run faster than mem tables, and vice versa.

    I guess there is a threshold somewhere. Try it each way and see. As always, use the fastest.

    I would definitely stay away from them if you want 75,000-odd rows in one.

    As for the above statement, can someone agree / disagree about SQL handling normal temp tables? Curious.....

    Cheers,

    Crispin

    I can't die, there are too many people who still have to meet me!
    It's not a bug, SQL just misunderstood me!

  • quote:



    I am sure I read somewhere that a normal table is held in memory until such time as SQL decides to dump it to disk (Swap file?)


    As for the above statement, can someone agree / disagree about SQL handling normal temp tables? Curious.....

    Cheers,

    Crispin


    http://support.microsoft.com/?id=305977

    --Jonathan

  • quote:


    As for the above statement, can someone agree / disagree about SQL handling normal temp tables? Curious.....


    AFAIK table variables are 'real' tables in tempdb.

    Frank

    http://www.insidesql.de

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • After reading the article and BOL I found out that I am right. Only because someone told me about it...

    Both are kept in tempdb, and both are held in memory until such time as SQL decides to dump them to disk.

    quote:


    Q4: Are table variables memory-only structures that are assured better performance as compared to temporary or permanent tables, because they are maintained in a database that resides on the physical disk?

    A4: A table variable is not a memory-only structure. Because a table variable might hold more data than can fit in memory, it has to have a place on disk to store data. Table variables are created in the tempdb database similar to temporary tables. If memory is available, both table variables and temporary tables are created and processed while in memory (data cache).
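
    A quick way to convince yourself of this (rough sketch only; the system-generated names on your server will look different):

    -- Declare both kinds in one batch...
    DECLARE @t TABLE (id INT)
    CREATE TABLE #t (id INT)

    -- ...and both should show up as user tables in tempdb
    SELECT name, crdate
    FROM tempdb.dbo.sysobjects
    WHERE type = 'U'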


    It does say to stay away from them if you need indexes or large joins, as mem tables do not have statistics.
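
    In other words (sketch only, invented names): a temp table can be indexed after it is loaded and the optimizer keeps statistics on it, while a table variable only gets whatever constraints you declare up front.

    -- Temp table: indexable after creation, has statistics
    CREATE TABLE #big (id INT NOT NULL, customer_id INT NOT NULL)
    -- ... load the 75,000 rows here ...
    CREATE INDEX ix_big_customer ON #big (customer_id)

    -- Table variable: no CREATE INDEX, no statistics;
    -- only inline constraints like this PRIMARY KEY
    DECLARE @big TABLE
    (
        id          INT NOT NULL PRIMARY KEY,
        customer_id INT NOT NULL
    )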

    Cheers,

    Crispin

    I can't die, there are too many people who still have to meet me!
    It's not a bug, SQL just misunderstood me!

  • The difference between 5 hours and 40 minutes is great, but I have a feeling that you could cut even more time off this.

    Basically, follow the rules of building regular tables when building temp tables.

    1) Limit the amount of data, i.e. row size * number of rows. 75,000 integers take up about 300 KB of memory, while a varchar(255) field could take up nearly 20 MB!

    2) Try adding a primary key. Many times the cost of building the index is more than made up for by a faster-executing query. Test it, though. (Remember, primary keys can be composite; see the sketch after this list.)

    3) Normalize. 🙂 I know this sounds weird, but it REALLY works. Taking a monster temp table and normalizing it can do more for your execution time than anything else.
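
    A rough sketch of point 2 (table and column names are invented):

    -- Composite primary key declared with the temp table
    CREATE TABLE #detail
    (
        order_id INT NOT NULL,
        line_no  INT NOT NULL,
        qty      INT NOT NULL,
        PRIMARY KEY (order_id, line_no)
    )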

    Signature is NULL

  • quote:


    AFAIK table variables are 'real' tables in tempdb.

    Frank

    http://www.insidesql.de


    They're not subject to transaction logging to disk, as are temporary tables.

    I suspect the culprit for the OP's results is that his SP uses the temporary table in a way that triggers run-time recompilation, and these recompilations create better query plans for a large table. Table variables do not trigger these recompiles as they grow, which is often a good thing but in this case is a bad thing.
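
    If that is the culprit, the pattern would be something like this (a sketch only, not the OP's actual procedure; dbo.source_table and dbo.orders are made-up names):

    -- Temp table: statements after the load can be recompiled once the
    -- real row count is known, so the join plan suits ~75,000 rows
    CREATE TABLE #work (id INT NOT NULL PRIMARY KEY)
    INSERT INTO #work (id) SELECT id FROM dbo.source_table
    SELECT w.id, o.total
    FROM #work w
    JOIN dbo.orders o ON o.id = w.id

    -- Table variable: no recompile as it grows, so the plan is
    -- typically built as if it held very few rows
    DECLARE @work TABLE (id INT NOT NULL PRIMARY KEY)
    INSERT INTO @work (id) SELECT id FROM dbo.source_table
    SELECT w.id, o.total
    FROM @work w
    JOIN dbo.orders o ON o.id = w.id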

    --Jonathan
