Converting a table variable into a permanent table

  • What is the most efficient way of converting a table variable into a permanent table?

    I am currently doing the following (X is the permanent table and @Y is the table variable):

    SELECT * INTO X FROM @Y;

    Is there a quicker, more direct way?

    Thanks,

    Jamie

  • That's what I would use.

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • I'll second that.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks, guys.

    The reason I asked is because the table in question is very large (about 1,000,000 records). Using SELECT INTO takes a couple of minutes to complete and creates so much disk traffic that other processes are timing out.

    I have meanwhile "solved" this by replacing SELECT INTO with INSERT INTO in a loop that breaks up the copying into batches. This takes much longer, but I can live with that. More importantly, the copying must not interfere with other processes.
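    For anyone curious, here is a minimal sketch of the batching approach. The key column (ID), the column list, and the batch size are illustrative assumptions, not my real schema, and it assumes X has already been created up front:

    DECLARE @BatchSize INT = 50000,
            @LastID    INT = 0,
            @Rows      INT = 1;

    WHILE @Rows > 0
    BEGIN
        -- Copy the next keyset-bounded chunk so each transaction stays small
        INSERT INTO X (ID, Col1, Col2)
        SELECT TOP (@BatchSize) ID, Col1, Col2
          FROM @Y
         WHERE ID > @LastID
         ORDER BY ID;

        SET @Rows = @@ROWCOUNT;

        -- Advance the keyset for the next batch
        SELECT @LastID = MAX(ID) FROM X;

        -- Brief pause so other processes get at the disks between batches
        WAITFOR DELAY '00:00:01';
    END;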

    Thanks again,

    Jamie

  • Jamie Julius (9/18/2010)


    Thanks, guys.

    The reason I asked is because the table in question is very large (about 1,000,000 records). Using SELECT INTO takes a couple of minutes to complete and creates so much disk traffic that other processes are timing out.

    I have meanwhile "solved" this by replacing SELECT INTO with INSERT INTO in a loop that breaks up the copying into batches. This takes much longer, but I can live with that. More importantly, the copying must not interfere with other processes.

    Thanks again,

    Jamie

    That's odd... SELECT INTO for a million rows takes very little time to run. Consider the following code...

    SELECT TOP 1000000
           SomeID       = IDENTITY(INT,1,1),
           SomeInt      = ABS(CHECKSUM(NEWID()))%50000+1,
           SomeLetters2 = CHAR(ABS(CHECKSUM(NEWID()))%26+65)
                        + CHAR(ABS(CHECKSUM(NEWID()))%26+65),
           SomeCSV      = CAST('Part01,Part02,Part03,Part04,Part05,Part06,Part07,Part08,Part09,Part10' AS VARCHAR(80)),
           SomeMoney    = CAST(ABS(CHECKSUM(NEWID()))%10000 /100.0 AS MONEY),
           SomeDate     = CAST(RAND(CHECKSUM(NEWID()))*3653.0+36524.0 AS DATETIME),
           SomeHex12    = RIGHT(NEWID(),12)
      INTO dbo.JBMTest
      FROM Master.dbo.SysColumns t1,
           Master.dbo.SysColumns t2 --Lack of join criteria makes this a CROSS-JOIN

    That only takes 16 seconds on my single-CPU, 8-year-old desktop.

    What is it that you're doing to "gather" the million rows? This should NOT cause any timeouts. Can you post the code, please?

    --Jeff Moden



  • The rows are already available in a table variable, so the issue is not the time to gather them.

    You are right that SELECT INTO is normally very fast. And the code is essentially the one line shown above.

    There are a lot of other processes happening concurrently and they are the ones occasionally throwing timeouts.

    Since the SELECT INTO is going into a brand new table from a local variable, I do not believe that locking is behind either the timeouts or the time it takes to perform the SELECT INTO. Rather, it is the heavy disk activity, which is hindering the execution of all other concurrent activity and causing the SELECT INTO to take a couple of minutes.

    I believe that this is simply the price of having a lot going on at the same time.

    Thanks,

    Jamie

  • Jamie Julius (9/19/2010)


    Rather, it is the heavy disk activity, which is hindering the execution of all other concurrent activity and causing the SELECT INTO to take a couple of minutes.

    You might want to talk to your network/SAN administrator about this. That's going to end up as a huge bottleneck as you grow if you're already running into it.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • I absolutely agree with what Craig just said...

    Another thing about so much disk activity is that it doesn't take very many, ummmm... shall we say, performance-challenged queries to foul up the pipe for disk activity. My recommendation would be to find the top 3 queries for duration, the top 3 for reads and writes, and the top 3 for executions per hour (or minute), and spend some serious time tuning those. Then do it all again... top 3 of each category until you can do a million-row SELECT INTO without the whole cupcake turning over on the carpet. At least then the disk system will be out of the woods.

    Don't wait... the disk system is already a bottleneck for the existing (bad) queries running against the existing data. It's only going to get worse... 😉
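    As a starting point for building those top-3 lists, something like the following against the plan cache ranks statements by total duration. Both DMVs exist in SQL Server 2005 and later; re-sort by total_logical_reads, total_logical_writes, or execution_count for the other categories:

    SELECT TOP (3)
           qs.execution_count,
           qs.total_elapsed_time / qs.execution_count AS avg_duration_us,
           qs.total_logical_reads,
           qs.total_logical_writes,
           SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
               ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset)/2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats qs
     CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
     ORDER BY qs.total_elapsed_time DESC;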

    --Jeff Moden



  • Jamie Julius (9/18/2010)


    The reason I asked is because the table in question is very large (about 1,000,000 records).

    Still using a table variable for that heavy a volume of data? I would suggest using a temp table instead.
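    A rough sketch of the swap (the column definitions here are illustrative, since the real schema wasn't posted). Unlike a table variable, a temp table gets statistics and can be indexed, which usually behaves better at the million-row scale:

    CREATE TABLE #Y
    (
        ID   INT NOT NULL PRIMARY KEY,
        Col1 VARCHAR(50),
        Col2 DATETIME
    );

    -- ...populate #Y the same way @Y was populated...

    SELECT * INTO X FROM #Y;

    DROP TABLE #Y;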

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

    Do wait stats, file IO stall, and blocking analysis while you are running your SELECT INTO. That will expose the real cause of the problem, I suspect.
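    For example, snapshotting these two DMVs immediately before and after the SELECT INTO and diffing the numbers will show what the instance actually waited on and which files stalled. Both DMVs exist in SQL Server 2005 and later:

    -- Instance-level waits, heaviest first
    SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
      FROM sys.dm_os_wait_stats
     ORDER BY wait_time_ms DESC;

    -- Per-file IO stalls
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
      FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
      JOIN sys.master_files mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
     ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;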

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Thanks to all of you for your advice. I'll follow up as soon as possible...

    Jamie
