September 17, 2010 at 3:47 pm
What is the most efficient way of converting a table variable into a permanent table?
I am currently doing the following (X is the permanent table and @Y is the table variable):
SELECT * INTO X FROM @Y;
Is there a quicker, more direct way?
Thanks,
Jamie
September 17, 2010 at 4:00 pm
That's what I would use.
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
September 17, 2010 at 5:29 pm
I'll second that.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 18, 2010 at 2:37 am
Thanks, guys.
The reason I asked is because the table in question is very large (about 1,000,000 records). Using SELECT INTO takes a couple of minutes to complete and creates so much disk traffic that other processes are timing out.
I have meanwhile "solved" this by replacing the SELECT INTO with an INSERT INTO inside a loop that breaks the copy up into batches. This takes much longer, but I can live with that. More important is that the copying not interfere with other processes.
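For reference, the loop looks roughly like this (a sketch only; the table, columns, batch size, and the sequential RowId key are placeholders, not my actual code):

DECLARE @BatchSize INT, @MinId INT, @MaxId INT;
SET @BatchSize = 50000;
SELECT @MinId = MIN(RowId), @MaxId = MAX(RowId) FROM @Y; -- assumes @Y carries a sequential RowId column

WHILE @MinId <= @MaxId
BEGIN
    INSERT INTO X (Col1, Col2)
    SELECT Col1, Col2
    FROM @Y
    WHERE RowId BETWEEN @MinId AND @MinId + @BatchSize - 1;

    SET @MinId = @MinId + @BatchSize;

    WAITFOR DELAY '00:00:01'; -- brief pause so other processes get some disk time
END;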
Thanks again,
Jamie
September 19, 2010 at 1:29 pm
Jamie Julius (9/18/2010)
Thanks, guys. The reason I asked is because the table in question is very large (about 1,000,000 records). Using SELECT INTO takes a couple of minutes to complete and creates so much disk traffic that other processes are timing out.
I have meanwhile "solved" this by replacing the SELECT INTO with an INSERT INTO inside a loop that breaks the copy up into batches. This takes much longer, but I can live with that. More important is that the copying not interfere with other processes.
Thanks again,
Jamie
That's odd... SELECT INTO for a million rows takes very little time to run. Consider the following code...
SELECT TOP 1000000
SomeID = IDENTITY(INT,1,1),
SomeInt = ABS(CHECKSUM(NEWID()))%50000+1,
SomeLetters2 = CHAR(ABS(CHECKSUM(NEWID()))%26+65)
+ CHAR(ABS(CHECKSUM(NEWID()))%26+65),
SomeCSV = CAST('Part01,Part02,Part03,Part04,Part05,Part06,Part07,Part08,Part09,Part10' AS VARCHAR(80)),
SomeMoney = CAST(ABS(CHECKSUM(NEWID()))%10000 /100.0 AS MONEY),
SomeDate = CAST(RAND(CHECKSUM(NEWID()))*3653.0+36524.0 AS DATETIME),
SomeHex12 = RIGHT(NEWID(),12)
INTO dbo.JBMTest
FROM Master.dbo.SysColumns t1,
Master.dbo.SysColumns t2 --Lack of join criteria makes this a CROSS-JOIN
That only takes 16 seconds on my single-CPU, 8-year-old desktop.
What is it that you're doing to "gather" the million rows? This should NOT cause any timeouts. Can you post the code, please?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 3:40 pm
The rows are already available in a table variable, so the issue is not the time to gather them.
You are right that SELECT INTO is normally very fast. And the code is essentially the one line shown above.
There are a lot of other processes happening concurrently and they are the ones occasionally throwing timeouts.
Since the SELECT INTO is going into a brand-new table from a local table variable, I do not believe that locking is behind either the timeouts or the time it takes to perform the SELECT INTO. Rather, it is the heavy disk activity, which is hindering the execution of all other concurrent activity and causing the SELECT INTO to take a couple of minutes.
I believe that this is simply the price of having a lot going on at the same time.
Thanks,
Jamie
September 19, 2010 at 3:43 pm
Jamie Julius (9/19/2010)
Rather, it is the heavy disk activity, which is hindering the execution of all other concurrent activity and causing the SELECT INTO to take a couple of minutes.
You might want to talk to your network/SAN administrator about this. That's going to end up as a huge bottleneck as you grow if you're already running into it.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
September 19, 2010 at 4:06 pm
I absolutely agree with what Craig just said...
Another thing about so much disk activity is that it doesn't take very many, ummmm... shall we say, performance-challenged queries to foul up the pipe for disk activity. My recommendation would be to find the top 3 queries by duration, the top 3 by reads and writes, and the top 3 by how often they run in an hour (or minute), and spend some serious time tuning those. Then do it all again... top 3 of each category, until you can do a million-row SELECT INTO without the whole cupcake turning over on the carpet. At least then the disk system will be out of the woods.
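One way to surface those candidates (a sketch against sys.dm_exec_query_stats; re-sort for each category and treat the TOP value as adjustable):

-- Sketch: top statements by total elapsed time since the plan was cached.
-- Re-run ordering by total_logical_reads + total_logical_writes, or by
-- execution_count, to cover the other two categories.
SELECT TOP (3)
       qs.total_elapsed_time,
       qs.total_logical_reads,
       qs.total_logical_writes,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset/2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset)/2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_elapsed_time DESC;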
Don't wait... the disk system is already a bottleneck for the existing (bad) queries running against the existing data. It's only going to get worse... 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
September 20, 2010 at 5:17 am
Jamie Julius (9/18/2010)
The reason I asked is because the table in question is very large (about 1,000,000 records).
You're still using a table variable for a heavy volume of data. I would suggest using a temp table instead.
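For example (a sketch; the column list is a placeholder for the real definition):

CREATE TABLE #Y (Col1 INT, Col2 VARCHAR(50)); -- placeholder columns

-- ... populate #Y the same way @Y is populated today ...

SELECT * INTO X FROM #Y; -- unlike @Y, #Y has statistics the optimizer can use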
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
September 20, 2010 at 11:59 am
Do wait stats, file IO stall, and blocking analysis while you are doing your SELECT INTO. That will expose the real cause of the problem, I suspect.
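For example (a sketch; take these snapshots before and after the SELECT INTO and compare the deltas):

-- Cumulative waits since the instance started.
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Cumulative IO stalls per database file.
SELECT DB_NAME(database_id) AS database_name, file_id,
       num_of_reads, io_stall_read_ms,
       num_of_writes, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

-- Blocking at the moment it runs.
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;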
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 20, 2010 at 2:51 pm
Thanks to all of you for your advice. I'll follow up as soon as possible...
Jamie