June 23, 2010 at 5:04 pm
We know that a statement like 'SELECT f1, f2, f3 INTO #temp FROM Table1' will create a temp table called #temp.
But let's say my field list varies dynamically. I would build my SELECT statement dynamically and then run it with sp_executesql. However, a #temp table created that way only exists for the duration of the sp_executesql call; once execution returns from sp_executesql, the #temp table is no longer available.
How would I create a #temp table from a dynamically changing SELECT statement so that I have access to it outside of the duration of an sp_executesql?
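For reference, a minimal repro of the scoping problem (the column name is illustrative):
-- The #temp table is created inside the child scope of sp_executesql...
EXEC sys.sp_executesql N'SELECT 1 AS f1 INTO #temp; SELECT * FROM #temp;';
-- ...and is dropped when that scope ends, so this outer reference fails:
SELECT * FROM #temp; -- Msg 208: Invalid object name '#temp'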
June 23, 2010 at 5:36 pm
Or put another way.
How can I create and populate a temp table with the results of a dynamic SELECT statement, where the number of fields changes from run to run?
I can't create the temp table statically; its columns would not match the changing SELECT list.
I need to capture the results of the SELECT statement into a temp table, and the dynamic nature adds this level of complexity.
June 23, 2010 at 7:15 pm
Unlike your other post, this one could probably be done, but it would relegate most of your code to dynamic SQL. Why do you need to do this? What are you ultimately trying to do with the data? I'm looking for the business reason here...
--Jeff Moden
Change is inevitable... Change for the better is not.
March 23, 2016 at 2:35 pm
I needed to create temp tables that match a remote system's data, and the remote system has an ever-increasing number of programmatically named views that I need to get data from. To create a temp table inline with SELECT... INTO..., I would have to hard-code the name of the source rowset, so I need some dynamic way to specify the remote view. Thus, I query the remote sys.objects and sys.columns tables to get the metadata. Then I can do something like this, where the added columns are dynamically constructed instead of being hard-coded:
-- Create the temp table in the outer scope with a single placeholder column.
create table #MyTemp (starter int);

-- DDL run in the child scope of EXEC still targets the outer-scope table,
-- so the real (dynamically constructed) columns can be added here.
EXEC(N'alter table #MyTemp add
      RealColumn1 int
    , RealColumn2 varchar(10)
    , RealColumn3 datetime;
alter table #MyTemp drop column starter;');

-- Back in the outer scope, the table and its new columns are still visible.
select * from #MyTemp;
GO
drop table #MyTemp;
When I run this, this is what I get:
RealColumn1 RealColumn2 RealColumn3
----------- ----------- -----------------------
(0 row(s) affected)
Sincerely,
Daniel
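A sketch of how the ALTER TABLE above might be generated from the stored metadata, assuming a linked server; [RemoteServer], TargetDb, and SomeView are placeholder names, and the type expression is simplified (lengths, precision, and scale would need fuller handling):
DECLARE @AlterSql nvarchar(max) = N'alter table #MyTemp add ';

-- Concatenate one column definition per remote column. Note that column
-- order is not guaranteed by this variable-concatenation trick.
SELECT @AlterSql = @AlterSql + QUOTENAME(c.name) + N' ' + t.name
     + CASE WHEN t.name IN ('varchar', 'char', 'varbinary')
            THEN N'(' + CAST(c.max_length AS nvarchar(10)) + N')'
            ELSE N'' END
     + N','
FROM [RemoteServer].TargetDb.sys.columns AS c
JOIN [RemoteServer].TargetDb.sys.objects AS o ON o.object_id = c.object_id
JOIN [RemoteServer].TargetDb.sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE o.name = N'SomeView';

-- Trim the trailing comma, then apply the placeholder-column trick above.
SET @AlterSql = LEFT(@AlterSql, LEN(@AlterSql) - 1);

create table #MyTemp (starter int);
EXEC (@AlterSql + N'; alter table #MyTemp drop column starter;');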
March 28, 2016 at 8:18 pm
JediSQL (3/23/2016)
Then I can do something like this, where the added columns are dynamically constructed instead of being hard-coded:
OK, let's pretend you've achieved what you want.
Like in the last scene of "Finding Nemo" - "Now what"?
You cannot do anything with the table apart from running SELECT *-style queries.
Anything else would require mentioning column names.
If you don't want them hard-coded, you need, as Jeff has already pointed out, to turn every query into dynamic SQL.
Why?
_____________
Code for TallyGenerator
March 28, 2016 at 9:12 pm
As I mentioned above, I get and store the sys.objects/sys.columns metadata for the remote views first, so I do have the column names. I use that to dynamically create a local #temp table that has 1) a few local status columns plus 2) all the columns from the remote table. The remote tables have a set of fixed columns and a set of table-specific columns.
Once I have the data copied to my local temp table, using nested dynamic SQL, I can analyze the data in the fixed columns and store those results in the local #temp table status columns using in-line SQL. Then, if my system wants any of the data, it can use the stored metadata to create a local table matching the remote table and store locally the rows of interest from the remote table.
Yes, there is a lot of dynamic SQL involved, but given that new views I need data from are constantly being created in the remote database, I do not have the option of accessing those objects with hard-coded SQL.
Sincerely,
Daniel
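A sketch of the copy step described above, under the same linked-server assumptions (object, column, and status-column names are all placeholders). It relies on the same scoping rule: a #temp table created in the outer scope is visible to dynamic SQL executed from that scope:
-- #MyTemp was created in the outer scope (status columns plus the remote
-- table's columns), so nested dynamic SQL can see and populate it.
DECLARE @InsertSql nvarchar(max) =
      N'INSERT INTO #MyTemp (RealColumn1, RealColumn2, RealColumn3) '
    + N'SELECT RealColumn1, RealColumn2, RealColumn3 '
    + N'FROM [RemoteServer].TargetDb.dbo.' + QUOTENAME(N'SomeView') + N';';
EXEC sys.sp_executesql @InsertSql;

-- The fixed columns can then be analyzed with ordinary in-line SQL:
UPDATE #MyTemp SET LocalStatus = 1 WHERE RealColumn1 IS NOT NULL;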
March 30, 2016 at 6:58 pm
JediSQL (3/28/2016)
Yes, there is a lot of dynamic SQL involved, but given that new views I need data from are constantly being created in the remote database, I do not have the option of accessing those objects with hard-coded SQL.
But you must know what kind of data you're after.
Those remote views may give you all sorts of things, but you retrieve only a specific type of data for your analysis.
I simply cannot imagine analysing data without having an idea of what the data is.
If that's the case, then you need a static #table for the data you pull from remote views.
And you need dynamic queries to retrieve the set of data you need from changing views.
The work set of data will be the same every time, so there is no need to change your work table definitions.
_____________
Code for TallyGenerator
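A minimal sketch of that two-part shape (all names are placeholders): the work table stays static because the analysed columns are fixed, and only the source view name changes per pull:
-- Static work table: the fixed columns under analysis never change.
CREATE TABLE #Work (
    KeyValue    int,
    FixedValue1 varchar(10),
    FixedValue2 datetime
);

-- Dynamic part: only the remote view name varies from run to run.
DECLARE @ViewName sysname = N'SomeView';
DECLARE @PullSql nvarchar(max) =
      N'INSERT INTO #Work (KeyValue, FixedValue1, FixedValue2) '
    + N'SELECT KeyValue, FixedValue1, FixedValue2 '
    + N'FROM [RemoteServer].TargetDb.dbo.' + QUOTENAME(@ViewName) + N';';
EXEC sys.sp_executesql @PullSql;

-- Every query from here on can be ordinary static SQL.
SELECT KeyValue, FixedValue1 FROM #Work WHERE FixedValue2 >= '20160101';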
April 1, 2016 at 11:02 am
Well, most of the data in the remote views does pass muster, so we end up pulling most of it. Thus, it is more efficient to get all the data in one pull, instead of doing two pulls, once for status and once for data. I have the requirement to minimize my footprint on the remote server.
Sincerely,
Daniel
April 1, 2016 at 11:21 am
How about not using #temp, but creating the table in tempdb instead? Then it will still exist.
-- Build the dynamic SELECT ... INTO, targeting a permanent table in tempdb
-- instead of a #temp table, so it survives the sp_executesql scope.
DECLARE @cmd nvarchar(4000) = N'
SELECT schema_name(schema_id) AS SchemaName,
       object_id,
       name AS TableName
INTO tempdb.dbo.StagingTable
FROM msdb.sys.tables;';

EXEC sp_executesql @cmd;

-- The table is still there after the dynamic SQL returns...
SELECT * FROM tempdb.dbo.StagingTable;

-- ...and persists until dropped (or the instance restarts), so clean up.
DROP TABLE tempdb.dbo.StagingTable;
Lowell
April 3, 2016 at 5:28 pm
JediSQL (4/1/2016)
Well, most of the data in the remote views does pass muster, so we end up pulling most of it. Thus, it is more efficient to get all the data in one pull, instead of doing two pulls, once for status and once for data. I have the requirement to minimize my footprint on the remote server.
In my experience, one pull is almost always way LESS efficient than two pulls.
If you make a first call for the metadata, to figure out where on the remote server the data you need is, and then in a second call pull just what you need, it will occupy far fewer resources on the remote server than a single call which tries to pull everything over and then sort it out locally.
Unless you're a hacker trying to steal data from a server, and you have a single shot on a connection attempt.
Then yes, it's better to minimise the number of connection attempts, as you might get noticed, and pull everything in one go.
_____________
Code for TallyGenerator
April 6, 2016 at 5:06 pm
You could do it with a global temporary table and drop it after you've finished with it.
-- A global ##temp table created inside dynamic SQL survives the EXEC scope;
-- it lives until the creating session ends (or it is dropped explicitly).
declare @sSql nvarchar(1000);
set @sSql = N'set nocount on;
with cte as (select 1 as id union all select 2 union all select 3)
select * into ##temp from cte;';
exec(@sSql);

select * from ##temp;
drop table ##temp;
If you need to run the process in parallel, to avoid table name clashes, you could generate the table name from a GUID and later use dynamic SQL to query it.
declare @sSql nvarchar(1000);
declare @GuidTableName nvarchar(50);

-- Build a unique, bracket-quoted global temp table name, e.g. [##9A3F...].
set @GuidTableName = QUOTENAME('##' + CONVERT(nvarchar(50), NEWID()));
PRINT @GuidTableName;

-- Create and populate the table inside dynamic SQL.
set @sSql = N'set nocount on;
with cte as (select 1 as id union all select 2 union all select 3)
select * into ' + @GuidTableName + N' from cte;';
exec(@sSql);

-- The name is only known at run time, so querying and dropping the table
-- must also go through dynamic SQL.
SET @sSql = N'select * from ' + @GuidTableName;
exec(@sSql);
SET @sSql = N'drop table ' + @GuidTableName;
exec(@sSql);
I've never used this or seen a reason to use it though.