May 10, 2004 at 2:00 pm
Folks,
This one is deceptively hard when you get into it. Or maybe I'm just easily deceived! Here we go...
We have about 20 dbs on our server, all with the same structure -- plus 1 db (call it "dbMain") for general management data. Assume each db has a table called Clients, with the same structure. End-users need to occasionally move a client record from one db's Clients table to another db's Clients table. By "move" I mean, we'll mark the original record in the source db to "inactive" and create a new record in the target db.
We won't move the record intact though -- we have to scrub some values, set some back to defaults or NULLs, etc.
Good news: We do already have a universal ID for each client that is unique across all dbs! This stays intact when a client is moved.
Assume the source and destination dbs can be any arbitrary combination of the 20 dbs.
The task... Write a stored procedure that takes three parameters: the unique client ID, the name of the source db and the name of the target db. The SP will then do the work of moving the client record over from one to the other.
My preference for the solution: do the whole thing inside one stored procedure, with no hard-coded field definitions (you'll see why below).
Sounds pretty simple, right? You'd be inclined to have your SP do something like this:
Select * into #tmpClient from db1.dbo.Clients where ID = @idClient
...and then scrub the data, then finally do an INSERT or UPDATE on the target db (depending on whether the client already exists there).
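The target side would then be the usual exists-check dance. Something like this sketch (the Status and Notes columns are just made-up placeholders for whatever gets scrubbed):
if exists ( select 1 from db2.dbo.Clients where ID = @idClient )
    update c
    set c.Status = t.Status, c.Notes = t.Notes -- overwrite with the scrubbed values
    from db2.dbo.Clients c
    join #tmpClient t on t.ID = c.ID
else
    insert into db2.dbo.Clients ( ID, Status, Notes )
    select ID, Status, Notes from #tmpClient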
The problem is our old friend, dynamic SQL. You can't do that first SELECT INTO, because we can't hard-code "db1" as the source db name. We pass that db name in as a parameter, so we end up with something like this:
declare @sql as varchar(500)
set @sql = 'Select * into #tmpClient from ' + @dbSource + '.dbo.Clients where ID = ' + @idClient
exec( @sql )
The problem is that the #tmpClient table created is out of scope after the exec( ) command finishes. So you can create/load it inside the exec( ) command, but then can't do anything with it afterwards -- it vanishes.
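Here's a minimal repro of the vanishing act, if you want to see it yourself:
exec( 'select 1 as dummy into #tmpClient' )
select * from #tmpClient
-- fails with "Invalid object name '#tmpClient'" -- the table died when the exec() batch ended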
So can I use a table variable? Something like...
declare @tmpTable table (
[field defs...]
)
We could, but again, I don't want hard-coded field definitions. Five times a year we change the structure of the table, and each change would break this SP.
Hey, how about we generate those field defs from database metadata? We could use information_schema.columns, for instance, to list the fields and then use...um, dynamic SQL to... Right, same problem. We'd end up creating our table variable with an exec( ) command, and it doesn't exist the moment we're done.
Fine, great. Can we just create a normal table on the physical system that we drop as soon as we're done? Yes, we can. We'll give the damn thing some unique suffix to prevent collisions between simultaneous users. Ugly, but workable.
But am I forgetting some other option? This seems so painful, having to use physical tables and unique suffixes...
Thanks in advance,
- Tom
May 11, 2004 at 2:40 am
Another solution is to create the temporary table outside the dynamic SQL and insert the data with INSERT (instead of SELECT INTO), like this:
Select * into #tmpClient from AnyDB.dbo.Clients where 1=0
declare @sql as varchar(500)
set @sql = 'INSERT INTO #tmpClient SELECT * from ' + @dbSource + '.dbo.Clients where ID = ' + @idClient
exec( @sql )
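After the exec, #tmpClient is still visible, because the outer batch created it:
select * from #tmpClient -- works: the table belongs to the outer batch, not to the exec()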
Razvan
May 11, 2004 at 8:07 am
Rockmoose,
Thanks for the suggestion. I considered that too, and it has some attraction. But a globally-visible temp table brings up the same issues as a plain old table that's saved to the hard drive. If two users performed the same operation at the same time, they could collide, because they would both see the same global table. Therefore, I'd have to give each a different name, using some unique suffix for instance. At that point, I might as well just use a plain old table. The only advantage of a global temp table, in the end, is not needing to issue a DROP TABLE command at the end...which I would probably do anyway because I'm a little obsessive-compulsive that way! 😉
I appreciate the input.
- Tom
May 11, 2004 at 8:47 am
Have you considered using an ActiveX Script in DTS? This would allow you to set the variables for the tables each time you run the job, and handle the scrubbing task in memory.
Cheers,
John R. Hanson
May 11, 2004 at 9:05 am
Razvan Socol,
Thanks -- I tried that too. It does get me past the first hump, but then slams me into the next. The first hump is the scope of the temporary table -- and your method bypasses that. The exec() command does see the temp table, and can perform an INSERT into it. Cool. But...the tables in question have primary keys, and the INSERT INTO command needs you to get very specific in that case. So this line...
INSERT INTO #tmpClient SELECT * from ...
Really needs to be this line...
INSERT INTO #tmpClient (fld1, fld2, fld3, ...)
SELECT fld1, fld2, fld3, ... from...
...because INSERT INTO for tables with PKs needs all the field names laid out. So suddenly my script is locked into specific field names again. However...hmmm...
I might try the code below, to generate the field-name list dynamically with each run. I love this little trick (learned here on sqlservercentral.com, by the way!). No looping, but a kind of implied recursion...
declare @fldList as nvarchar(2000)
set @fldList = '' -- set to a blank, or the default NULL fungoes everything up
select @fldList = @fldList + column_name + ','
from information_schema.columns where table_name = 'users'
set @fldList = substring( @fldList, 1, len(@fldList)-1 ) -- trim the last comma
print @fldList -- prints a comma-delimited list of field names
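Putting it together with Razvan's pre-created #tmpClient, the dynamic INSERT might then look roughly like this (an untested sketch -- it assumes the ID is a character type, that @fldList was built with table_name = 'Clients', and that #tmpClient inherited the identity column, hence the IDENTITY_INSERT):
declare @sql as nvarchar(4000)
set @sql = 'set identity_insert #tmpClient on; '
    + 'insert into #tmpClient (' + @fldList + ') '
    + 'select ' + @fldList + ' from ' + @dbSource + '.dbo.Clients '
    + 'where ID = ''' + @idClient + '''; '
    + 'set identity_insert #tmpClient off'
exec( @sql )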
Anyway, thanks for getting me thinking again!
- Tom
May 11, 2004 at 9:12 am
John Hanson,
Thanks for the DTS/ActiveX suggestion. That's not an entirely implausible suggestion... But let me round out the situation, and maybe others can let me know if I could make it work.
This whole application is ASP (classic) driven. It's on an IIS webserver, with the usual anonymous user permissions in effect. Is it possible to fire off a DTS package with an ASP call?
I suspect it's not possible. DTS packages have so much potential power that it would likely be a complicated business to make sure the IIS user could never do something harmful. And because they don't exist inside a particular database -- they seem to stand alongside the databases -- I don't know how to control access.
Can you pass parameters to a DTS package? Maybe that's not a problem -- I could set up a central table that it reads the three key parameters from, I suppose...
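Maybe as simple as something like this (a hypothetical table; the names are invented):
create table dbMain.dbo.PendingMoves (
    idClient  varchar(50) not null,  -- the universal client ID
    dbSource  sysname     not null,
    dbTarget  sysname     not null,
    requested datetime    not null default getdate()
)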
Interesting notion! Shooting from the hip, though, I doubt we'll end up using this technique, as it forces our IT team to grapple with a whole new way of handling/manipulating our databases...when stored procedures haven't really been ruled out yet.
- Tom
May 11, 2004 at 9:38 am
Here are some sources of DTS information if you decide to investigate John's suggestion further.
Executing DTS from ASP: see http://www.sqldts.com/default.aspx?207
Control access to DTS packages with owner and user passwords: see "Handling Package Security in DTS" in BOL.
You can pass parameters to a DTS package by using a Dynamic Properties Task and global variables. See both in BOL.
Greg
May 11, 2004 at 11:26 am
I think that you mean that "the tables in question have identity columns" (normal primary keys should not be a problem). In this case, you might consider using SET IDENTITY_INSERT.
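For example (TargetDB, SourceDB and the column names are just placeholders; note that the column list must be explicit when you insert into the identity column):
SET IDENTITY_INSERT TargetDB.dbo.Clients ON
INSERT INTO TargetDB.dbo.Clients (ClientID, ClientName, Status)
SELECT ClientID, ClientName, Status FROM SourceDB.dbo.Clients WHERE ClientID = @idClient
SET IDENTITY_INSERT TargetDB.dbo.Clients OFF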
Razvan
PS. Have you ever considered a redesign that combines all the databases into one? With a single database, the entire task would be reduced to a simple update of a column in one table. Of course, this has a big disadvantage: you'd need to rewrite some parts of the application so that users see only the data they need. But if this problem occurs often (and more importantly, if other tables could have similar problems), it's worth some time to consider this approach.
May 11, 2004 at 11:36 am
Razvan,
Yes, thanks -- it's a matter of identity columns, not primary keys. And yes, of course, because identity columns are involved I'd need to use SET IDENTITY_INSERT ON/OFF -- but that doesn't remove the need to explicitly name all the fields, which was the real issue.
re: considered redesign...combine all databases into one?
A pleasant thought, but no, it's not a realistic option. It would require recoding several hundred ASP pages, rebuilding the DB from the ground up, rewriting scores of SPs, and then launching into a re-testing phase that would last half a year... Plus, my client is contractually obligated to their clients to keep all their data physically isolated from all others.
Thanks,
- Tom
May 11, 2004 at 12:39 pm
Hi,
Since You are already using dynamic SQL for this, generating explicit field names should be no problem :-).
I would go for Razvan's suggestion and use the nifty "field name generator" code snippet.
Best of Luck!
/* no need to remove trailing comma.....
use pubs
go
declare @fldList nvarchar(2000)
select @fldList = isnull(@fldList + ',','') + column_name
from information_schema.columns where table_name = 'authors'
print @fldlist
*/
/rockmoose
You must unlearn what You have learnt
May 11, 2004 at 6:45 pm
More on the DTS subthread,
You can export any DTS job as a Visual Basic file, then create an ActiveX DLL by referencing the Microsoft DTS Package Object Library and importing the DTS .bas file. Change the Main function to Public, and then you can call it by referencing your DLL in ASP and using the CreateObject syntax. Since you would be using a COM object, you don't have to worry about the IIS permissions as much, because you can assign permissions at the COM level separately. I wrap all stored procedures in COM for ASP use. That way you can explicitly deny ANY sql access from the IUSR or IWAM accounts.
Cheers,
John R. Hanson
May 11, 2004 at 9:38 pm
Rockmoose and John Hanson,
Thanks, all. As it turned out, I found I could make the SP work by using the dynamic field-name generation technique. The one part that's a little disappointing is that I need an actual known db to point to in order to do the @fldList thing. That is, I have @strDbFrom and @strDbTo as parameters, but I can't use those to generate the field list. To do so, I'd have to splice the db name into the "information_schema.columns" query and run it in an exec() command -- which means making my #tmpTable inside the exec() command. And a temp table made inside an exec() command doesn't exist after the command is done...
Luckily, all our dbs are based on the "model" db, and I know it's always present. So I can work from that to make my temp table.
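In other words, something like this (a sketch, relying on model always having the current Clients structure):
-- model is a static, always-present reference, so no dynamic SQL is needed here,
-- and #tmpClient survives for the dynamic INSERT against @strDbFrom:
select * into #tmpClient from model.dbo.Clients where 1 = 0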
Thanks all!
- Tom
May 12, 2004 at 3:00 am
Going back to the temporary global table idea, would it not be possible to do something like:
declare @temp varchar(36)
declare @sql nvarchar(1000)
set @temp='##'+replace(convert(varchar(36), newid()),'-','_')
--would give you GUID table names such as:
-- ##D95A400E_167C_479D_8C3F_EC30DE9BCA
-- ##C58633B9_70F4_4B53_829C_42E8E3B2AC
-- create your dynamic sql bit using GUID table name
set @sql = 'select * into ' + @temp + ' from Clients where 1 = 0'
--this creates a blank table of the correct structure; you can now insert into it any data you need
exec sp_executesql @sql
Don’t forget to drop the temporary table!! Each time the sproc is called it would create a uniquely named temporary table that would not be used by anyone else.
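Since the table name lives in a variable, that drop has to be dynamic too -- something like:
set @sql = 'drop table ' + @temp
exec sp_executesql @sql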
Not elegant but there you go.
Best regards,
Andrew McKeon
May 12, 2004 at 8:12 am
Thanks, Andrew -
Good idea, using newid() to generate a unique name. That's a nice solution, and I may use it elsewhere, but it solves a problem that's rather secondary here. Creating a unique name for the table isn't that much of a challenge. (The data to be transferred includes a unique ID for each client, so I could just use that as a suffix, for instance.)
And having a global temporary table is (in this case) operationally the same as having a real physical table, so I'd probably just do that instead.
Thanks!
- Tom