Help with passing a table name to a query in a SP.

  • Hi, I was wondering if the enlightened SQL community could help me with this little problem, or rather with my lack of knowledge on the matter.

    I'm attempting to create a stored procedure that returns the value of one field that exists in a group of tables. For example, the field is called 'Lock', and is going to exist in a group of tables whose rows must be able to be locked down for long durations - for hours or even days. So I assumed one way to query whether the field had been set was to write a stored procedure that would take a table name and a row ID, and return the lock field value. Something along the lines of:

    CREATE PROCEDURE LockQuery @TableName VARCHAR (25), @RowID INT
    AS
    SELECT Lock FROM @TableName WHERE UID=@RowID

    Unfortunately, SQL complains that I'm passing a parameter into the FROM clause. I've investigated using sp_executesql and building this SELECT query dynamically, but then I hit the issue that results generated within sp_executesql are not returned back to the scope of the calling stored procedure.

    So the question of the day is: how do I pass a table name to a FROM clause, or to an UPDATE query (which doesn't like it either)?

    Or can someone tell me if this is a common thing with a known solution, one that I really ought to have known better?

    Lots of techie love.

    Martin.

  • Martin, here's a little example of code. You'll need to change some of the @sql = stuff, but it will run as-is against the [sysfiles] table of the local DB. Hope this helps.

    Create Procedure SampleScopeValue_sp
        (@TBName NVarChar(128), @RowID Int)
    As
    /*
    Example Usage:
    Exec SampleScopeValue_sp 'sysfiles', 1
    */
    Declare @sql NVarChar(4000), @RetVal Int
    -- FileID is the key column of sysfiles; the original post's 'where @RowID = @RowID'
    -- was a tautology (presumably the forum-mangled "link" mentioned below)
    Set @sql = 'Select Top 1 @RetVal = 1 From ' + @TBName + ' where FileID = @RowID'
    -- sp_ExecuteSQL lets the dynamic batch share parameters with this scope:
    -- @RetVal comes back OUT, @RowID goes in
    Exec master.dbo.sp_ExecuteSQL @sql, N'@RetVal Int OUT, @RowID Int', @RetVal OUT, @RowID
    Print 'In SP Context, the value of @RetVal = ' + Convert(Varchar(10), @RetVal)



    Once you understand the BITs, all the pieces come together

  • Thanks so much for that - it's been doing my head in all day.

  • NP, not much happening on a Saturday in the forums, huh? Thought since I'm at work because of internal net change stuff, I'd check out what's happening here.

    I do not know why the middle of the code snippet became a "link" in the post ??? I guess the forum thinks it's an email addr. or something; just cut & paste what's there.

    Have fun.



    Once you understand the BITs, all the pieces come together

  • It was the @RetVal Int OUT on the SELECT that I was missing out on. So close, and yet so far 🙂

    Now I've just got to figure out how to pass the results back to C# as a table rather than as an OUT parameter, a bit like sp_columns does......... oh, the messy web we weave.
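
    I wonder if a bare SELECT inside the dynamic string would stream its result set straight back to the client, even though variables don't cross the scope boundary. Something like this sketch, reusing the UID / Lock columns from my first post (LockQueryTable is just a made-up name, untested):

    Create Procedure LockQueryTable (@TableName NVarChar(128), @RowID Int)
    As
    SET NOCOUNT ON -- stop "rows affected" chatter confusing the client
    Declare @sql NVarChar(4000)
    -- QuoteName() guards against odd table names; the inner SELECT's result
    -- set should go straight back to the caller (e.g. a SqlDataReader in C#)
    Set @sql = N'Select [Lock] From ' + QuoteName(@TableName) + N' Where UID = @RowID'
    Exec sp_ExecuteSQL @sql, N'@RowID Int', @RowID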

  • Martin,

    An "alternative"...

    Usually I prefer to use a #Temp table to "move" data between scopes. They become an "array" of variables to manipulate data through. They are also nice because you can then very naturally SELECT, JOIN etc. with other tables, permanent or #temp, and manipulate multiple records having multiple "columns" of data all at once. For me it seems more readable. I like to have the complexity in the creating of the dynamic SQL, as opposed to in the calling of it.

    Just adding more "stuff" to the code below, while the WWW is "down" at my place and I can't post this message....

    I threw in a few little "tricks" I use that may be helpful:

        #Temp table existence check
        Identity-type column with SELECT ... INTO ...
        SELECT ... INTO ... with no records, just to make a table structure
        Use of brackets to make "bizarre" column alias names ("bizarre" not usually recommended)

    Same thing Martin, the SP will run the "example" on most any DB. You'll need to modify it for your purposes.

    Alter Procedure SampleScopeValue_sp
        (@TBName NVarChar(128), @RowID VarChar(10))
    As
    /*
    Example Usage:
    Exec SampleScopeValue_sp 'sysfiles', 1
    */
    SET NOCOUNT ON -- allows this SP to be called from VB etc. to return a record set
    Declare @sql NVarChar(4000)
    If Object_ID('TempDB..#Temp') Is Not NULL Drop Table #Temp -- Drop #Temp if it exists, not required
    -- Create zero-record #Temp table with Lock info & "record#"
    Select 1 as Lock, Identity(int, 1, 1) PseudoRecordNumber Into #Temp Where 1 = 2
    Set @sql = 'Insert into #Temp (Lock) Select [Size] From ' + @TBName + ' where [FileID] >= ' + @RowID
    Exec (@sql) -- Execute the dynamic SQL; the INSERT can see #Temp from the outer scope
    Select PseudoRecordNumber, Lock as [Value of Lock (used Size col. for demo) in SP's Scope] From #Temp



    Once you understand the BITs, all the pieces come together

  • What I can't get my head around is the fact that it's so blinking arduous to write an SP that performs queries on a parameter-driven table name......

    Is it because SQL likes to know the names of the tables it's accessing so it can build its execution plans, and hence the language is naturally biased to make my requirement a pain in the backside?

    Admittedly this is the first time since SQL Server was released that I've needed to get down and dirty with SPs, but I never thought something as simple as SELECT Lock FROM ? would be so taxing.


    As for the temporary table suggestion, what impact would that have on performance? The lock on a record that my SP is intended for can last anything from a tenth of a second up to weeks. So in the worst-case scenario, where a record is locked 10 times a second, would SQL take a performance hit creating a temp table 10 times and then dropping it 10 times?

  • Darn it, just typed 30 mins' worth of stuff, hit "post" - all gone.

    Will re-write a condensed version in a few....



    Once you understand the BITs, all the pieces come together

  • Ok, I'm writing this in Q/A (Query Analyzer) first, then reply & paste. Maybe I timed out earlier ??? This post will be shorter, mainly because the network guys might have things for me to do real soon (got out of here around 3AM this morning, back now to continue the "saga").

    Martin,

    As far as "parameter-driven table name" pains go, have TSQL do all the work... make a table of table names, and have code generate & create a UNIONed VIEW or other SQL code to work on all your tables in one call. This may also help since you can work on all 10 of your "objects" at once. A rough sketch of that idea follows.
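
    This is only a sketch under assumptions: the LockableTables list table and the AllLocks view are made-up names, and every listed table is assumed to carry the UID / Lock columns Martin described.

    -- One row per table that carries a Lock column (hypothetical list table)
    Create Table LockableTables (TableName SysName Not NULL)
    -- Insert Into LockableTables Values ('Orders') -- etc., one per lockable table

    Declare @sql NVarChar(4000)
    Set @sql = N''
    -- Classic concatenation trick: build one UNION ALL branch per table name
    Select @sql = @sql + N' Union All Select ''' + TableName
                + N''' as SourceTable, UID, [Lock] From ' + QuoteName(TableName)
    From LockableTables
    -- Strip the leading ' Union All' (10 characters) and wrap the rest in a view
    Set @sql = N'Create View AllLocks As' + Stuff(@sql, 1, 10, N'')
    Exec (@sql)
    -- Then one query covers every lockable table:
    -- Select [Lock] From AllLocks Where SourceTable = @TableName And UID = @RowID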

    As for #Temp tables & TempDB work... SQL processes work in TempDB faster than in any non-TempDB database, I think because SQL knows it does not have to do the extra overhead work on a "disposable" DB. When SELECT ... INTO #Temp is a "long running" process it may produce locking issues in TempDB. If you know it will be a long-running process, an alternative would be to spend a couple of extra seconds creating a new temporary "work" database just for your process. I've not yet needed this strategy... I usually have fairly "concise" data in my #Temp tables and have not run into any problems in this area. (A sketch of the work-database idea is below.)
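
    A minimal sketch of that work-database fallback, assuming made-up names (WorkDB, Staging, MyBigTable):

    -- Create a throwaway database for the long-running staging work
    If DB_ID('WorkDB') Is NULL
        Create Database WorkDB

    -- Do the heavy SELECT ... INTO there instead of in TempDB
    Select * Into WorkDB.dbo.Staging From MyBigTable

    -- Tear it down when the process is finished
    Drop Database WorkDB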

    Try using the WITH (NOLOCK) table hint to reduce locking issues when SELECTing from SQL tables. This can be very beneficial if you can deal with "dirty" data (as-yet-uncommitted data manipulations from other processes). See the example below.
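
    A quick illustration (SomeLockableTable is a stand-in name; UID / Lock follow Martin's schema):

    Declare @RowID Int
    Set @RowID = 1
    -- Reads without taking shared locks, so it won't block or be blocked,
    -- but it may return uncommitted ("dirty") rows from other transactions
    Select [Lock]
    From SomeLockableTable WITH (NOLOCK)
    Where UID = @RowID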

    I've found using #Temp tables for "variable" or "work" data to be a great way of SQL coding. You get several advantages (a small sketch of point 1 follows the list):

       1. Usually better performance (you can index them, and you have "arrays" of variables so you can often eliminate loops. Creating, filling, and manipulating #Temp tables seems to be one of the fastest things SQL does. Remember, it's a database engine first, TSQL code and TSQL @variable manipulator second.)

       2. As opposed to @Variables, #Temp tables hang around after your code executes (until you close the process scope), letting you test and manipulate your "variable" or "work" data as you code, and execute just small portions of the script you're working on.

       3. It seems to me that once I'm working with #Temp data, I can rethink and recode various strategies and tactics more easily. An added benefit is that left-over / obsolete @Variables due to code changes are rare, and "cleanup" of the code becomes a quicker task.

    There is rarely a generic "best" way to code TSQL processes. There are so many factors that can greatly affect performance... Very "bad" TSQL code can outperform "very good" TSQL code on the same data just because of indexing strategy, or a "very good" TSQL code strategy can be brought to its knees just because of the database layout or even a bad hardware configuration. Each element can have drastic effects. The best thing about varying your TSQL code is that you can test & implement various strategies and tactics, which may have profound performance benefits, very quickly. (Remove an "OR" from your WHERE clause, or switch SCSI controllers & drives? I'll pick the TSQL modification; it may provide ten times the performance benefit of a different HD setup, and it takes me a second to try.)

    Hope all this helps a little. If you want more specifics, just ask. Others on this site and I are glad to provide info and ideas.



    Once you understand the BITs, all the pieces come together

  • This will be interesting to you, I guess:

    http://www.sommarskog.se/dynamic_sql.html#Common_cases

    Parameterizing to get generic code is a good approach for everything that is handled at the client, but it is the wrong approach when dealing with databases.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/
