September 13, 2007 at 7:31 am
This is long overdue. I've had this functionality in Oracle for the past ten years (since Oracle 8.0)! When Microsoft introduced the table variable, they never finished the job.
I also noticed that the persistent table definition via the CREATE TYPE statement is the same as Oracle's.
I recently needed this functionality as I had a small 2-dimensional array (table variable, NOT a #temp table) that needed to be processed by other logic which would then return a table result. This 2-dimensional array was used in several procedures. In the interest of good OO programming, a UDF taking in a table-valued parameter and returning a table was the obvious choice. Sadly, I couldn't do it in SQL Server 2005. To use persistent (#) temp tables and/or transform it into XML and then shred it again inside the function was out of the question due to coding and performance overhead.
So I coded the logic in-line within the procedures (performance being the governing factor) with suitable coding comments to refactor the code when SQL Server 2008 is deployed (Q3/Q4 2008).
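For what it's worth, the refactoring flagged in those comments might look something like this sketch once SQL Server 2008 is deployed (all names here are hypothetical, and the filter is a placeholder for the real processing logic):

```sql
-- A persistent table type, created once (illustrative names)
CREATE TYPE dbo.PairList AS TABLE
(
    row_key   int          NOT NULL PRIMARY KEY,
    row_value varchar(100) NOT NULL
);
GO

-- A UDF can then accept the type as a READONLY parameter
-- and return a table result, as described above
CREATE FUNCTION dbo.ProcessPairs (@pairs dbo.PairList READONLY)
RETURNS TABLE
AS
RETURN
(
    SELECT row_key, row_value
    FROM   @pairs
    WHERE  row_value <> ''   -- placeholder for the real processing
);
GO
```

A caller fills a variable of the type and passes it along: `DECLARE @p dbo.PairList; INSERT INTO @p VALUES (1, 'a'); SELECT * FROM dbo.ProcessPairs(@p);`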
September 13, 2007 at 9:28 pm
JohnG,
What was in the array? Might help me understand why people would need to pass such a structure to a proc or function... thanks.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 14, 2007 at 6:59 am
Can you give me an example of using this technique from code. This is a great advantage in parameter use but I need to use it in Data Access Layer in C#, not really from SQL calling another SP.
September 14, 2007 at 5:16 pm
This is a great advantage in parameter use...
Why? Why do you need to pass a table as a parameter??? Someone please give me an example of an array that would be passed from a GUI to a table in a proc!
--Jeff Moden
Change is inevitable... Change for the better is not.
September 14, 2007 at 9:55 pm
Jeff,
Disclaimer - this post has *nothing* to do with RAC
It really is quite simple in the arena of application development.
The meaning of a procedure may be applicable to more than a single table. Because of the underlying nature of sql, a procedure is usually associated with a particular table. If two tables are candidates for the same outcome (meaning), this usually results in two procedures that do the same thing for two different tables. Now you can only pass a table as a parameter to a procedure if the table is a 'variable'. But tables are *not* variables in sql; they are static structures, ie. values/literals/files. So it is necessary to resort to crude workarounds that simulate the idea of a table as a variable, ie. dynamic sql, xml, or the Katmai idea of a table 'parameter', which is more silliness. The following articles will, I hope, bring the big picture home (assuming they are actually read):
http://beyondsql.blogspot.com/2007/09/dataphor-all-tables-are-typed-variables.html
http://beyondsql.blogspot.com/2007/06/dataphor-13-passing-table-as-parameter.html
http://beyondsql.blogspot.com/2007/08/dataphor-creating-super-function.html
Just because someone is an expert sql programmer does not mean they understand the computer science (types, variables, and values) of what they are doing, nor its implications. And just because the 'computer science' of sql is not discussed in BOL certainly does not mean it doesn't exist. That is the genealogy of sql - a language described as a mile wide with its foundation left an inch deep -
S - structured
P - programming
A - absent
M - methods
September 17, 2007 at 1:23 am
Hi all,
we can pass an ADO.NET DataTable from the FRONT END application to a stored procedure which takes a TABLE TYPE as an argument.
I am putting up a sample application which I will post here shortly. I guess this will answer some of the questions asked in this forum.
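In the meantime, the server-side half of that might look like this sketch (the type, procedure, and table names are all hypothetical); on the ADO.NET side, the DataTable goes into a SqlParameter with SqlDbType.Structured and TypeName set to the table type:

```sql
-- Hypothetical table type matching the DataTable's columns
CREATE TYPE dbo.CustomerList AS TABLE
(
    customer_id int           NOT NULL,
    name        nvarchar(100) NOT NULL
);
GO

-- Stored procedure taking the TABLE TYPE as its argument
CREATE PROCEDURE dbo.ImportCustomers
    @customers dbo.CustomerList READONLY
AS
BEGIN
    -- dbo.Customers is a hypothetical target table
    INSERT INTO dbo.Customers (customer_id, name)
    SELECT customer_id, name
    FROM   @customers;
END
GO
```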
Jacob
September 17, 2007 at 7:19 am
Thanks for taking the time for the explanation, Rog...
But... being able to pass tables to stored procedures, with the understanding that you will have multiple tables with the same structure, also means that you may have a poorly designed database to begin with.
I just don't understand why someone would need to pass a table's worth of information from GUI to stored procedure.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2007 at 7:57 am
Jeff,
In answer to your question regarding my use case for needing to pass a table to a function here is a summarized example:
In our application individual Users or Groups (containing one or more Users) have access permissions to a variety of "things" (we call them "objects"). Each "object" is uniquely identified by a GUID (data type uniqueidentifier). Each "object" has a "type" identifier (enumeration) with a data type of INT.
Consequently, in our stored procedure code we commonly use a table variable like:
DECLARE @v_objects_tbl TABLE (object_id uniqueidentifier, object_type_id int);
We'll populate this sort of table variable as needed in our code, particularly when the set of values (rows) needs to be referenced multiple times by the procedure's logic. I.e., it is commonly used as a cache within the procedure so that, for performance, the query doesn't have to be executed more than once within the scope of the procedure. The number of rows is generally quite small.
Recent use case for the enhancement:
Before deleting a User or Group (via two separate procedures):
1. Determine what objects the about-to-be-deleted User or Group has a specific permission on (array #1).
2. Perform the deletion (via an actual DELETE, which kicks in FK cascade deletes, etc.).
3. Check each entry in array #1 to ensure that at least one other User or Group still has that specific permission (this should be determined by a common function). Those that fail the test are populated into array #2 (which would be returned by this common function).
4. If array #2 is not empty, roll the DELETE back to the Save Point.
5. Return the (user-friendly) details of the items in array #2 as the procedure's result set. Ideally, this result set would be returned by a common procedure accepting array #2 as an input parameter.
True, there are ways of getting around the restriction to utilize common code. However, as I stated in my previous post, performance is the governing factor.
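On SQL Server 2008, the common function described above might be sketched roughly as follows (the type matches the table variable shown earlier; dbo.object_permissions and the permission check are hypothetical placeholders):

```sql
CREATE TYPE dbo.ObjectList AS TABLE
(
    object_id      uniqueidentifier NOT NULL,
    object_type_id int              NOT NULL
);
GO

-- Given array #1, return array #2: the objects on which no remaining
-- User or Group still holds the specific permission
CREATE FUNCTION dbo.GetUnprotectedObjects (@objects dbo.ObjectList READONLY)
RETURNS TABLE
AS
RETURN
(
    SELECT o.object_id, o.object_type_id
    FROM   @objects AS o
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.object_permissions AS p  -- hypothetical
                       WHERE  p.object_id = o.object_id)
);
GO
```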
September 20, 2007 at 3:28 pm
Jeff,
When tables are 'variables' you can create what I called a 'Super Function'. Pass any table of the appropriate 'type' to the function; it's that simple. The example was a function that returned a dense rank, including when the target column of the rank repeats. The function assumed a single primary key, and only the PK and the target column of the rank constituted the table to be passed to the function.
http://beyondsql.blogspot.com/2007/08/dataphor-creating-super-function.html
Well, it's a simple matter to overload the function for tables that have a compound primary key. And you can envision a library of different functions based on table 'type'.
http://beyondsql.blogspot.com/2007/09/dataphor-super-function-ii.html
This concept is applicable universally, independent of any particular database.
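In SQL Server 2008 terms, a rough analogue of that 'super function' might be sketched as follows (names are assumed; DENSE_RANK does the heavy lifting):

```sql
-- The 'type' of table the function accepts: a single PK plus the target column
CREATE TYPE dbo.RankInput AS TABLE
(
    pk     int PRIMARY KEY,
    target int NOT NULL
);
GO

-- Dense rank over any table variable of that type;
-- repeated target values are handled by DENSE_RANK
CREATE FUNCTION dbo.DenseRankOf (@t dbo.RankInput READONLY)
RETURNS TABLE
AS
RETURN
(
    SELECT pk, target,
           DENSE_RANK() OVER (ORDER BY target) AS rank_no
    FROM   @t
);
GO
```

It is only an approximation, of course: the TVP must match one declared type, whereas the Dataphor version ranges over any table of the right shape.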
September 20, 2007 at 8:16 pm
Thanks John and Rog... much appreciated!
You may be using the wrong layer to do database things in. The type of activity that you've both described should probably be done in the database by the database... not in either the presentation or business layers.
Yeah, I know... everyone's doing it... kinda like Lemmings... then they wonder why performance is so bad. I really think you might be doing this type of stuff in the wrong place...
Lemme guess... your objects do "SELECT *" ...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 20, 2007 at 11:32 pm
Jeff,
Your training, your education, your professional work, your use of MS technologies, in fact everything in IT today, still revolves around the historical client-server model in one way or another. The vocabulary of IT reflects this, ie. database as distinct from presentation as distinct from business layer. The idea of 'tiers' is the same abstraction. As a result, not only are there distinct technologies associated with the physical breakdown of the client-server model, but there are logical anomalies with the model, as general rules are never set in stone (what logic goes where). Unfortunately, all the vocabulary is grounded in the model. So are the significant historic problems that result from the attempted communication between disjointed technologies (net->sql), ie. impedance mismatch.
Given this history there is a true lack of vocabulary to describe a system that is intended to do what the client-server model does but does not fit within that model. In the system I blog about there is *only* the database. There is no technology that can be labelled the 'presentation' layer, a layer that is a distinct technology from the database technology. There is no distinct technology that can be called a business layer, distinct from the presentation and database technologies. Presentation, instead of a unique technology, is an *inference*, something derived by the system (database) itself from various objects (tables, constraints, references, views, procedures, etc.). There is no 'client' technology that understands complex types (arrays, lists, etc.) as distinct from the database technology. There is only a 'database' that understands and can operate on complex object types, and hence there is no such thing as a .NET type vs. a database (sql) type, and hence there is no such thing as a channel of 'communication' between two distinct technologies. There is only a single system, a *database*, that accomplishes all the things that client-server technologies do, but in a much different (and much simpler and more efficient) way.
The logical questions of client-server even morph. There is no such question as to 'where' to put logic, ie. in the database or in the business layer, as there is no such distinct entity as a 'business' layer vs. a database layer. As I hope you can imagine, it is no trivial matter to communicate these ideas, let alone illustrate new problem-solving techniques so they make sense to many users.
So when you state:
>The type of activety that you've both described should probably be done
>in the database by the database... not in either the presentation
>or business layers.
You are correct! As there is only 'the' database! 🙂
Of course some may guess that there is an adjective for the database I am describing. And there is: it's a 'relational' database. And we have little experience with a true relational database. But I'm trying 🙂
Hopefully down the road, as I write new material, these ideas will make more sense. I hope you and others will bear with me 🙂
best,
September 21, 2007 at 1:46 pm
Jeff,
A bit of misunderstanding by my use of the term "object" (I knew it would be read wrong). It is not an "object" in the OO sense. It is just a "thing" (document, file, report, template, etc.) that we track in our database. I should have used a column named "thing_id" 😉
And yes, all of the logic that I described above (i.e., the "heavy lifting") is being done within the database using SQL queries (as that is where it needs to be done). The BLL calls a method like User.Delete(params), which in turn calls the DAL (connections, etc.), which calls the stored proc. As described above, the "middle tier" in a 3-tiered architecture is actually the "client" of the former client/server ("fat client"/database) model.
So the ability to pass a "set" of items from a BLL layer to the database layer or between code units within the database layer is a common need. Prior to this new feature in SQL Server 2008, it has been quite kludgy.
September 21, 2007 at 8:15 pm
Heh... at least you said "set"...
Ok, thanks, guys... I've gotta study this a bit.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 10, 2007 at 2:21 am
hey jacob
really good article.
keep it up.
Manish
December 13, 2007 at 8:54 am
In this thread, and others, there has been often-heated discussion around "why would anyone need to pass an array (table) of values to a stored procedure?", along with "why are you passing a delimited list as a string to a stored procedure?".
The discussion has even touched on "bad database design", why do we need such a feature, etc.
Well, here is a very simple case just using Microsoft's own SQL Server 2005 Reporting Services:
In SQL Server 2005 Reporting Services (SSRS), parameters can be Multi-value. I.e., a "pick list". The "pick list" values can be either hard-coded or populated by data from the database.
When the Report is run, the list of values that the user has selected from the "pick list" needs to be provided to the query. This is accomplished in one of two ways:
1. Build, via string concatenation, a dynamic SQL "query from hell" within the SSRS Dataset Query object. Take the "pick list" values and construct an "IN" list (or other sub-query, join, etc.) while building the query. UGH!
2. Call a stored procedure to do the work. The proper solution. However, one still needs to pass the (array) of "pick list" values to the stored procedure. Without having the ability to pass an array of values, the only solution is to pass them as a delimited string. SSRS even provides the function to construct the delimited string:
=Join(Parameters!MyParameter.Value, ",")
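For completeness, the delimited-string workaround in option 2 typically relies on a split function inside the database, along these lines (a sketch; the names are illustrative):

```sql
-- Turn 'a,b,c' into a one-column table the procedure can join against
CREATE FUNCTION dbo.SplitList (@list varchar(max), @delim char(1))
RETURNS @items TABLE (item varchar(100) NOT NULL)
AS
BEGIN
    DECLARE @pos int;
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(@delim, @list);
        IF @pos = 0
        BEGIN
            -- Last (or only) item in the list
            INSERT INTO @items VALUES (@list);
            SET @list = '';
        END
        ELSE
        BEGIN
            -- Peel off the item before the delimiter and continue
            INSERT INTO @items VALUES (LEFT(@list, @pos - 1));
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        END
    END
    RETURN;
END
GO

-- Inside the stored procedure:
-- SELECT ... WHERE some_col IN (SELECT item FROM dbo.SplitList(@PickList, ','));
```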
So, it would appear that Microsoft has finally seen the light and corrected this major functionality defect. I wonder if using their own product internally (insert your favorite industry quote here) is what got the SQL Server team to implement the feature.