January 30, 2008 at 7:06 am
Great Post Steve,
Love the SPAM!
Cheers
Chuck
BTW RAC rocks
February 13, 2009 at 12:17 am
Jeff,
Long time listener, first time caller. 🙂 Actually, I used to post here a bit, but not so much anymore.
Anyway, I've been working with SQL 2008 in a professional environment for over a year now, and I've used these table type parameters. Unfortunately, my first example is rather bad, using it as a sort of name/value MUCK table to pass a bunch of parameters through that should really be ONE parameter value instead. But whatever, I can't change that, and this guarantees the set of parameters will be self-consistent throughout the entire process. Oh yeah, and the code looks good! 🙂
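Roughly speaking, the type in that first case is just a two-column name/value bag, something like this (the type and column names here are made up for illustration, not from the article):
CREATE TYPE ParamBag AS TABLE
(
    ParamName VARCHAR(100) NOT NULL PRIMARY KEY,
    ParamValue VARCHAR(500) NULL
)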
But the second example is probably better. Let's say you have tables to describe a branching and merging hierarchy, like in a flowchart. Then let's say you wanted to create each flowchart object AND its relationships with one procedure call.
But then there's the problem: how would you easily populate the data for an object that might have more than one parent? That is, say your sproc creates an entity and ties it to zero-to-many parents at the time of creation.
The easiest way to describe this entity is to pass the scalar parameters to a procedure, PLUS a table type parameter to hold a list of the object's parents. These can then all be inserted to the database at one time and with one procedure call.
Perhaps there's a better way to do this that I am missing, but that seems to be at least one valid reason. I can imagine other times when you'd want to bundle up multiple children along with a parent as well, and these table type parameters let you do that rather elegantly.
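Just to make that concrete, here's a rough sketch of what I have in mind (the names FlowNode, FlowEdge, NodeParentList, and usp_CreateNode are all made up for the sketch, and it assumes FlowNode has an identity key):
CREATE TYPE NodeParentList AS TABLE (ParentNodeID INT NOT NULL)
GO
CREATE PROCEDURE usp_CreateNode
    @NodeName VARCHAR(100),
    @Parents NodeParentList READONLY
AS
BEGIN
    DECLARE @NewNodeID INT
    -- Create the flowchart object itself
    INSERT INTO FlowNode (NodeName) VALUES (@NodeName)
    SET @NewNodeID = SCOPE_IDENTITY()
    -- Tie it to zero-to-many parents in the same call
    INSERT INTO FlowEdge (ParentNodeID, ChildNodeID)
    SELECT ParentNodeID, @NewNodeID FROM @Parents
END
GO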
I still insist that tables could ALWAYS be "passed" to procedures, though (we do it with these newfangled things called "tables", lol). But this is more like a "private read only instance of a table"; an odd idea in the RDBMS space, a foundational idea in OO space, and I THINK useful everywhere.
Signature is NULL
February 13, 2009 at 12:47 am
To answer Mr. Moden's question about why on earth anyone would need to pass a table to an SP from an application:
I've seen and worked on at least a few applications where a user of the application can add or update several things at a time. For example, they're a manager and one of their employees got hit by a beer truck on the way in to work that day. It's the last day of the month, and the manager needs to approve the vacation time everyone's taken. Now, a good employee would have entered their vacation time as they took it, but several employees, including Mr. Hit-By-A-Beer-Truck, aren't good employees, and they wait until the last day of the month to enter it all at once. Since Mr. HBABT didn't make it in to work on the last day of the month, his vacation time will have to be entered by Ms. Manager (which she has the authority to do). Ms. Manager would love to make it a rule that employees who wait until the last day to enter vacation time will be flogged, but since she can't get upper management to agree to that, she ends up having to enter a bunch of vacation time at the end of each month.
Rather than entering and saving each vacation time entry, she'd like to be able to quickly enter many vacation time records and then save them all at once, so that's what the developers did; they created a screen where she can quickly create a list of vacation time records and then save them all at once.
The developers would like to just pass in a table containing all the vacation time records to be saved, but since they're using SQL Server 2005, they can't. So they loop through the list and save each item individually, all the while feeling that the screen could be a bit snappier if they could make just one call to the SP instead of a billion.
Now, that situation, regardless of how silly it may be, is similar to situations I've found myself in as a developer. Typically it doesn't make much difference as the number of records to be saved is low, but it would be nice to have the option of sending everything in at once. Not having the option to do so, I've not really thought about the pitfalls of such an approach, so it may well be that saving each record individually is the better route.
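For what it's worth, on 2008 the single call might look roughly like this (VacationTimeList, usp_SaveVacationTime, and the column names are made up; this is just a sketch):
CREATE TYPE VacationTimeList AS TABLE
(
    EmployeeID INT NOT NULL,
    VacationDate DATE NOT NULL,
    Hours DECIMAL(4,1) NOT NULL
)
GO
CREATE PROCEDURE usp_SaveVacationTime
    @Entries VacationTimeList READONLY
AS
BEGIN
    -- Every record the manager entered arrives in one call and is saved in one statement
    INSERT INTO VacationTime (EmployeeID, VacationDate, Hours)
    SELECT EmployeeID, VacationDate, Hours FROM @Entries
END
GO
On the .NET side, the whole list would go across as a single DataTable passed as a SqlParameter with SqlDbType.Structured, instead of the loop of individual calls.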
So, what do you think? Does that make sense, or am I completely off the mark?
February 13, 2009 at 7:32 am
Hi Jacob,
I tried the same code.
CREATE TYPE ItemInfo AS TABLE
(
ItemNumber VARCHAR(50),
Qty INT)
and I got the following error:
Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'AS'.
Then I tried the following query
SELECT * FROM SYS.TYPES WHERE is_table_type = 1
I got the below error:
Msg 207, Level 16, State 1, Line 1
Invalid column name 'is_table_type'.
What could be the issue? Did I miss something? I am using SQL Server 2005 only.
Shaiju C.K.
_____________________________________________
One ounce of practice is more important than tonnes of dreams
February 13, 2009 at 7:44 am
Hi,
This article is based on a new feature introduced in SQL Server 2008. It is not available in SQL Server 2005.
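If in doubt, a quick way to confirm the version you are connected to (table types and the is_table_type column in sys.types only exist from SQL Server 2008 onwards):
SELECT SERVERPROPERTY('ProductVersion') AS Version,
       SERVERPROPERTY('Edition') AS Edition
A 2008 instance reports a ProductVersion beginning with 10.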
regards
Jacob
.
February 13, 2009 at 7:51 am
This is pretty neat, but what I've REALLY been looking for, for ages, is the ability to pass an entire .Net dataset to a proc. Of course, I've been kind of doing this by passing an xml string representation of the dataset to the proc via the xml datatype, but this grinds to a crawl with really large datasets.
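The kind of thing I mean, roughly, on the proc side (the procedure, element, and table names here are just made up to illustrate):
CREATE PROCEDURE usp_SaveItemsFromXml
    @Items XML
AS
BEGIN
    -- Shred the XML back into rows; fine for small batches, slow for big ones
    INSERT INTO dbo.Item (ItemNumber, Qty)
    SELECT x.r.value('(ItemNumber)[1]', 'VARCHAR(50)'),
           x.r.value('(Qty)[1]', 'INT')
    FROM @Items.nodes('/Items/Item') AS x(r)
END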
February 13, 2009 at 8:17 am
I for one vote FOR the feature. I have lost count of how many times I have ended up using UNION ALL constructs to turn a stored procedure's parameter list into a table for filtering, updating, or deleting stuff on the server.
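The kind of workaround I mean, sketched with made-up names (@ID1..@ID3 stand in for the procedure's parameters, dbo.SomeTable for the target table):
DECLARE @ID1 INT, @ID2 INT, @ID3 INT
SELECT @ID1 = 1, @ID2 = 5, @ID3 = 9
-- Stitch the scalar parameters into a derived table, then use it as a filter
DELETE t
FROM dbo.SomeTable AS t
JOIN (SELECT @ID1 AS ID
      UNION ALL SELECT @ID2
      UNION ALL SELECT @ID3) AS ids ON t.ID = ids.ID
A table type parameter replaces that whole derived table with a single READONLY parameter.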
Just my $0.02
* Noel
February 13, 2009 at 8:55 am
From my experience in OO programming, the data model doesn't always directly correlate to the object model of the business tier. I know somewhere in there lies the answer, but I can't think of it off the top of my head.
I know this isn't an example of what I mentioned above, but it could have some bearing on the discussion at hand.
Say you have a Client object and Profile object defined where one client can have several profiles. Now typically you would have your stored procedures designed to insert/update/delete each table in the database which would probably be Client and Profile (for example).
Say we are working in .Net using ADO objects. Typically you encapsulate the updates into each object, where the Profile object has insert/update/delete methods to call, as does the Client. In this case the processing would be as follows:
Call the Client update method in .Net. It would check if there are any profiles and, say, it finds 10 that need updating. You would then call the Profile object's update method 10 times. Each call would open a connection, do the update, and close the connection. That is 10 connection opens, 10 updates, 10 closes. It would then finish off by opening a connection, updating the client, and then closing that connection. This ends up being 11 of each, including the client update.
Using a table as an input variable would allow the following. You create a new stored procedure that updates both the client and all the profiles. The procedure would take in the client parameters and a table parameter containing all the profile information. That procedure would then call the initial client update procedure and then update the 10 profiles by calling the profile procedure. On the .Net end you have opened a connection once, called one stored procedure, and closed once.
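Roughly, the combined call might look like this (ProfileList, usp_UpdateClientWithProfiles, and the column names are made up for the sketch; I've done the updates inline here rather than calling the existing procedures, but the shape is the same):
CREATE TYPE ProfileList AS TABLE
(
    ProfileID INT NOT NULL,
    ProfileName VARCHAR(100) NOT NULL
)
GO
CREATE PROCEDURE usp_UpdateClientWithProfiles
    @ClientID INT,
    @ClientName VARCHAR(100),
    @Profiles ProfileList READONLY
AS
BEGIN
    -- One round trip: update the client...
    UPDATE dbo.Client SET ClientName = @ClientName WHERE ClientID = @ClientID
    -- ...and all of its profiles from the table parameter
    UPDATE p
    SET p.ProfileName = tp.ProfileName
    FROM dbo.Profile AS p
    JOIN @Profiles AS tp ON p.ProfileID = tp.ProfileID
END
GO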
February 13, 2009 at 10:22 am
Jeff Moden (9/20/2007)
Thanks John and Rog... much appreciated!
You may be using the wrong layer to do database things in. The type of activity that you've both described should probably be done in the database by the database... ...
I have recently been trying to get back into the SQL Server world after a long break! I thank you Jeff - for your question - I was puzzled by its applicability (is there such a word ???) as well! Thank you Adam and Kevin - your answers helped clarify it for me considerably.
As a suggestion - I think articles like these should not only tell the readers how to use a new feature but also address their usage in real world applications/scenarios! 🙂
**ASCII stupid question, get a stupid ANSI !!!**
February 13, 2009 at 10:34 am
As a suggestion - I think articles like these should not only tell the readers how to use a new feature but also address their usage in real world applications/scenarios! 🙂
Couldn't agree with you more on the real world scenarios when introducing stuff like this. I know I sometimes have issues grasping certain concepts without the real world scenario. Just one solid example and all the lights start flashing. Not sure my example above is the best example though....I will see if I can think of another as I know I have had a need for it.
February 13, 2009 at 11:21 am
I have been passing table names in SQL Svr 2000 for a couple of years now. Perhaps this isn't what you are talking about, but the following stored procedure works.
CREATE PROCEDURE zstp_BomCheck
    @tablename varchar(50)
AS
    DECLARE @Select varchar(1000), @Component varchar(9)
    SET @Component = 'Component'
    SET @Select = 'Select * From ' + @tablename + ' Order By ' + @Component
    EXEC(@Select)
    RETURN
GO
February 13, 2009 at 11:24 am
Gherry Taylor (2/13/2009)
I have been passing table names in SQL Svr 2000 for a couple of years now. Perhaps this isn't what you are talking about, but the following stored procedure works.
CREATE PROCEDURE zstp_BomCheck
    @tablename varchar(50)
AS
    DECLARE @Select varchar(1000), @Component varchar(9)
    SET @Component = 'Component'
    SET @Select = 'Select * From ' + @tablename + ' Order By ' + @Component
    EXEC(@Select)
    RETURN
GO
Maybe you are just showing an example, but just a comment on it: the one thing I don't like about dynamic SQL such as this is the fact that you have to give the user running the stored procedure not only permissions on the stored procedure but on the table as well. It defeats the purpose of trying to keep users from being granted access to tables directly.
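In other words (SomeUser and dbo.SomeBomTable are made-up names here):
-- EXECUTE on the proc covers any static SQL inside it...
GRANT EXECUTE ON dbo.zstp_BomCheck TO SomeUser
-- ...but because of the EXEC(@Select), the caller also needs rights on the table itself
GRANT SELECT ON dbo.SomeBomTable TO SomeUser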
February 13, 2009 at 11:44 am
sushila (2/13/2009)
Jeff Moden (9/20/2007)
Thanks John and Rog... much appreciated!
You may be using the wrong layer to do database things in. The type of activity that you've both described should probably be done in the database by the database... ...
I have recently been trying to get back into the SQL Server world after a long break! I thank you Jeff - for your question - I was puzzled by its applicability (is there such a word ???) as well! Thank you Adam and Kevin - your answers helped clarify it for me considerably.
As a suggestion - I think articles like these should not only tell the readers how to use a new feature but also address their usage in real world applications/scenarios! 🙂
I'm so very happy to see you again, my ol' friend. I don't know about anyone else, but I sure miss the sparkle of your posts. 🙂
Shifting gears... I think that it's something people don't do often enough... asking "Why". I don't ask it to be a smart aleck or give anyone a hard time. I really want to know because, sometimes, there's a better way but you can't know that unless you understand the reason "Why".
One of the good folks that took the time to answer my year-old question (this is a republished article) said someone wanted to enter a bunch of stuff and do a single save. Good idea... but what if the lights go off after entering data for 45 minutes? 😛 The data had better be available when the lights come back on (like a recovered document in Word) or someone is going to hate your product. Either that, or force someone to save every record. Protect the user.
Within SQL Server, I see no reason to pass tables from sproc to sproc, even for the hierarchical example someone else was good enough to take the time to post. I only see such a need between an app and the database. The perceived need to pass a table from sproc to sproc is usually caused by some form of RBAR, as is often the case when someone writes a table valued function (for example).
--Jeff Moden
Change is inevitable... Change for the better is not.
February 13, 2009 at 11:50 am
What I've been doing for years is to create a temporary table, load it up with records, and call a stored procedure that expects the temp table to exist and be populated.
For example:
-----------------
create table Inventory (ItemID int, WhseID int, StockQty decimal(16,8))
Insert Inventory values (1, 1, 0)
Insert Inventory values (2, 1, 0)
Insert Inventory values (3, 1, 0)
create table #ItemsToProcess (ItemID int, WhseID int, Qty decimal(16,8) )
Insert #ItemsToProcess values (1, 1, 10)
Insert #ItemsToProcess values (2, 1, 5)
Insert #ItemsToProcess values (3, 1, 7)
go
Create proc spUpdInventory as
begin
    -- Make sure the temp table exists before doing anything
    If object_id('tempdb..#ItemsToProcess') is null
        return

    -- Add the processed quantities to the matching inventory rows
    update i
    set StockQty = i.StockQty + itp.Qty
    from Inventory i
    join #ItemsToProcess itp on i.ItemID = itp.ItemID and i.WhseID = itp.WhseID
end
go
-- Execute the proc. Get a before and after picture of the Inventory table.
select * from Inventory
EXEC spUpdInventory
select * from Inventory
RESULTS
ItemID WhseID StockQty
-------- -------- ---------------------------------------
1 1 0.000000
2 1 0.000000
3 1 0.000000
(3 row(s) affected)
(3 row(s) affected)
ItemID WhseID StockQty
-------- -------- ---------------------------------------
1 1 10.000000
2 1 5.000000
3 1 7.000000
(3 row(s) affected)
February 14, 2009 at 7:54 am
Mick Dugan (2/13/2009)
This is pretty neat, but what I've REALLY been looking for for ages is the ability to pass an entire .Net dataset to a proc. Of course, I've been kind of doing this by passing an xml string representation of the dataset to the proc via the xml datatype, but this grinds to a crawl with really large datasets.
Try this...
http://www.sqlservercentral.com/articles/T-SQL/63003/
--Jeff Moden
Change is inevitable... Change for the better is not.