June 18, 2008 at 11:27 am
Good afternoon:
I would like to know whether there is a limit on the number of rows a table variable can hold (a table variable used inside a stored procedure to store some data), and if so, what that limit is.
Environment:
SQL Server 2000
Thanks a lot.
June 18, 2008 at 2:46 pm
Same as normal tables, I guess (available storage).
They are only recommended when the expected number of rows is low
(due to limitations like no indexing ...).
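To illustrate the indexing limitation: a table variable can only get indexes through inline PRIMARY KEY or UNIQUE constraints in its declaration; you can't run CREATE INDEX against it afterwards. A minimal sketch (the table and column names are just illustrative):

```sql
-- Table variables accept only inline PRIMARY KEY / UNIQUE constraints;
-- CREATE INDEX cannot be run against one after the fact.
DECLARE @Orders TABLE
(
    OrderID  int PRIMARY KEY,  -- creates an implicit unique index, allowed
    Customer varchar(50)
);

INSERT INTO @Orders (OrderID, Customer) VALUES (1, 'Acme');

-- This line would fail with a syntax error:
-- CREATE INDEX IX_Cust ON @Orders (Customer);

SELECT OrderID, Customer FROM @Orders;
```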
June 18, 2008 at 3:07 pm
June 18, 2008 at 3:53 pm
If the total amount of data in your table variable exceeds 1 page (8k), you're in trouble.
_____________
Code for TallyGenerator
June 20, 2008 at 4:04 pm
Why would 8k be a trouble?
June 20, 2008 at 10:08 pm
Table variables are NOT memory-only. Of course, don't take my (or anyone else's) word for it... read the following article about how similar table variables and temp tables are, in that they both use memory and spill into TempDB if they get too big...
http://support.microsoft.com/default.aspx?scid=kb;en-us;305977&Product=sql2k
There are also some serious drawbacks to using table variables... those are covered in the article as well. READ the article! Especially Q3/A3 and Q4/A4.
So, to answer the original question, like Temp Tables or any other table, Table Variables can be made to consume the entire hard-disk...
...and since Table Variables cannot be made to use statistics, you might wanna do like Sergiy said.
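One quick way to see for yourself that neither object is "memory only" is to check tempdb's system catalog right after creating them; both show up there. A rough sketch (object names are illustrative, and the time filter is just to narrow the output):

```sql
-- Both a temp table and a table variable are backed by objects in
-- tempdb, which is one way to see that neither is memory-only.
DECLARE @t TABLE (i int);
CREATE TABLE #t (i int);

-- List user objects recently created in tempdb (SQL 2000 catalog view).
SELECT name, crdate
FROM tempdb..sysobjects
WHERE type = 'U'
  AND crdate >= DATEADD(minute, -1, GETDATE());

DROP TABLE #t;
```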
--Jeff Moden
Change is inevitable... Change for the better is not.
June 23, 2008 at 8:43 am
Thanks for the reply Jeff.
Reading the article helped me get some things clear, but I'm still stuck on the phrase "significant volume of data" in the statement "In general, you use table variables whenever possible except when there is a significant volume of data and there..." Since no numbers are given there, I still have my doubts.
Thanks again.
June 23, 2008 at 9:01 am
Yeah... I know... Microsoft is pretty much non-committal when it comes to stuff like that, because the amount of memory and the I/O system will make it vary quite a bit from machine to machine.
I also believe the recommendation to use table variables as much as possible is wrong... like everything else, it depends. I've seen cases where the use of table variables was a "god-send" and cases where it just crushed performance... and, for me, it's usually either crushed performance or just been as good as a temp table. For me, it's not really been worth taking the chance, or taking the time to figure it out every time I think I could use one... about the only place I use table variables is in user-defined functions, and the only reason I use them there is that a temp table won't work.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 23, 2008 at 9:51 am
The main advantage I've seen for table vars is that they don't cause a stored proc to recompile.
As for the number of rows, it depends what you're going to do with the variable. If it's just there to have logging data written to it, with a select * from @... at the end of the proc, it'll probably be OK no matter how many rows.
If it's going to be joined into other tables in a query, then I'd say no more than 100 rows, due to the cardinality estimates that will come about from its lack of statistics.
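A sketch of the two patterns, with hypothetical table names: for a handful of rows the table variable join is fine, but since the optimizer has no statistics on it (it typically assumes a very low row count), a temp table is the safer choice once the row count grows.

```sql
-- Small set: a table variable join is usually fine.
DECLARE @ids TABLE (CustomerID int PRIMARY KEY);
INSERT INTO @ids
SELECT CustomerID FROM dbo.Customers WHERE Region = 'West';

SELECT o.OrderID
FROM dbo.Orders o
JOIN @ids i ON i.CustomerID = o.CustomerID;

-- Larger set: a temp table gets real statistics, so the optimizer's
-- cardinality estimates (and therefore the join plan) are much better.
SELECT CustomerID INTO #ids
FROM dbo.Customers WHERE Region = 'West';

SELECT o.OrderID
FROM dbo.Orders o
JOIN #ids i ON i.CustomerID = o.CustomerID;

DROP TABLE #ids;
```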
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 23, 2008 at 10:15 am
GilaMonster (6/23/2008)
The main advantage I've seen for table vars is that they don't cause a stored proc to recompile.
That's pretty important for GUI code... which I normally don't write. For the batch code that I write, a recompile is likely going to happen anyway just because of the changes that have occurred to the other data.
Also, you can help prevent recompiles even on the GUI side of the house by declaring the temp table well before you do any DML. BOL recommends not mixing DDL with DML because it can cause a recompile. So, if you declare the temp table up front with all its columns and so on, you might not get recompiles...
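A sketch of that layout inside a hypothetical proc: all the DDL at the top, in its final shape, then all the DML afterwards with no DDL mixed in between.

```sql
CREATE PROCEDURE dbo.usp_Demo
AS
BEGIN
    -- All DDL first: create the temp table with its full shape up front...
    CREATE TABLE #work
    (
        OrderID int PRIMARY KEY,
        Total   money
    );

    -- ...then all the DML, with no further DDL interleaved, so the
    -- DDL/DML mixing itself doesn't trigger extra recompiles.
    INSERT INTO #work (OrderID, Total)
    SELECT OrderID, SUM(Amount)
    FROM dbo.OrderLines
    GROUP BY OrderID;

    SELECT OrderID, Total FROM #work;
END
```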
--Jeff Moden
Change is inevitable... Change for the better is not.
June 23, 2008 at 10:19 am
Yup, though you can still get a stats-based recompile from the temp table, sometimes even more than once.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 23, 2008 at 10:29 am
GilaMonster (6/23/2008)
Yup, though you can still get a stats-based recompile from the temp table, sometimes even more than once.
So... wouldn't that be a Catch-22? Optimizer says "I need a stats-based recompile to do this quickly" and the same code with a table variable would just run slow?
--Jeff Moden
Change is inevitable... Change for the better is not.
June 23, 2008 at 10:35 am
Depends. I've seen some cases where the slightly less accurate plan was better than a recompile of the entire proc (SQL 2000). On 2005, with statement-level recompiles, it would probably be better to accept the recompile.
The threshold for a temp table to get a stats update is very small if it starts empty; can't recall the exact value now.
It's one of those 'test carefully both ways' things
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 23, 2008 at 10:40 am
Gail - in 2005, if you knew it was going to recompile - would you FORCE the recompile on a given statement?
It is theoretically supposed to speed things up by signifying not to even bother caching. I just haven't played with this enough to know whether it amounts to anything or not.
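For reference, the 2005 statement-level form looks like this (a sketch; the tables are hypothetical). The hint recompiles only the one statement on each execution, while the rest of the proc's cached plan stays put:

```sql
DECLARE @ids TABLE (CustomerID int PRIMARY KEY);
-- ...populate @ids...

-- Recompile just this statement every run (SQL 2005 and later);
-- the fresh compile can see the table variable's actual row count.
SELECT o.OrderID
FROM dbo.Orders o
JOIN @ids i ON i.CustomerID = o.CustomerID
OPTION (RECOMPILE);
```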
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
June 23, 2008 at 8:11 pm
Jeff Moden (6/23/2008)
...So... wouldn't that be a Catch-22? ...
More of a "Cache-22" actually ...
-- RBarryYoung, (302)375-0451 blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."