March 5, 2011 at 6:26 pm
At what point should I decide not to use a table variable and go with a temp table instead?
March 6, 2011 at 12:42 am
Personal rule of thumb, which is subject to any number of exceptions:
I'll swap to a #tmp table when I want two or more indexes on it, or at over 2000 rows. The reason is that above that point you'll start to see a difference from the statistics, which affect estimated row counts.
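A minimal sketch of the "two or more indexes" case (table and column names here are illustrative, not from the thread): a #temp table can take separate CREATE INDEX statements after creation, which a table variable cannot.

```sql
-- A #temp table supports multiple indexes added after creation:
CREATE TABLE #Work
(
    OrderID   INT  NOT NULL PRIMARY KEY,   -- clustered index by default
    CustID    INT  NOT NULL,
    OrderDate DATE NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Work_Cust ON #Work (CustID);
CREATE NONCLUSTERED INDEX IX_Work_Date ON #Work (OrderDate);

DROP TABLE #Work;
```

A table variable would only get the indexes it can declare inline as PRIMARY KEY or UNIQUE constraints, so once you need a second or third supporting index, the #temp table wins.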
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
March 6, 2011 at 1:52 am
The best article I came across so far: http://www.sqlservercentral.com/articles/Temporary+Tables/66720/
March 6, 2011 at 5:14 am
There just isn't a single right answer. In general terms, I'd say less than 1000 rows is something to shoot for, depending on how you're using the variable. However, I had a situation once where we were loading about 5k rows into a table variable because we found that the recompile savings more than offset the scan that occurred when we referenced the table. It really depends on your situation and the processes involved. Test, test, test.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 6, 2011 at 7:44 am
The ebook SQL Server Statistics by Holger Schmeling (free download at Red Gate: http://www.red-gate.com/our-company/about/book-store/ ) states about 100 rows (page 24).
Of course, as always ... test it, test it, test it
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
March 6, 2011 at 10:10 am
I've found it depends on the server's subsystems. On some servers, the benefit of using a table variable over the IO required to create a temporary table is quite significant. In many cases I've found myself using them with row counts over 5000. There are a number of other factors as well, such as how you intend to use it.
As previously stated, test it and tune it!
March 6, 2011 at 11:47 am
Both table variables and temp tables are created in tempdb and are kept in memory when possible. But:
Temp tables have statistics, so query plans are better (sometimes much better).
This can be important even for relatively small tables (1000+ rows) if you have a complex query.
Temp tables can be processed in parallel. Table variables are not.
This can boost your query if you split your tempdb across many physical disks and you have many cores/processors. But it only makes sense for 50K+ rows.
Creating an index on a temp table doesn't allow the stored procedure to be precompiled.
... But table variables don't allow creating nonclustered indexes... And index creation takes much longer than procedure compilation. So this is not a deciding factor, just a note for the performance engineer.
Table variables are not affected by transactions and can't be rolled back.
In performance terms this means table variables are potentially faster, but in practice I've never noticed a difference.
And... a temp table can be created with the nice SELECT * INTO #tmp FROM aTable syntax. That's not server performance, but my performance ))
MS recommends using table variables when you see no noticeable benefit from temp tables. I.e., they say: try both and choose the best.
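The syntax convenience mentioned above can be sketched like this (dbo.aTable is from the post; the @tmp column list is hypothetical, since a table variable has to be declared column by column):

```sql
-- Temp table: structure is inferred from the query in one statement.
SELECT * INTO #tmp FROM dbo.aTable;

-- Table variable: every column must be declared up front, then loaded.
DECLARE @tmp TABLE (id INT, name NVARCHAR(50));   -- hypothetical columns
INSERT INTO @tmp (id, name)
SELECT id, name FROM dbo.aTable;
```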
Anton Burtsev
March 6, 2011 at 5:45 pm
LutzM (3/6/2011)
The best article I came across so far: http://www.sqlservercentral.com/articles/Temporary+Tables/66720/
:blush:
Thanks Lutz!
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
March 6, 2011 at 5:49 pm
MysteryJimbo (3/6/2011)
the benefit of using a table variable over the IO required to create a temporary table is quite significant.
Really? You seem to be implying that there is no IO in the creation of a table variable - and thus that a table variable is memory only. You might want to see MVP Gail Shaw's blog proving that table variables are written to disk: http://sqlinthewild.co.za/index.php/2010/10/12/a-trio-of-table-variables/
Also, see Q4 in this MS KB article.
March 6, 2011 at 5:57 pm
burtsev (3/6/2011)
Temp tables can be processed in parallel. Table variables are not.
SELECT statements on table variables CAN be processed in parallel. Insert, update, and delete statements cannot.
... But table variables don't allow creating nonclustered indexes...
You CAN create clustered or non-clustered indexes on table variables. They have to be in the form of either Primary Key or Unique constraints, declared in the same statement as the table variable.
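To illustrate, a minimal sketch of indexes declared as constraints on a table variable (column names are illustrative):

```sql
-- Indexes on a table variable must be declared inline as constraints:
DECLARE @t TABLE
(
    id   INT     NOT NULL PRIMARY KEY CLUSTERED,   -- clustered index via PK constraint
    code CHAR(4) NOT NULL UNIQUE NONCLUSTERED      -- nonclustered index via UNIQUE constraint
);
```

Once the variable is declared, no further CREATE INDEX statements can be applied to it.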
Table variables are not affected by transactions and can't be rolled back.
Table variables are not affected by explicit transactions. Implicit transactions (like those found on an insert statement) can and will be rolled back if the statement cannot succeed. See this excellent blog post by MVP Gail Shaw where she shows this in action: http://sqlinthewild.co.za/index.php/2010/10/12/a-trio-of-table-variables/
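A quick sketch of both behaviors side by side:

```sql
DECLARE @t TABLE (id INT PRIMARY KEY);

-- Explicit transaction: the rollback does NOT empty the table variable.
BEGIN TRAN;
    INSERT INTO @t VALUES (1);
ROLLBACK;

SELECT COUNT(*) FROM @t;   -- 1: the insert survived the explicit rollback

-- Statement-level (implicit) rollback: a failing statement undoes itself.
BEGIN TRY
    INSERT INTO @t VALUES (2), (2);   -- duplicate key: whole statement fails
END TRY
BEGIN CATCH
END CATCH;

SELECT COUNT(*) FROM @t;   -- still 1: neither row from the failed insert remains
```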
March 6, 2011 at 6:06 pm
Regarding the number of rows in a table variable before switching to a temporary table: I have seen issues with table variables with as few as 48 rows. Simply switching to a temporary table took a query from running in > 3 minutes down to < 1 second.
I think that this is a subjective call, and must be determined by testing. One of the things that I've seen affect it is how the table variable is actually used: if you are using it in a JOIN condition, or using a WHERE clause against it, then the performance can really drop off.
March 7, 2011 at 2:29 am
Personally, I only resort to table variables during proofs of concept and avoid them in production code, thanks to a previous boss who was a bit of a zealot on the matter.
If I have to use a table variable in production code, then I use a rule of thumb of 1000 rows or 64KB (8 pages).
However, with the introduction of CTEs in 2005, I have found less and less justification to use them, unless it's to hold preliminary results.
_________________________________________________________________________
SSC Guide to Posting and Best Practices
March 7, 2011 at 6:53 am
There are VERY few scenarios where you want to consider table variables over temp tables: VERY high volume executions where temp-table-driven recompiles are causing issues, logging situations (table vars keep data after rollback), where you KNOW you ALWAYS want a nested loop join and/or index seek/bookmark lookup.
You can get HORRIBLE plans with a SINGLE-ROW table variable because there are no stats on that single value. And when that single value is joined into a table where one value accounts for 50% of the rows and all the rest are onesies/twosies, you are GUARANTEED to get bad plans with a table variable.
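One hedged sketch of a partial workaround (dbo.Orders and its CustomerID skew are hypothetical): OPTION (RECOMPILE) lets the optimizer see the table variable's actual row count at execution time, though it still gets no column histogram, so the skew problem described above can remain.

```sql
DECLARE @ids TABLE (id INT PRIMARY KEY);
INSERT INTO @ids VALUES (42);

SELECT o.*
FROM dbo.Orders AS o                  -- hypothetical, heavily skewed table
JOIN @ids AS i
    ON i.id = o.CustomerID
OPTION (RECOMPILE);                   -- optimizer sees the real cardinality of @ids,
                                      -- but still has no stats on the values inside it
```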
Oh, and definitely ignore Joe Celko. Overly complex queries from excessive use of CTEs/views/derived tables carry increasing probabilities of BAD PLANS in SQL Server. Some of my largest performance increases for single statements come from breaking them up into smaller sets and putting interim output into temp tables. And CTEs get RECOMPUTED per execution in many cases - bad news for performance as well.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
Viewing 13 posts - 1 through 12 (of 12 total)