November 29, 2005 at 11:52 am
At what point does a table variable start to impede performance, with any and all gains lost and a temporary table the much better choice? I know in our data warehouse/OLAP environment I NEVER use a table variable, but I have seen it in use by others. Just curious what everyone thinks about this.
November 30, 2005 at 4:00 am
The answer is: 'It depends'.
If your table variable gets too big to hold in memory, SQL will automatically create a temporary table as backing storage. If your SQL has no WHERE clause and therefore always does a table scan, there will probably be no impact from having a large number of rows in your table variable.
If your SQL has a WHERE clause, it could possibly get better performance by exploiting indexes. You can create indexes on a temporary table but not on a TV (apart from the PK). However, SQL will not use an index if the number of pages in the table is so low that a table scan will not harm performance. Therefore the breakpoint comes when the table grows above about 500KB, when SQL would normally determine that index access is faster than a table scan. To translate this to an approximate row count, divide 500KB by your row length. If you know you will always have fewer rows than this number then a TV will not harm performance; otherwise consider using a temporary table.
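To make this concrete, here is a minimal sketch (table and column names are made up) of what each option allows, plus the row-count arithmetic:

-- Temporary table: secondary indexes can be added after creation
CREATE TABLE #orders (order_id INT PRIMARY KEY, cust_id INT, amount MONEY);
CREATE NONCLUSTERED INDEX ix_orders_cust ON #orders (cust_id);

-- Table variable: only inline constraints such as the PK
DECLARE @orders TABLE (order_id INT PRIMARY KEY, cust_id INT, amount MONEY);

-- Approximate breakpoint: 500KB divided by row length,
-- e.g. a 100-byte row gives 512000 / 100 = roughly 5,000 rows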
December 1, 2005 at 7:51 am
Hello,
My recollection is that if the recordset is going to be large then a temp table may outperform a table variable. If performance is an issue, consider comparing the two.
Best wishes,
Barry O'Connell
December 1, 2005 at 8:06 am
This is the best info I've received. In our data warehousing environment, we are always working with result sets in the tens of thousands of rows, sometimes in the millions. I've been examining the stored procedures of an application sold to us by a vendor and have been questioning their use of TVs given the reporting volume we have. Thanks again.
December 20, 2019 at 11:23 am
This *was* good advice back in 2005, but now on SQL Server 2012 and above, table variables can be indexed, are dynamically indexed by default and fully accessible by the query optimiser etc. We also commonly have RAM for our SQL servers in the hundreds of gigabytes, allowing virtually any size of Table Variable to be stored in RAM.
I just finished running a stored proc that loads 5.5 million rows into a table variable as part of a cursor loop, then writes all of those rows to a table on dbo, and it ran in 12 minutes vs 30 minutes with a temp table.
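For anyone who hasn't seen it, the inline INDEX clause looks like this (a minimal sketch with made-up names; the inline syntax itself arrived in SQL Server 2014):

DECLARE @big TABLE
(
    id      INT PRIMARY KEY,
    cust_id INT,
    -- secondary index declared inline, no CREATE INDEX needed
    INDEX ix_cust NONCLUSTERED (cust_id)
);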
December 20, 2019 at 11:57 am
BaronVonData wrote: This *was* good advice back in 2005, but now on SQL Server 2012 and above, table variables can be indexed, are dynamically indexed by default and fully accessible by the query optimiser etc.
I'm not sure that's completely correct - what do you mean by "dynamically indexed by default", for example? The most important thing, in my opinion, is that table variables don't have statistics on them. Have a look here for some straight-from-the-horse's-mouth information.
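As a quick illustration (a rough sketch, with made-up names), compare the estimated versus actual row counts on the table variable scan when you run this with the actual execution plan enabled:

DECLARE @t TABLE (id INT PRIMARY KEY);

INSERT INTO @t (id)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- With no statistics on @t, older versions estimate 1 row here regardless
-- of the real count; SQL Server 2019's deferred compilation fixes the
-- table cardinality but there are still no column statistics.
SELECT COUNT(*) FROM @t WHERE id > 10;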
BaronVonData wrote: I just finished running a stored proc that loads 5.5 million rows into a table variable as part of a cursor loop, then writes all of those rows to a table on dbo, and it ran in 12 minutes vs 30 minutes with a temp table.
Are you in a position to post some code that we can try, please?
John
December 20, 2019 at 12:12 pm
BaronVonData wrote: I just finished running a stored proc that loads 5.5 million rows into a table variable as part of a cursor loop, then writes all of those rows to a table on dbo, and it ran in 12 minutes vs 30 minutes with a temp table.
Are you in a position to post some code that we can try, please?
John
I'm also very curious to know whether it's true.
An eye-opening reply for me! Is it correct that table variables are stored in memory? I assumed they also use tempdb the way temporary tables do.
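For what it's worth, one way to check (a rough sketch) is to look in tempdb itself, where declaring a table variable creates a system-named object:

DECLARE @t TABLE (id INT);

-- The table variable shows up in tempdb with a system-generated name
-- (on a busy server other recent objects may appear too)
SELECT TOP (1) name, create_date
FROM tempdb.sys.tables
ORDER BY create_date DESC;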
December 21, 2019 at 10:55 pm
BaronVonData wrote: This *was* good advice back in 2005, but now on SQL Server 2012 and above, table variables can be indexed, are dynamically indexed by default and fully accessible by the query optimiser etc. We also commonly have RAM for our SQL servers in the hundreds of gigabytes, allowing virtually any size of Table Variable to be stored in RAM.
I just finished running a stored proc that loads 5.5 million rows into a table variable as part of a cursor loop, then writes all of those rows to a table on dbo, and it ran in 12 minutes vs 30 minutes with a temp table.
So you think table variables perform better than temporary tables?! I think we all need to see some code that shows this.
BaronVonData, can you provide a script that demonstrates this?