December 7, 2011 at 9:45 am
SqlSpider... (12/6/2011)
Hi guys, can anyone please let me know which one is better for memory utilization in SQL Server 2005?
Thanks in advance.
How are you looking to "utilize" memory?
Are you wanting something to take up less memory, or to keep things in memory longer so that the next time the query is executed you don't need to go to disk? I'm looking for clarification on "better", as the answer depends on what you're after.
Using one over the other could produce a different execution plan. Your number of reads or writes could vary greatly depending on which one you use, for what purpose, for what query, and for which result set. If you are unsure which to use, the best bet is to compare both side by side with statistics. Use the command SET STATISTICS IO ON, and in the query editor in SSMS turn on the actual execution plan as well as client statistics (two buttons on the toolbar). You can also set up a trace to see the difference in the results. Also, play around with indexes on the temporary tables and with RECOMPILE query hints, so you can see behaviour both with data and plans cached and without. Best not to do that on a production box, and when you do get to production the statistics might differ, so you may get a different result there.
The best way to know is to test both side by side and monitor the results.
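As a rough sketch (dbo.Orders here is just a stand-in for one of your own tables), something like this lets you compare the reads reported in the Messages tab for each approach:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Temp table version
CREATE TABLE #Orders (OrderID int, Amount money);
INSERT INTO #Orders (OrderID, Amount)
SELECT OrderID, Amount FROM dbo.Orders;
SELECT SUM(Amount) FROM #Orders;
DROP TABLE #Orders;

-- Table variable version
DECLARE @Orders TABLE (OrderID int, Amount money);
INSERT INTO @Orders (OrderID, Amount)
SELECT OrderID, Amount FROM dbo.Orders;
SELECT SUM(Amount) FROM @Orders;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;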
There's some good info on temp table myths earlier in this thread to look through.
December 7, 2011 at 9:52 am
matt.newman (12/7/2011)
SqlSpider... (12/6/2011)
Hi guys, can anyone please let me know which one is better for memory utilization in SQL Server 2005?
[...]
The best way to know is to test both side by side and monitor the results.
How can that possibly matter in any way?
If you put 10,000 rows in either type of table it's going to take the same amount of space. The server allocates & uses the RAM, not the query.
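Both live in tempdb either way. If you want to watch the allocation yourself, here's a sketch using a DMV available from 2005 on:

-- Pages allocated in tempdb by this session's user objects
-- (#temp tables and table variables both count here).
SELECT user_objects_alloc_page_count,
       user_objects_dealloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;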
December 7, 2011 at 10:06 am
A table variable is not always treated the same way for the rest of the query. Whether the object actually holds 10K rows or 2, the rest of the execution plan built around that estimate might make for fewer reads or more. If the answer only takes into account the storage in cache for the object's result set itself, that is one thing; but if you are not consuming the result set in the most optimal way, there could be additional memory used to pull data from it. Temp tables can have indexes on them, which can help prevent a possible index spool or table spool or whatever else the execution plan thinks it needs to join the data (a lot of unknowns).
My reply was based on "test side by side - here are a few things that can help you see the results and determine which to use for your case"; it wasn't about the objects themselves.
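If you want to see the estimate difference I'm describing, turn on the actual execution plan and compare the "Estimated Number of Rows" for something like this (master..spt_values is just a convenient row source):

-- Table variable: the plan is compiled before @t is filled,
-- so the optimizer typically assumes very few rows.
DECLARE @t TABLE (n int);
INSERT INTO @t (n)
SELECT number FROM master..spt_values WHERE type = 'P';
SELECT n FROM @t ORDER BY n;

-- Temp table: statistics get created, so the estimate
-- reflects the real row count.
CREATE TABLE #t (n int);
INSERT INTO #t (n)
SELECT number FROM master..spt_values WHERE type = 'P';
SELECT n FROM #t ORDER BY n;
DROP TABLE #t;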
December 7, 2011 at 10:11 am
matt.newman (12/7/2011)
A table variable is not always treated the same way for the rest of the query. [...] My reply was based on "test side by side"; it wasn't about the objects themselves.
Ok, I guess you skipped 1-2 posts. This had been talked about already.
Anywho, @t can have indexes on it as well (declared as PRIMARY KEY or UNIQUE constraints). No real difference from #t there.
Of course the real cost is what you do with it and how you do it.
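For example (the one practical wrinkle on 2005 being that a table variable's indexes have to come from inline constraints, while #t can also take CREATE INDEX after the fact):

DECLARE @t TABLE
(
    id   int NOT NULL PRIMARY KEY,  -- clustered index via the constraint
    code char(5) UNIQUE             -- nonclustered index via the constraint
);

CREATE TABLE #t (id int NOT NULL, code char(5) NULL);
CREATE CLUSTERED INDEX IX_t_id ON #t (id);
CREATE NONCLUSTERED INDEX IX_t_code ON #t (code);
DROP TABLE #t;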
December 7, 2011 at 10:17 am
Sorry, must have missed it. Lots of stuff to read through! Probably glazed over a few.
December 7, 2011 at 10:19 am
matt.newman (12/7/2011)
Sorry, must have missed it. Lots of stuff to read through! Probably glazed over a few.
NP, so did I 😀
December 8, 2011 at 2:34 am
Would anybody like to outline the pros and cons of using a real table instead of a temporary table or a table variable? Surely SQL is better at managing these - isn't that "what SQL does"?
There will obviously be a problem with cross-process interference (which can be overcome with a SPID column) and with blocking / deadlocking (which can presumably be minimised with indexes), but there will be fewer problems arising from recompilations and tempdb-related bottlenecks, won't there?
December 8, 2011 at 3:04 am
julian.fletcher (12/8/2011)
Would anybody like to outline the pros and cons of using a real table instead of a temporary table or a table variable? Surely SQL is better at managing these - isn't that "what SQL does"?
Temp tables and table variables are real tables, in the TempDB database. There are few benefits to a real, permanent table and a lot of disadvantages (more logging, concurrency, higher IO impact if it's in a normal DB, permissions, locking, etc.).
Temp tables don't always cause recompiles any longer, whereas creating a permanent table will always require recompiles. TempDB bottlenecks can usually be fixed with more files and/or more drives and, in the worst cases, trace flags.
If a user database is subject to high frequencies of table drop/create it can get all the same bottlenecks without the optimisations that TempDB has.
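If you can touch the server, the usual first step looks something like this; the logical name, path, and sizes here are made up, and the convention is to size all the tempdb data files equally:

ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,              -- hypothetical logical name
    FILENAME = 'T:\tempdb2.ndf',  -- hypothetical path, ideally a separate drive
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);

The trace flag usually meant by "worst cases" is -T1118 (uniform extent allocation, which reduces allocation-page contention), set as a startup parameter.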
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 8, 2011 at 4:01 am
Gail,
Thanks for your comments. When I said "real table", I meant a permanent one, not one created as and when required in the way that you would with a temporary table (or a table variable).
If we can't avoid tempdb bottlenecks by getting more files and / or drives (because we don't own or have any say about the servers), doesn't this tip the balance in favour of permanent real tables? Or is there a better alternative? (Fingers crossed at this point!)
December 8, 2011 at 4:27 am
julian.fletcher (12/8/2011)
Gail,Thanks for your comments. When I said "real table", I meant a permanent one, not one created as and when required in the way that you would with a temporary table (or a table variable).
If we can't avoid tempdb bottlenecks by getting more files and / or drives (because we don't own or have any say about the servers), doesn't this tip the balance in favour of permanent real tables? Or is there a better alternative? (Fingers crossed at this point!)
Permanent tables (largely) avoid the tempdb issues associated with the various kinds of temporary tables, so if you have major tempdb issues that might be a way to resolve them when you can't bring the standard solutions to the table. But, as Gail pointed out, they come with their own set of issues around locking and other things. You also have to take into account a whole different way of developing. When I load a temp table, it's mine; I don't have to put some sort of identifier in it to know which data is mine, it's all mine. With a permanent table that you & I share, we have to mark my data as separate from yours. This means additional indexing (possibly) in order to make the searches for my data or your data faster. With a temp table, I can close my transaction and let the system clean up the data later. With a permanent table, I have to make cleanup part of some explicit transaction at some point.
There are just tons & tons of issues that are going to come up around the use of a permanent table for temporary data. That's why the temporary constructs were created.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 8, 2011 at 4:50 am
julian.fletcher (12/8/2011)
If we can't avoid tempdb bottlenecks by getting more files and / or drives (because we don't own or have any say about the servers), doesn't this tip the balance in favour of permanent real tables?
In the sense of 'nothing else can be done, this is the last resort', kinda... There are all sorts of problems you'll run into that temp tables don't have. I worked for a company that had a coding standard of 'use permanent tables to avoid the overhead of temp tables'. It wasn't pretty, and we spent so much time fixing stuff that just wouldn't have been an issue if we'd been using temp tables.
Or is there a better alternative? (Fingers crossed at this point!)
Optimise code, reduce temp table usage (make sure it's absolutely necessary)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 8, 2011 at 4:52 am
Grant Fritchey (12/8/2011)
With a permanent table, I have to make cleanup part of some explicit transaction at some point.
And have a process to handle cases where, due to bad code or errors, the data didn't get cleaned up (and if you're using session_id as the identifier, that can be huge fun)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 8, 2011 at 5:04 am
GilaMonster (12/8/2011)
Grant Fritchey (12/8/2011)
With a permanent table, I have to make cleanup part of some explicit transaction at some point. And have a process to handle cases where, due to bad code or errors, the data didn't get cleaned up (and if you're using session_id as the identifier, that can be huge fun)
Oh yeah. I was sitting here thinking about Agent jobs I'd created in the past for just this sort of thing <shudder>
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 8, 2011 at 5:20 am
The reason I'm interested is that I'm working on a project which was initially written in SQL 2000. Back then, our use of temporary tables seemed to be causing a lot of recompilation problems, so we changed some of them to "permanent tables with a SPID column", which processes used in this way:
1) Delete any records where SPID = @@SPID.
2) Fill (including @@SPID).
3) Use (limiting to SPID = @@SPID).
4) Delete any records where SPID = @@SPID.
But of course, we now get some deadlocking between concurrent processes. I did recently run a test in which I reverted many of these permanent tables back to temporary, but I then got massive bottleneck problems in tempdb. (Our application is hosted by customers with their own SQL servers over which we have little or no control other than the settings for "our" database, and sometimes not even those! Asking for more disks or alterations to tempdb is usually futile.) However, spurred on by your comments, I'll resurrect the modified code / database and give it another go.
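In sketch form (with made-up table and column names), the pattern is:

CREATE TABLE dbo.WorkArea
(
    Spid   int   NOT NULL,
    ItemID int   NOT NULL,
    Amount money NULL
);
CREATE CLUSTERED INDEX IX_WorkArea_Spid ON dbo.WorkArea (Spid);

-- 1) Delete any leftovers from a previous run on this connection.
DELETE dbo.WorkArea WHERE Spid = @@SPID;

-- 2) Fill, tagging every row with the current connection.
INSERT INTO dbo.WorkArea (Spid, ItemID, Amount)
SELECT @@SPID, ItemID, Amount FROM dbo.Items;   -- dbo.Items is a stand-in

-- 3) Use, always filtering to this connection's rows.
SELECT SUM(Amount) FROM dbo.WorkArea WHERE Spid = @@SPID;

-- 4) Clean up.
DELETE dbo.WorkArea WHERE Spid = @@SPID;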
December 8, 2011 at 2:48 pm
When you reference a table variable in a stored procedure, SQL Server operates on the assumption that it will contain at most a small number of rows, and will generally re-use a cached execution plan. This is good for a stored procedure which is called frequently and really does make only lightweight use of the table variable.
However, referencing a temporary table will generally result in more statement recompiles, as SQL Server attempts to optimize the plan piece by piece based on how many rows the table is actually loaded with. This works out better for stored procedures that work with heavy-duty rowsets, like reporting or ETL.
Table variables are automatically deallocated at the end of the batch, but temporary tables stick around for the duration of the session. If I have a script (not a stored procedure) that I'm using for reporting purposes or large data loads, then having that temporary table stay allocated is an advantage, because if the batch aborts at some point, I can fix whatever is broken and then resume execution of the script from there. Of course that's only relevant for ad-hoc stuff in development and QA.
Really, whatever it is you're planning to do, test it both ways, with a temp table and with a table variable. Use Profiler to gauge its performance and memory usage.
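A quick sketch of that scoping difference, run in SSMS where GO ends a batch:

-- Table variable: gone when the batch ends.
DECLARE @t TABLE (n int);
INSERT INTO @t (n) VALUES (1);
GO
-- SELECT n FROM @t;   -- would fail here: @t no longer exists

-- Temp table: survives across batches until the session ends
-- (or it is dropped explicitly).
CREATE TABLE #t (n int);
INSERT INTO #t (n) VALUES (1);
GO
SELECT n FROM #t;      -- still there in the new batch
DROP TABLE #t;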
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho