December 16, 2010 at 11:20 am
WayneS (12/16/2010)
I'm sorry to everyone for messing this up. However, I still hope that you have learned from this QotD!
Well, I did learn from this question, so mission accomplished!
And one thing I forgot to mention in my previous post:
extra kudos for the nice layout of your SQL code. 🙂 I appreciate it when people put effort into making their code readable.
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
December 16, 2010 at 11:25 am
WayneS (12/16/2010)
Okay, I messed up. I posted the wrong code (the posted code was run to get the error message for the QotD), and then I gave the explanation based on that wrong code... 🙁 Steve, please give credit to everyone who selected the error message as the answer. I'm sorry to everyone for messing this up. However, I still hope that you have learned from this QotD!
Wayne...
Thanks again for the explanation. We have all had those days. I wish there were a way to look at the submitted question after the publication notification goes out.
I had actually remembered reading about this in one of your many good articles posted on this site, and was shocked at how this QotD contradicted itself and some of your previous articles.
I thought maybe Lord Grievous Error had hacked your account and posted this QotD.
December 16, 2010 at 4:43 pm
It's nice that we not only learn but get credit at the same time.
Jamie
December 16, 2010 at 6:53 pm
I saw this after Steve had modified it. I was very confused by the "correct answer" answer, since I thought the option immediately before it was the correct answer. But I chickened out and chose "correct answer".
Tom
December 17, 2010 at 1:16 am
Thanks for the explanation Wayne.
Learned something new and another place to look if results don't meet expectations 😉
-------------------------------------------------------------------------
Normal chaos will be resumed as soon as possible. :crazy:
December 17, 2010 at 2:34 pm
Thanks for the question, though it had been edited by the time I got to it.
December 18, 2010 at 6:20 am
I am slightly confused as to what is meant by a 'simply named' temporary table in the preceding discussion.
Strikes me this is a great reason to use table variables instead of #temp tables in procedures.
Excellent question, Wayne. I don't care about the stuff-up with the code, great work.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
December 18, 2010 at 3:03 pm
SQLkiwi (12/18/2010)
Strikes me this is a great reason to use table variables instead of #temp tables in procedures.
As long as you understand the consequences and limitations, and the optimizer can create decent plans with the row estimate it will use for the table variables. (It will always assume that it contains 1 row if I recall correctly.)
December 18, 2010 at 8:06 pm
UMG Developer (12/18/2010)
As long as you understand the consequences and limitations...
I like to think I have a fairly good grasp, yes.
...and the optimizer can create decent plans with the row estimate it will use for the table variables. (It will always assume that it contains 1 row if I recall correctly.)
It will estimate one row unless WITH (RECOMPILE) is specified, or there is a statement-level recompile for any of the usual reasons; in either case table cardinality will be available.
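A minimal sketch of the behavior described above (table and column names are illustrative; assumes SQL Server 2005 or later):

```sql
-- A table variable: the optimizer assumes 1 row at compile time,
-- because the variable is empty when the batch is compiled.
DECLARE @t TABLE (id int PRIMARY KEY);

INSERT @t (id)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY name)
FROM master.sys.all_columns;

-- Estimated rows = 1 for the scan of @t in this plan...
SELECT COUNT(*) FROM @t;

-- ...but a statement-level recompile makes the actual table
-- cardinality (1000 rows) available to the optimizer.
SELECT COUNT(*) FROM @t OPTION (RECOMPILE);
```

Comparing the estimated row counts in the two actual execution plans shows the difference.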
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
December 20, 2010 at 5:41 am
=====================================
create table #test2 (rowid int)
insert into #test2
select top 10 row_number () over (order by name), name
from master.sys.all_columns
======================================
The above code will fail because the table definition provides only one column, whereas the INSERT supplies two, which is a mismatch.
Can anybody please help me understand where the 'TWEAK' is? I couldn't catch it! 😉
Thanks.
December 20, 2010 at 8:09 am
SQLkiwi (12/18/2010)
I am slightly confused as to what is meant by a 'simply named' temporary table in the preceding discussion. Strikes me this is a great reason to use table variables instead of #temp tables in procedures.
Excellent question, Wayne. I don't care about the stuff-up with the code, great work.
All I meant is that if you named your temp tables with names like #temp you get what you deserve.
Name them with a bit more clarity such as #proc1_temp1 (or with even better names) and you're fine.
And you can still create indexes on them if necessary, as well as get statistics on them. 🙂
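For example (object names here are illustrative, per the naming suggestion above):

```sql
-- A descriptively named temp table, rather than a collision-prone #temp
CREATE TABLE #proc1_temp1 (id int NOT NULL, name sysname NOT NULL);

INSERT #proc1_temp1 (id, name)
SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY name), name
FROM master.sys.all_columns;

-- An extra non-clustered index: possible on a temp table,
-- but not on a table variable
CREATE NONCLUSTERED INDEX ix_name ON #proc1_temp1 (name);

-- Column statistics are created and maintained automatically,
-- so this predicate gets a real cardinality estimate
SELECT id FROM #proc1_temp1 WHERE name = N'name';
```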
Until I got to the point of passing tables as parameters into a stored proc, I had given up on table variables as basically useless. Now I have a single use for them, but I prefer the greater flexibility of temp tables.
December 20, 2010 at 9:11 am
mtassin (12/20/2010)
All I meant is that if you named your temp tables with names like #temp you get what you deserve. Name them with a bit more clarity such as #proc1_temp1 (or with even better names) and you're fine.
I see. When you say 'a bit more clarity', does that mean 'a bit more uniqueness'? Presumably as a way to avoid collisions? That may reduce the risk, if everyone sticks to the plan, but there are no guarantees. With a table variable, the issue does not arise.
And you can still create indexes on them if necessary, as well as get statistics on them.
This is true (although table variables can also have a primary key and unique non-clustered indexes, albeit without statistical information). On the other hand, the ability to create extra indexes and statistics can be a double-edged sword. Statistics may need to be maintained, changes in table cardinality and schema may force unnecessary recompilations, and so on.
Performing DDL after table creation also prevents SQL Server from using some of the caching optimizations for temporary structures present in 2005 onward. Specifically, SQL Server is often able to cache a single data page and an IAM page with the query plan, avoiding the overhead of creation and allocation each time the procedure is executed; DDL after creation defeats this optimization. One cannot perform DDL on a table variable after creation, so again, the issue does not arise.
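A sketch of the contrast (names illustrative; the caching behavior is as described above for SQL Server 2005 onward):

```sql
-- Cache-friendly: all DDL happens at creation time,
-- so the temp table's pages can be cached with the plan
CREATE TABLE #cached_ok
(
    id   int NOT NULL PRIMARY KEY,  -- index declared inline
    name sysname NOT NULL
);

-- Cache-defeating: DDL performed after creation prevents
-- the temporary object from being cached
CREATE TABLE #cached_not (id int NOT NULL, name sysname NOT NULL);
CREATE NONCLUSTERED INDEX ix_name ON #cached_not (name);  -- post-creation DDL
```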
Until I got to the point of passing tables as parameters into a stored proc, I had given up on table variables as basically useless. Now I have a single use for them, but prefer the greater flexibility of temp tables.
Both have their strengths and weaknesses, and the more rounded approach is to use each where it is suited. Table variables have many advantages aside from passing TVPs around:
1. Named constraints are problematic with #temp tables (another name-collision problem). You cannot name a constraint on a table variable, so again the problem is avoided in all cases.
2. Table variables can use user-defined data types and XML schemas defined in the context database. Temporary tables cannot use either, unless they happen to have identical definitions in tempdb, which is inconvenient, and tough to maintain robustly.
3. Temporary tables inherit the collation of tempdb, whereas table variables use the context database. It is not all that uncommon for user databases to differ from tempdb in collation, and I do not enjoy resolving collation conflicts.
4. Data stored in a table variable in the context of a user transaction is not rolled back with the transaction. This can be invaluable, e.g. where we need to log information after a rollback.
5. Table variables are the only available choice in function definitions.
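Point 4 can be demonstrated with a short sketch (object names illustrative):

```sql
DECLARE @log TABLE (msg nvarchar(200));
CREATE TABLE #work (id int);

BEGIN TRANSACTION;
    INSERT #work VALUES (1);
    INSERT @log  VALUES (N'about to roll back');
ROLLBACK TRANSACTION;

-- The temp table insert is undone by the rollback...
SELECT COUNT(*) AS work_rows FROM #work;  -- 0

-- ...but the table variable keeps its row, so the message survives
SELECT msg FROM @log;
```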
So, I am absolutely not saying that table variables should replace temporary tables every time. I personally prefer to start with a table variable design, and look for reasons that would justify changing to use a temporary table.
In practice, I often find the presence of a temporary object (of sufficient size or complexity to make a table variable a poor choice) to be an indicator that I am doing something dumb. Specifically, manipulating large amounts of data, creating indexes, relying on statistics to produce a non-trivial plan...all this is work that is performed again and again, on every execution. Often, it indicates that the present overall database design is lacking.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
December 20, 2010 at 9:28 am
SQLkiwi (12/20/2010)
In practice, I often find the presence of a temporary object (of sufficient size or complexity to make a table variable a poor choice) to be an indicator that I am doing something dumb. Specifically, manipulating large amounts of data, creating indexes, relying on statistics to produce a non-trivial plan...all this is work that is performed again and again, on every execution. Often, it indicates that the present overall database design is lacking.
Granted.
Sometimes, though, we can't make the necessary structural changes because we're saddled with legacy code that lacks an abstracted database access layer (i.e. one with all SQL calls in stored procedures rather than embedded in the application), and temp tables are the only viable workaround.
Other times it's just part of a massive ETL process, and we need working tables for some wonky intermediate steps before the data ends up in the final tables the way we want it.
I personally don't care for table variables, probably because of all the rumors the developers around me were given: things like table variables being memory-only while temp tables always go to disk. That sort of thing turned me off using them as my go-to tool, so I tend to reach for temp tables first and use table variables only for specific cases.
In this case, the end result is very likely "it depends" and/or "to each his own". Provided we're both willing to go to the other tool when necessary, I don't think it's that important. 🙂
December 20, 2010 at 9:32 am
Yes. 🙂
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
December 20, 2010 at 1:55 pm
SQLkiwi (12/20/2010)
This is true (although table variables can also have a primary key and unique non-clustered indexes, albeit without statistical information).
Paul,
I know you know this - this is just to ensure that anyone else reading this is aware.
Just like any other table, a table variable can have one (and only one) clustered index. Paul's wording implies that the unique indexes on a table variable have to be non-clustered; in fact, one of them can be the clustered index on that table.
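For example (names illustrative), both can be declared inline as constraints:

```sql
DECLARE @t TABLE
(
    id   int NOT NULL PRIMARY KEY CLUSTERED,     -- the one clustered index
    code char(10) NOT NULL UNIQUE NONCLUSTERED   -- a unique non-clustered index
);
```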
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes