June 24, 2011 at 8:53 am
Back with SQL Server 2000, I would frequently leverage temporary tables and table variables to hold a small intermediate result set that would then get joined with another query. However, since 2005 I've used common table expressions or indexed views to achieve the same purpose. I wouldn't use temporary tables in any procedure that gets called frequently.
There are still occasions where I dump large result sets into temporary tables, but it's for reporting-type procedures that run for several minutes and are called perhaps once or a few times per month. Not exactly something I care about persisting in the exec plan cache or optimizing down to the millisecond level.
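To illustrate, here's a minimal sketch of what I mean by using a CTE in place of a small intermediate temp table; the table and column names are made up:
-- Instead of dumping an intermediate result into a temp table and joining to it,
-- define it inline as a CTE (dbo.Orders and dbo.Customers are hypothetical).
WITH RecentOrders AS
(
    SELECT CustomerID, MAX(OrderDate) AS LastOrderDate
      FROM dbo.Orders
     GROUP BY CustomerID
)
SELECT c.CustomerID, c.CustomerName, r.LastOrderDate
  FROM dbo.Customers c
  JOIN RecentOrders r
    ON r.CustomerID = c.CustomerID;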
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
June 24, 2011 at 9:01 am
sqldba_icon (6/24/2011)
Eric, I'm not sure what you mean by that question, but that is not under my control. There could be a way to reduce the number of records, but there are bigger problems, like the one mentioned in this topic. Thanks.
What I meant was: why dump the resultset into a temporary table in the first place?
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
June 24, 2011 at 9:24 am
Jeff Moden (6/24/2011)
Craig Farrell (6/23/2011)
This conversation re-sparked a curiosity on my part.
When I run that code as a single unit, it keeps returning the following error...
There is already an object named '#PreBuild' in the database.
Not sure if this thread could have gotten any more derailed... maybe you should try using the Query Analyzer :-P
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
June 24, 2011 at 10:22 am
THE-FHA (6/24/2011)
Ever thought of the Query Analyzer? The first option is best.
What on Earth are you talking about? First, Query Analyzer is for SQL Server 2000 and, second, I meant running all of the code at once in SSMS.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2011 at 10:23 am
opc.three (6/24/2011)
Jeff Moden (6/24/2011)
Craig Farrell (6/23/2011)
This conversation re-sparked a curiosity on my part.
When I run that code as a single unit, it keeps returning the following error...
There is already an object named '#PreBuild' in the database.
Not sure if this thread could have gotten any more derailed... maybe you should try using the Query Analyzer :-P
Heh... yeah... I'm sure that's the fix. :-P
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2011 at 10:25 am
THE-FHA (6/24/2011)
SQL 2008 comes with query execution plans and estimates. Try testing these queries and see the estimated results.
Nah... the execution plan frequently lies about anything having to do with performance.
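If you want numbers you can trust, measure the actual cost instead. A minimal sketch (the query itself is only a placeholder; substitute whatever you're testing):
-- Capture actual I/O and CPU timings rather than relying on the estimated plan.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.SomeLargeTable;   -- hypothetical table; use the real query under test

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;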
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2011 at 10:28 am
I wasn't going to post until I got home from work to give some demos (I don't have time while at work), but one of the things to consider (that I've not seen any mention of on this thread) when you add indexes to large Temp Tables is "Minimal Logging". :-)
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2011 at 11:01 am
Jeff Moden (6/24/2011)
I wasn't going to post until I got home from work to give some demos (I don't have time while at work), but one of the things to consider (that I've not seen any mention of on this thread) when you add indexes to large Temp Tables is "Minimal Logging". :-)
In addition to comparing CREATE/INSERT/SELECT versus SELECT/INTO, table variables may result in fewer recompiles and less transaction logging, so it may be worth comparing a temp table versus a table variable.
It's also possible to create primary keys and indexes on a table variable.
declare @t table
(OrderID int primary key,
RequiredDate datetime not null,
ShippedDate datetime null,
unique clustered (RequiredDate, OrderID));
A CTE may work better than using either a temp table or a table variable, but I don't know how the tables are being used or why.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
June 24, 2011 at 11:18 am
Jeff Moden (6/24/2011)
I wasn't going to post until I got home from work to give some demos (I don't have time while at work), but one of the things to consider (that I've not seen any mention of on this thread) when you add indexes to large Temp Tables is "Minimal Logging". :-)
Nice. I am hearing that with the SELECT...INTO method you get minimal logging on the initial table build... plus, when issuing the ancillary CREATE INDEX statements, you also get minimal logging.
The Connect item I posted shows how to get that same effect as of SQL 2008 using INSERT...SELECT with a TABLOCK hint on the target table. We know tempdb is in simple recovery mode, but I have not tried this technique on a true # table. See the section "Using INSERT INTO...SELECT to Bulk Import Data with Minimal Logging" in the INSERT (Transact-SQL) article in BOL. I will give some of this a try later.
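Here's a rough sketch of the two approaches I'd compare; the source is an AdventureWorks-style Sales.SalesOrderHeader and the column lists are only illustrative:
-- 1) SELECT ... INTO: the build of the temp table can be minimally logged.
SELECT SalesOrderID, OrderDate, TotalDue
  INTO #SalesStage
  FROM Sales.SalesOrderHeader;

-- 2) INSERT ... SELECT with a TABLOCK hint on an empty target (SQL 2008+):
--    can also qualify for minimal logging per the BOL section referenced above.
CREATE TABLE #SalesStage2
(
    SalesOrderID INT      NOT NULL,
    OrderDate    DATETIME NOT NULL,
    TotalDue     MONEY    NOT NULL
);

INSERT INTO #SalesStage2 WITH (TABLOCK)
       (SalesOrderID, OrderDate, TotalDue)
SELECT SalesOrderID, OrderDate, TotalDue
  FROM Sales.SalesOrderHeader;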
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
June 24, 2011 at 11:22 am
Eric M Russell (6/24/2011)
Jeff Moden (6/24/2011)
I wasn't going to post until I got home from work to give some demos (I don't have time while at work), but one of the things to consider (that I've not seen any mention of on this thread) when you add indexes to large Temp Tables is "Minimal Logging". :-)
In addition to comparing CREATE/INSERT/SELECT versus SELECT/INTO, table variables may result in fewer recompiles and less transaction logging, so it may be worth comparing a temp table versus a table variable.
It's also possible to create primary keys and indexes on a table variable.
declare @t table
(OrderID int primary key,
RequiredDate datetime not null,
ShippedDate datetime null,
unique clustered (RequiredDate, OrderID));
A CTE may work better than using either a temp table or a table variable, but I don't know how the tables are being used or why.
Be careful with table variables. They always optimize with an assumed rowcount of 1, so they can produce very bad execution plans... even with small actual rowcounts.
I avoid them in all cases except for true temporary scratch tables, like for logging info during a proc execution (i.e. to only write info, no joins to other tables). Then I issue an INSERT...SELECT to store the log results to a concrete log table outside the transaction, or a SELECT * to display them. The fact that table variables do not participate in transactions comes in extra handy for these uses. They are not in any way a replacement for # temp tables IMHO.
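A quick sketch of that logging pattern (the concrete table dbo.ProcLog is a made-up name):
-- Table variables keep their rows across a ROLLBACK, which is what makes them
-- handy for capturing diagnostic info inside a proc.
DECLARE @ProcLog TABLE
(
    LogTime DATETIME     NOT NULL DEFAULT GETDATE(),
    Msg     VARCHAR(200) NOT NULL
);

BEGIN TRANSACTION;
    INSERT INTO @ProcLog (Msg) VALUES ('step 1 started');
    -- ... the real work would go here ...
ROLLBACK TRANSACTION;               -- the work is undone...

SELECT LogTime, Msg FROM @ProcLog;  -- ...but the log rows are still there

-- Or persist them to a concrete log table outside the transaction:
-- INSERT INTO dbo.ProcLog (LogTime, Msg) SELECT LogTime, Msg FROM @ProcLog;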
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
June 24, 2011 at 12:13 pm
TheSQLGuru (6/23/2011)
tfifield (6/23/2011)
. . . I always create the index after the SELECT INTO as Kevin suggested, since the create time is better. If the temp table is large, then I use the 100% fill factor and it seems to help performance in most cases. . . .
Can I inquire what percentage of the time that you use a temp table you actually create one or more indexes on it? And do you routinely examine the query plans and performance implications to see if said index(es) actually improve performance? Again, I have seen VERY few real-world cases where they were appropriate.
Kevin,
I only bother with it about 10% of the time, mostly on temp tables with more than just a few hundred records. I always benchmark both with clean data buffers. Out of that 10% that I bother with, I'd guess that 50% of the time I come up with something that is significantly faster - usually a MERGE join instead of a HASH join in the plan.
Most of the time the overhead in creating an index is more than I get back in performance in the rest of the procedure, which is why I only bother with it 10% of the time.
I always create a clustered index on the temp table if I'm doing a quirky-update-style running total.
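For anyone who hasn't seen it, a bare-bones sketch of that setup with made-up data; the clustered index on the temp table is what defines the order the UPDATE walks the rows:
IF OBJECT_ID('tempdb..#Trans') IS NOT NULL DROP TABLE #Trans;

CREATE TABLE #Trans
(
    TransID      INT   NOT NULL PRIMARY KEY CLUSTERED,  -- the clustered index drives the update order
    Amount       MONEY NOT NULL,
    RunningTotal MONEY NULL
);

INSERT INTO #Trans (TransID, Amount)
VALUES (1, 10.00), (2, 25.00), (3, 5.00);

DECLARE @RunningTotal MONEY;
SET @RunningTotal = 0;

UPDATE t
   SET @RunningTotal = RunningTotal = @RunningTotal + Amount
  FROM #Trans t WITH (TABLOCKX)
OPTION (MAXDOP 1);                  -- keep it single-threaded

SELECT TransID, Amount, RunningTotal
  FROM #Trans
 ORDER BY TransID;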
Todd Fifield
June 24, 2011 at 12:21 pm
Craig Farrell (6/23/2011)
This conversation re-sparked a curiosity on my part. The following was tested in a 2k8 R1 environment off my local drives. End result, and if you read the notes you'll see why: no indexing, SELECT INTO; if indexing, fully pre-build it. Test yourself, of course, but that's now my rule of thumb.
Craig,
Excellent benchmark. It gives me some food for thought on how I do things.
Todd Fifield
June 24, 2011 at 12:23 pm
Jeff Moden (6/24/2011)
Craig Farrell (6/23/2011)
This conversation re-sparked a curiosity on my part.
When I run that code as a single unit, it keeps returning the following error...
There is already an object named '#PreBuild' in the database.
Yeah, that's the compiler whining because of multiple CREATE TABLE statements for the same temp table (SELECT INTO will end up with the same problem eventually). It wasn't meant as an end-to-end script but to be run with highlight-and-F5 for each section between the large comment areas.
If I get a chance today, I'll see if I can make it runnable end to end.
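In the meantime, here's a hedged sketch of the usual workaround (the #PreBuild column list below is invented, since the original script isn't in front of me): put GO between the sections so each CREATE TABLE compiles in its own batch.
-- Two CREATE TABLE statements for the same # table in one batch fail at compile
-- time, even if only one of them could ever execute, so split the sections
-- into separate batches.
IF OBJECT_ID('tempdb..#PreBuild') IS NOT NULL DROP TABLE #PreBuild;
CREATE TABLE #PreBuild (SomeID INT NOT NULL);   -- hypothetical column list
-- ... first test section ...
GO   -- end the batch so the next CREATE TABLE compiles on its own

IF OBJECT_ID('tempdb..#PreBuild') IS NOT NULL DROP TABLE #PreBuild;
CREATE TABLE #PreBuild (SomeID INT NOT NULL);
-- ... second test section ...
GO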
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
June 24, 2011 at 3:14 pm
Craig Farrell (6/24/2011)
Jeff Moden (6/24/2011)
Craig Farrell (6/23/2011)
This conversation re-sparked a curiosity on my part.
When I run that code as a single unit, it keeps returning the following error...
There is already an object named '#PreBuild' in the database.
Yeah, that's the compiler whining because of multiple CREATE TABLE statements for the same temp table (SELECT INTO will end up with the same problem eventually). It wasn't meant as an end-to-end script but to be run with highlight-and-F5 for each section between the large comment areas.
If I get a chance today, I'll see if I can make it runnable end to end.
Nah... not to worry now that I know. Thanks anyway, Craig.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2011 at 3:17 pm
opc.three (6/24/2011)
Be careful with table variables. They always optimize with an assumed rowcount of 1, so they can produce very bad execution plans... even with small actual rowcounts.
A statement-level OPTION (RECOMPILE) will fix that a lot of the time, but I still avoid Table Variables because there's still zero chance of statistics being created, and they don't persist in SSMS like Temp Tables do, which makes troubleshooting a whole lot easier.
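For reference, a minimal sketch of the statement-level recompile; the AdventureWorks-style tables are just for illustration:
DECLARE @Ids TABLE (SalesOrderID INT PRIMARY KEY);

INSERT INTO @Ids (SalesOrderID)
SELECT SalesOrderID
  FROM Sales.SalesOrderHeader
 WHERE OrderDate >= '20080101';

SELECT d.SalesOrderID, d.ProductID, d.LineTotal
  FROM Sales.SalesOrderDetail d
  JOIN @Ids i
    ON i.SalesOrderID = d.SalesOrderID
OPTION (RECOMPILE);   -- lets the optimizer see the actual row count in @Ids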
--Jeff Moden
Change is inevitable... Change for the better is not.