November 22, 2010 at 10:14 pm
We are using Tableau for reporting. The Tableau process is sending SQL Server a query that selects into a #temp table. There is no way that I know of to keep Tableau from sending this query with the SELECT...INTO. That query takes 32 seconds to run, but the base query without the INTO #temptable only takes 3 seconds... is there anything we could do to tune the query with the SELECT...INTO #temptable?
November 24, 2010 at 7:47 am
Temp tables are created and stored in the tempdb system database, so look for bottlenecks there. Is tempdb on its own drive and not on the same drive as the user database files? (It should be, for best performance.) Is it on a fast drive? Are there a lot of other queries using tempdb at the same time your query is running? You may want to split tempdb into multiple data files to improve performance, preferably with each data file on its own hard drive spindle. Is it sized appropriately, or is it having to autogrow a lot (which takes a lot of time)?
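To illustrate the presizing and multiple-data-file suggestions, here is a hedged T-SQL sketch; the logical file names, sizes, growth increments, and drive letter are purely illustrative assumptions, not recommendations for any particular server:

```sql
-- Illustrative only: presize tempdb so it doesn't rely on autogrow,
-- and add a second data file on its own drive. All sizes, names, and
-- paths here are made-up assumptions -- size for your own workload.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb2.ndf',   -- assumed dedicated drive
          SIZE = 4096MB, FILEGROWTH = 512MB);
-- Repeat ADD FILE per additional spindle as appropriate.
```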
November 24, 2010 at 8:09 am
Thanks for the advice. I had already suggested those things to the client; we are waiting for server upgrades. I was just wondering what they could do in the meantime. We talked to the software developer, found out why it was using temp tables, and we are turning off that function.
November 25, 2010 at 12:41 am
shaun.stuart (11/24/2010)
Temp tables are created and stored in the tempdb system database, so look for bottlenecks there. Is tempdb on its own drive and not on the same drive as the user database files? (It should be, for best performance.) Is it on a fast drive? Are there a lot of other queries using tempdb at the same time your query is running? You may want to split tempdb into multiple data files to improve performance, preferably with each data file on its own hard drive spindle. Is it sized appropriately, or is it having to autogrow a lot (which takes a lot of time)?
I agree with the above except for one small detail. Temp tables are first created and stored in memory, just like a table variable, and spill to disk only when they exceed the memory available to them.
Also, since tempdb is always in the SIMPLE recovery model, much optimization can be achieved not only through your good suggestions, but also by employing the techniques for "minimal logging" that can occur during SELECT/INTO. The requirements for "minimal logging" are available in Books Online.
Other than that, Shaun's recommendations are spot on. I'll also kick in that if you're running something like certain iSCSI cards (which, IIRC, rely on a software solution for throughput), you'll have exactly this kind of problem. Those should be swapped out for a nice hardware solution like some good ol' HBAs with Fibre Channel, assuming you have a SAN, of course.
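For context, the "minimal logging" point can be sketched as follows; the table and column names are made-up assumptions, and whether minimal logging actually occurs depends on the conditions listed in Books Online:

```sql
-- SELECT/INTO creates and loads the temp table in a single statement.
-- Because tempdb always uses the SIMPLE recovery model, this insert
-- can qualify for minimal logging (subject to the Books Online rules).
-- dbo.Orders and its columns are illustrative placeholders.
SELECT CustomerID, OrderTotal
INTO   #Results
FROM   dbo.Orders
WHERE  OrderDate >= '20100101';
```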
--Jeff Moden
Change is inevitable... Change for the better is not.
November 25, 2010 at 2:30 am
Can we get the original query and some sample data?
Are you moving very large BLOB data around? Seen this before...
Personally I am not a fan of SELECT...INTO. You should be creating the #temp table and then doing the appropriate INSERT, although I doubt this will cause that much of a slow-down.
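A minimal sketch of the explicit-create pattern being described, with made-up table and column names:

```sql
-- Create the temp table explicitly first, then load it with a plain
-- INSERT. All object names here are illustrative assumptions.
CREATE TABLE #Results
(
    CustomerID  INT,
    OrderTotal  MONEY
);

INSERT INTO #Results (CustomerID, OrderTotal)
SELECT CustomerID, OrderTotal
FROM   dbo.Orders
WHERE  OrderDate >= '20100101';
```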
November 25, 2010 at 8:17 am
1) the easy answer is to drop a FusionIO card into the server and use that for tempdb.
2) I wonder if a plan guide can be created that will somehow eliminate the temp table. I don't think this is possible, but it is worth investigating.
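For anyone who wants to experiment with point 2, here's a hedged sketch using `sys.sp_create_plan_guide`; the statement text and hint are made-up placeholders, and note that a plan guide can only attach hints to a matching statement -- it cannot rewrite the query to remove the SELECT...INTO:

```sql
-- Hedged sketch: attach a query hint to a fixed statement via a plan
-- guide. @stmt must match the application's statement text exactly;
-- the text and hint below are illustrative assumptions only.
EXEC sys.sp_create_plan_guide
    @name            = N'TableauTempTableGuide',
    @stmt            = N'SELECT CustomerID, OrderTotal INTO #t FROM dbo.Orders',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (MAXDOP 1)';
```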
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 25, 2010 at 8:38 pm
grahamc (11/25/2010)
You [font="Arial Black"]should [/font]be creating the #temp table and then doing the appropriate INSERT
I've gotta say that, with certain exceptions, I totally disagree with that statement. Please explain why you think SELECT/INTO [font="Arial Black"]should [/font]be avoided. And, no... not trying to be confrontational here, Graham... there are a lot of myths and old wives' tales about SELECT/INTO that I try to dispel on a regular basis. 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
December 1, 2010 at 4:21 pm
I have read that using the SELECT column list INTO #tablename syntax to create temp tables holds locks in tempdb (create object locks) for the entire duration of the query.
This can block or be blocked by other simultaneous queries that are trying to create objects in tempdb.
That is why it is better to create the temp table first and then insert into it. That way the locks are only held until the create table finishes.
December 1, 2010 at 4:42 pm
pspeagle (12/1/2010)
I have read that using the SELECT column list INTO #tablename syntax to create temp tables holds locks in tempdb (create object locks) for the entire duration of the query. This can block or be blocked by other simultaneous queries that are trying to create objects in tempdb.
That is why it is better to create the temp table first and then insert into it. That way the locks are only held until the create table finishes.
It WAS true in like SQL 6 or 6.5. Not true ever since (this myth is hard to kill).
December 2, 2010 at 10:22 pm
Ninja's_RGR'us (12/1/2010)
pspeagle (12/1/2010)
I have read that using the SELECT column list INTO #tablename syntax to create temp tables holds locks in tempdb (create object locks) for the entire duration of the query. This can block or be blocked by other simultaneous queries that are trying to create objects in tempdb.
That is why it is better to create the temp table first and then insert into it. That way the locks are only held until the create table finishes.
It WAS true in like SQL 6 or 6.5. Not true ever since (this myth is hard to kill).
Absolutely correct, Remi. And, just so people don't think we're trying to start our own myth, here's the link that proves it...
http://support.microsoft.com/kb/153441/EN-US/
The following snippet I plucked from that MS article is probably the most important of all...
NOTE: This problem does not apply to SQL Server 7.0 and later.
...which isn't entirely true. If you try it across linked servers (at least in 2K5), you can still lock up one of the servers until the SELECT/INTO completes.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 3, 2010 at 5:29 am
Jeff Moden (12/2/2010)
Ninja's_RGR'us (12/1/2010)
pspeagle (12/1/2010)
I have read that using the SELECT column list INTO #tablename syntax to create temp tables holds locks in tempdb (create object locks) for the entire duration of the query. This can block or be blocked by other simultaneous queries that are trying to create objects in tempdb.
That is why it is better to create the temp table first and then insert into it. That way the locks are only held until the create table finishes.
It WAS true in like SQL 6 or 6.5. Not true ever since (this myth is hard to kill).
Absolutely correct, Remi. And, just so people don't think we're trying to start our own myth, here's the link that proves it...
http://support.microsoft.com/kb/153441/EN-US/
The following snippet I plucked from that MS article is probably the most important of all...
NOTE: This problem does not apply to SQL Server 7.0 and later.
...which isn't entirely true. If you try it across linked servers (at least in 2K5), you can still lock up one of the servers until the SELECT/INTO completes.
Or if the linked server is pre sql 7.0 :w00t:
December 6, 2010 at 7:48 am
Thanks for all the good feed back.
Here is an update. According to Tableau support, Tableau uses the #temp table feature when a report uses a 'context' filter or a TOP x feature.
I am not a Tableau expert, so I'm not going to pretend I care much about that detail. The report developer turned off the context filter for the reports that didn't need it, and away went the temp tables and the performance hits. For the reports using TOP x, I suggested they create views on the SQL Server side so that Tableau didn't have to do this.
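A hedged sketch of the view suggestion for the TOP x reports; the view, table, and column names are illustrative assumptions:

```sql
-- Illustrative only: push the TOP N logic into a view so the reporting
-- tool can query it directly instead of building a #temp table.
-- All object names below are made-up placeholders.
CREATE VIEW dbo.vTopCustomers
AS
SELECT TOP (10)
       CustomerID,
       SUM(OrderTotal) AS TotalSales
FROM   dbo.Orders
GROUP BY CustomerID
ORDER BY TotalSales DESC;
```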
December 6, 2010 at 3:50 pm
natarshia.peck (12/6/2010)
i just suggested they create views on the sql server side
Depending on how they build the view and then access it, that could make matters worse... much, much worse.
Use of the temp tables was probably not the real problem here. How the temp tables were being used probably was.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 7, 2010 at 7:42 am
Can you explain further what you meant by how the temp tables were being used?
The application (we have no control over how the source code works) was creating temp tables using SELECT INTO statements,
then the application read from the temp tables to produce the reports. As someone who's always willing to learn more, can you tell me where, even in BOL, the multiple ways to use temp tables are explained?
Also, they have not created any views yet; it was just a suggestion. If they do create the views, I would suggest they optimize them as best as possible. We didn't get into how to fix the problem per se; they just wanted to know what was causing the problems and the quickest way to fix them. That's why I originally asked whether there was anything that could be done with temp tables to improve performance.
Thanks for all the good info.