September 21, 2010 at 9:44 am
Well - in this case, at a rough guess, I'd try getting rid of the temp tables and refactoring the query to use derived tables instead. That way you're using whichever indexes already exist on the source tables rather than sucking the data off into large temp tables (which, from your info so far, I suspect will be large and end up on disc) and then building indexes on top - so you avoid the overhead of building the temp tables in the first place.
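Rough sketch of the shape of the refactor I mean - the table and column names here are made up, just to illustrate:

-- Temp table version: stage the data, index it, then join to it
SELECT OrderID, SUM(LineTotal) AS TotalAmount
INTO #OrderTotals
FROM dbo.OrderDetail
GROUP BY OrderID;

CREATE CLUSTERED INDEX IX_OrderTotals ON #OrderTotals (OrderID);

SELECT o.OrderID, o.OrderDate, t.TotalAmount
FROM dbo.Orders AS o
JOIN #OrderTotals AS t ON t.OrderID = o.OrderID
WHERE o.OrderDate >= '20100101';

-- Derived table version: same result, no temp table or index build,
-- and the optimiser can use the indexes already on dbo.OrderDetail
SELECT o.OrderID, o.OrderDate, t.TotalAmount
FROM dbo.Orders AS o
JOIN (SELECT OrderID, SUM(LineTotal) AS TotalAmount
      FROM dbo.OrderDetail
      GROUP BY OrderID) AS t ON t.OrderID = o.OrderID
WHERE o.OrderDate >= '20100101';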
But then again, while temp tables are heavily overused from what I've seen over the years - sometimes they are the best way of going about things.
Impossible to say given the lack of information. However, building a clustered index and then loading the table can be *very* expensive, especially if you're filling the table with rows that are not in the same order as the clustered index key and you're getting page splits all over the shop during the load.
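To illustrate the load-order point (again, made-up names):

-- Expensive pattern: build the clustered index first, then load rows
-- that arrive in a different order to the index key - page splits galore
CREATE TABLE #Staging (CustomerID int, OrderDate datetime, Amount money);
CREATE CLUSTERED INDEX IX_Staging ON #Staging (CustomerID, OrderDate);

INSERT INTO #Staging (CustomerID, OrderDate, Amount)
SELECT CustomerID, OrderDate, Amount
FROM dbo.Orders;   -- rows not in (CustomerID, OrderDate) order

-- Usually cheaper: load first, then build the index in one pass
SELECT CustomerID, OrderDate, Amount
INTO #Staging2
FROM dbo.Orders;

CREATE CLUSTERED INDEX IX_Staging2 ON #Staging2 (CustomerID, OrderDate);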
There are scores of other issues that could be taken into consideration - but it's all pure guesswork without hard facts to go on.
All together now ... "It depends!"
September 21, 2010 at 10:25 am
Query tuning depends a lot on your database structure and the types of queries you're using. If you want generic advice, check out Books Online and pretty much every book published on the subject. There is a lot to it that is hard to put in a single thread.
If you want specific advice, advice relevant to your query, you need to give us more detail. We can't assist you without knowing more details.
September 22, 2010 at 5:13 pm
Thank you All
I found a really good book about best practices and such
With my sproc I just ran a query for missing indexes, modified one index to add some included columns, and reduced the number of temp tables it uses, and now the performance is much better
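The query I ran was something along these lines against the missing index DMVs (the table and index names below are just placeholders, not my real ones):

SELECT TOP (20)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS rough_benefit
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
ORDER BY rough_benefit DESC;

-- then rebuilt the index with the suggested INCLUDE columns, e.g.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, Amount)
    WITH (DROP_EXISTING = ON);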
September 22, 2010 at 9:37 pm
I routinely see clients put data into a temp table, index that temp table, and then join that temp table to other things WITH NO FILTER and ONLY ONCE - which means the overhead of creating the index was completely wasted. The engine will do a scan/hash join or something similar on that temp table every time anyway, and that will be the most efficient plan in almost every case.
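Typical shape of it (names made up):

-- Stage and index the temp table...
SELECT CustomerID, SUM(Amount) AS Total
INTO #CustTotals
FROM dbo.Orders
GROUP BY CustomerID;

CREATE NONCLUSTERED INDEX IX_CustTotals ON #CustTotals (CustomerID);

-- ...then join it exactly once, with no filter on it.
-- The optimizer will scan and hash join #CustTotals here anyway,
-- so the time spent building IX_CustTotals bought you nothing.
SELECT c.CustomerName, t.Total
FROM dbo.Customers AS c
JOIN #CustTotals AS t ON t.CustomerID = c.CustomerID;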
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
September 23, 2010 at 3:31 am
ricardo_chicas (9/22/2010)
Thank you All
I found a really good book about best practices and such
With my sproc I just ran a query for missing indexes, modified one index to add some included columns, and reduced the number of temp tables it uses, and now the performance is much better
Hi, can you give the author and title of the book? :hehe:
MCITP: Database Administrator 2005
MCTS SQL Server 2008
MCP SQL 2012/2014
MCSA SQL Server 2012/2014
MCSE Data Management and Analytics
September 23, 2010 at 8:55 am
The book is "SQL Tuning" by Dan Tow
September 23, 2010 at 1:39 pm
SELECT INTO runs much faster than CREATE TABLE/INSERT INTO in most cases - especially if the created table has constraints or indexes, which will have to be maintained during the actual insert. I've found dumping the data I want into a temp table with SELECT INTO and then creating indexes is usually much faster. This is most likely some of what you are seeing in the increase.
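Roughly this (illustrative names only):

-- Slower pattern: pre-create the table with its constraint, then insert into it
CREATE TABLE #Work (OrderID int NOT NULL PRIMARY KEY, Amount money);

INSERT INTO #Work (OrderID, Amount)
SELECT OrderID, Amount
FROM dbo.Orders
WHERE OrderDate >= '20100101';

-- Usually faster: SELECT INTO first, build the index afterwards
SELECT OrderID, Amount
INTO #Work2
FROM dbo.Orders
WHERE OrderDate >= '20100101';

CREATE UNIQUE CLUSTERED INDEX IX_Work2 ON #Work2 (OrderID);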
However, I suspect that there might be something else as well. Going from 40 or 50 minutes to 4 hours is a huge increase.
I agree with the other posters that you should find out how long each step is taking - it might very well be one of them that is the culprit.
Another thing to look at is the sheer number of records you are putting into your temp tables in order to return only a fraction of that number. Is there some way to filter down to just the records you actually want during the creation of the temp tables?
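For example (made-up filter columns):

-- Rather than pulling everything into the temp table...
SELECT OrderID, CustomerID, Amount
INTO #AllOrders
FROM dbo.Orders;   -- millions of rows, most of them never used

-- ...apply the same filters the final query uses up front,
-- so only the rows you actually need ever hit tempdb
SELECT OrderID, CustomerID, Amount
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= '20100101'
  AND OrderStatus = 'Shipped';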
Todd Fifield