February 7, 2012 at 5:24 pm
Hello all
I have a very large table (338,782,585 rows, 552 GB). Usually it is "fast" to retrieve data from it, but I have a special case where I am doing an inner join with a temp table whose only column is the field I use for the clustered index on my big table. That secondary table has almost 7 million records, the select retrieves around 40 fields, and it is taking almost two hours to complete, sometimes longer. Does anyone have any idea how to improve that performance?
My server is SQL Server 2008, 48 GB RAM, 16 CPUs.
Thanks in advance
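For context, the join being described has roughly this shape (all table and column names here are hypothetical, since the actual schema isn't shown in the thread):

```sql
-- Sketch of the described query: ~40 columns from the big table,
-- filtered by an inner join against a 7-million-row temp table
-- whose single column matches the big table's clustered index key.
SELECT big.Col01, big.Col02, /* ... ~40 columns ... */ big.Col40
FROM   dbo.BigTable AS big
INNER JOIN #Temp AS t
       ON t.KeyCol = big.KeyCol;   -- KeyCol = big table's clustered index key
```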
February 7, 2012 at 5:27 pm
ricardo_chicas (2/7/2012)
Hello all. I have a very large table (338,782,585 rows, 552 GB). Usually it is "fast" to retrieve data from it, but I have a special case where I am doing an inner join with a temp table whose only column is the field I use for the clustered index on my big table. That secondary table has almost 7 million records, the select retrieves around 40 fields, and it is taking almost two hours to complete, sometimes longer. Does anyone have any idea how to improve that performance?
My server is SQL Server 2008, 48 GB RAM, 16 CPUs.
Thanks in advance
Try a clustered index on the #Temp.
If that doesn't work, can you attach the .sqlplan so we can take a look at it?
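If the temp table is built ad hoc, the clustered index can be added right after it is populated. A minimal sketch, assuming the join column is called FB_ID (the actual column name isn't confirmed until later in the thread):

```sql
-- Hypothetical temp table holding the ~7 million key values
CREATE TABLE #Temp (FB_ID BIGINT NOT NULL);

-- ... populate #Temp here ...

-- Cluster on the join key so the optimizer can seek/merge against
-- the big table's clustered index instead of scanning
CREATE CLUSTERED INDEX CIX_Temp_FB_ID ON #Temp (FB_ID);
```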
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
February 8, 2012 at 8:49 am
Hi
The temp table already has a clustered index; the execution plan is attached. The result table has 5,661,714 rows and is 1.5 GB.
Any advice will be really appreciated.
Thanks
February 8, 2012 at 9:16 am
Honestly, your select doesn't appear to be the big issue here; rather, it's the insert being performed into dbo.result_table, where it's updating the clustered and nonclustered indexes. (Although I'd recommend changing the clustered index on your temp table to (FB_ID, AuditFBId) to eliminate the scan.)
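That index change might be sketched as follows (the existing index name is an assumption; the column names are the ones mentioned above):

```sql
-- Replace the temp table's single-column clustered index with a
-- composite key covering both join columns, so the join can seek
-- instead of scanning.
DROP INDEX CIX_Temp ON #Temp;   -- hypothetical name of the existing index
CREATE CLUSTERED INDEX CIX_Temp
    ON #Temp (FB_ID, AuditFBId);
```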
Compare times by performing the insert into a temp table with no indexes. I'd be curious to see whether you get a dramatic improvement in execution time.
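A timing test along those lines might look like this (table and column names are hypothetical):

```sql
-- Measure the same insert against an unindexed heap, removing the
-- cost of maintaining result_table's clustered and nonclustered
-- indexes during the load.
SET STATISTICS TIME ON;

SELECT big.Col01, big.Col02, /* ... ~40 columns ... */ big.Col40
INTO   #ResultHeap              -- SELECT INTO creates a heap: no index maintenance
FROM   dbo.BigTable AS big
INNER JOIN #Temp AS t
       ON t.FB_ID = big.FB_ID;

SET STATISTICS TIME OFF;
```

If the heap load is dramatically faster, the bottleneck is index maintenance on the target table rather than the join itself.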
February 8, 2012 at 9:22 am
I already did that; the execution time was the same. The query is too simple. I think the problem is the size of the table and the number of fields I need to retrieve. I will experiment a bit with filtered indexes.
February 9, 2012 at 12:10 pm
ricardo_chicas (2/8/2012)
I already did that; the execution time was the same. The query is too simple. I think the problem is the size of the table and the number of fields I need to retrieve. I will experiment a bit with filtered indexes.
Sorry, fell off the planet on this for a bit. You're estimating (that's not the actual plan, just the estimate) a 2.1-million-row insert into the clustered index. This is most likely getting hammered on drive I/O, since it's estimating a 1.5 GB insert. Add to that, it really wants an exclusive table lock for something of that volume; I'm wondering how much blocking you're dealing with here as well. Your temp table is estimated at 6.6 million rows.
Cleaning up your select isn't necessarily the concern, and you can prove that to yourself by simply doing a SELECT COUNT(*) against everything from the FROM clause down.
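That check might look like this (names hypothetical):

```sql
-- If this count returns quickly, the join itself is cheap and the
-- real cost is materializing and inserting the wide 40-column rows.
SELECT COUNT(*)
FROM   dbo.BigTable AS big
INNER JOIN #Temp AS t
       ON t.FB_ID = big.FB_ID;
```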
I'd have to get on the system, discuss business logic, and see what could be done to improve the process, not the query.