April 6, 2010 at 8:19 am
Here is the background information on the issue:
We have two tables, the header (primary) table and the detail table of our business. The header table has 95,000,000 rows and the detail table has 360,000,000 rows. We have tried every way we can think of to speed up the join between the tables, including filtering the data down to the minimum required using WHERE clauses.
The data starts to come back after 20 seconds, but the complete query only finishes after 90 seconds.
Some of the workarounds we have tried:
Importing the primary keys of the two tables into two temporary tables after filtering the data on their date ranges, and creating an index on the temporary tables.
Then inner joining the temporary tables back to the actual tables to filter the data (a sketch of this approach follows below).
It still takes about 1 minute 30 seconds to return 90,000 rows of data.
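For reference, here is a minimal sketch of the temp-table approach described above. It assumes SQL Server, and all table, column, and variable names (HeaderTable, DetailTable, HeaderID, TransactionDate, @StartDate, @EndDate) are hypothetical, since the real definitions were not posted:

-- Hypothetical names throughout; the real schema was not posted
DECLARE @StartDate DATETIME, @EndDate DATETIME;
SET @StartDate = '2010-01-01';
SET @EndDate   = '2010-02-01';

-- Narrow the header keys to the date range first
CREATE TABLE #HeaderKeys (HeaderID INT NOT NULL PRIMARY KEY);

INSERT INTO #HeaderKeys (HeaderID)
SELECT HeaderID
FROM   HeaderTable
WHERE  TransactionDate >= @StartDate
  AND  TransactionDate <  @EndDate;

-- Join the narrowed key set back to the base tables
SELECT h.HeaderID, d.DetailID, d.Amount
FROM   #HeaderKeys AS k
JOIN   HeaderTable AS h ON h.HeaderID = k.HeaderID
JOIN   DetailTable AS d ON d.HeaderID = h.HeaderID;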
We have another parallel system that we are comparing this one against; the parallel system has half the data, but the same query runs in 5 seconds.
Can someone suggest an alternative to joining tables with such a huge amount of data? I have also posted the execution plan for the join between the two tables.
April 6, 2010 at 8:32 am
Can you partition the tables?
It's a great way to partition your data into the equivalent of many smaller tables while keeping the advantage of having just one table.
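In case it helps, here is a minimal sketch of range partitioning by date, assuming SQL Server 2005 or later; the names (pfByMonth, psByMonth, HeaderTable, TransactionDate) and the boundary values are examples, not anything from the original post:

-- Partition function splits rows into monthly ranges (boundary values are examples)
CREATE PARTITION FUNCTION pfByMonth (DATETIME)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');

-- Partition scheme maps every partition to the PRIMARY filegroup for simplicity
CREATE PARTITION SCHEME psByMonth
AS PARTITION pfByMonth ALL TO ([PRIMARY]);

-- Hypothetical header table created on the partition scheme
CREATE TABLE HeaderTable (
    HeaderID        INT      NOT NULL,
    TransactionDate DATETIME NOT NULL
) ON psByMonth (TransactionDate);

With something like this in place, a WHERE clause on TransactionDate lets the optimizer touch only the relevant partitions instead of scanning the whole table.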
April 6, 2010 at 12:41 pm
Judging from the picture of the query plan, it seems likely there are many ways to improve the performance, but I can't be more specific without more information from you. Can you post sample table definitions and an explanation of what the join must achieve? Not asking for the actual table definitions or real data here, just something that represents the problem space well.