June 17, 2011 at 7:40 am
You don't need this predicate - b.UserString6 is not null - it's covered by the other one.
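For example (a minimal sketch; the exact comparison on UserString6 is assumed, since the full query isn't quoted here):

-- NULL = 'anything' evaluates to UNKNOWN, never TRUE, so the
-- equality test already excludes NULLs on its own.
SELECT b.SomeColumn                    -- assumed column name
FROM   dbo.TableB AS b                 -- assumed table name
WHERE  b.UserString6 = 'SomeValue'     -- already excludes NULLs
  AND  b.UserString6 IS NOT NULL;      -- redundant: covered by the line above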
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 17, 2011 at 7:44 am
GilaMonster (6/17/2011)
You don't need this predicate - b.UserString6 is not null - it's covered by the other one.
I had run the query without this predicate too, but there was no change in performance.
June 17, 2011 at 8:28 am
Just looked at the last execution plan. Even though your index seeks limit the number of rows from your two tables, you are still joining almost 400,000 rows to a million rows.
What are the schemas of the two indexes? Could we see some I/O stats, please?
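For reference, a standard way to capture those stats (these are built-in session settings, nothing specific to this query):

-- Report logical/physical reads and CPU/elapsed times
-- on the Messages tab for every statement that follows.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the problem query here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;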
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
June 17, 2011 at 8:36 am
The Dixie Flatline (6/17/2011)
Just looked at the last execution plan. Even though your index seeks limit the number of rows from your two tables, you are still joining almost 400,000 rows to a million rows.
With that kind of row count, hash match is just about the best join type there is.
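If you want to verify that for yourself, you can force each join type in turn with a query hint and compare the plans and I/O (a sketch with assumed table and column names; hints override the optimizer, so use them only for testing):

-- Compare HASH JOIN against MERGE JOIN and LOOP JOIN on the same query.
SELECT a.KeyCol, b.KeyCol
FROM   dbo.TableA AS a                 -- assumed names
JOIN   dbo.TableB AS b ON b.KeyCol = a.KeyCol
OPTION (HASH JOIN);                    -- swap in MERGE JOIN or LOOP JOIN to compare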
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 17, 2011 at 8:49 am
No argument, Gail. They don't have the index structure in place for a merge join.
I'm just saying that even a hash match can only run so fast.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
June 17, 2011 at 12:05 pm
GilaMonster (6/17/2011)
The Dixie Flatline (6/17/2011)
Just looked at the last execution plan. Even though your index seeks limit the number of rows from your two tables, you are still joining almost 400,000 rows to a million rows.
With that kind of row count, hash match is just about the best join type there is.
So do you see anything else that can be done here? The seek also takes time because it has to retrieve many rows. Is there any other option you can suggest from your experience?
June 17, 2011 at 12:49 pm
Did you try adding indexes on the predicate columns?
June 17, 2011 at 10:56 pm
Michael Valentine Jones (6/17/2011)
Did you try adding indexes on the predicate columns?
Both tables show a seek operation on their indexes. Do you think anything is missing? I thought a seek was the most efficient operation. Please clarify if you are implying something I did not understand.
June 18, 2011 at 11:16 am
chandan_jha18 (6/17/2011)
Michael Valentine Jones (6/17/2011)
Did you try adding indexes on the predicate columns?
Both tables show a seek operation on their indexes. Do you think anything is missing? I thought a seek was the most efficient operation. Please clarify if you are implying something I did not understand.
Since the DDL that you posted did not include any index definitions, I was asking if you had put indexes on the predicate columns.
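Something along these lines, for example (a sketch only; the table, column, and index names are placeholders, since the real DDL was not posted):

-- Hypothetical index on the filter and join columns.
-- Key order matters: equality predicates first, then range predicates;
-- INCLUDE covers SELECT-list columns without widening the key.
CREATE NONCLUSTERED INDEX IX_TableB_UserString6
    ON dbo.TableB (UserString6, JoinKeyCol)   -- assumed column names
    INCLUDE (SomeSelectedCol);                -- assumed covering column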
June 19, 2011 at 8:53 pm
The seek also takes time because it has to retrieve many rows.
You've just stated the heart of the problem.
A seek can just mean that the query plan is skipping to some point in the table and then scanning from there. For example, if you had a clustered index on a table of 10 million names, and 10% of the names were null, you could ask only for those names which were not null. The query could use the index to skip the million rows which were null, but it would still have to read 9 million rows! In other words, the seek would only save about 10% of the I/O that a table scan would have required. (If you asked for only those names that started with "Q", the query might be able to skip to the Q's and stop at the first "R", probably saving 99% of the work.)
There is nothing magical about index seeks. In your case, while they save some work, the query still has to retrieve 400,000 rows and join them to a million rows.
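To make that concrete, a minimal sketch (the table and column names are invented for illustration, assuming a clustered index on Names(LastName)):

-- This "seek" merely positions past the leading NULLs,
-- then scans roughly 90% of the table:
SELECT LastName
FROM   dbo.Names
WHERE  LastName IS NOT NULL;

-- This seek lands on the first 'Q' and stops at the first 'R',
-- touching only a tiny range of the index:
SELECT LastName
FROM   dbo.Names
WHERE  LastName LIKE 'Q%';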
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
June 20, 2011 at 12:59 am
Thanks to everyone who spent their valuable time. I think that, due to the join between so many rows, we cannot make this work any faster. The WHERE condition is not selective enough to speed the query up, and since the business demands that so many rows be fetched together, we cannot add any other condition to filter them further.
Thanks everyone. On a side note, if someone still wants to try making some changes to the index, I am including index definitions for the two tables specified.
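For anyone who wants to pull those definitions themselves, the standard system procedure works (the table name below is a placeholder):

-- List the indexes and their key columns for a given table.
EXEC sp_helpindex 'dbo.TableB';   -- assumed table name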
June 20, 2011 at 8:17 am
Where did you include the index schema?
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
June 20, 2011 at 12:57 pm
The Dixie Flatline (6/20/2011)
Where did you include the index schema?
Sorry, sir. I think I forgot to attach the index schema.
I will surely attach it as soon as I reach the office in the morning. It's 1 a.m. here, and somehow my VPN is not connecting from home.
June 20, 2011 at 12:59 pm
No rush, but thank you for replying.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
June 21, 2011 at 12:10 am
The Dixie Flatline (6/20/2011)
Where did you include the index schema?
I am attaching the index schema for both the tables. Please let me know your opinion.