March 31, 2009 at 8:18 am
Jack Corbett (3/31/2009)
Grant, I'm reading it as 600 million rows (627,966,462) from the Full Text Engine.
I'd also be interested in the execution plan from the Coalesce function query.
You might also get better performance if you used a temporary table to hold the results of the function and join on that.
I know virtually nothing about Full Text, but could you change the function so that you pass the Product ID to the function as well and then use CROSS APPLY? This would limit the results returned by the function and may improve your performance. Again, I've never used Full Text so I don't even know if that would be possible.
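For illustration only, here is a minimal sketch of that CROSS APPLY idea. It assumes a hypothetical variant of the function that also accepts a product ID - the existing function takes no such parameter, so this is just to show the shape:
-- CAT_fctSearchProductsForProduct is hypothetical: it would be a rewritten
-- version of the function that filters on the supplied product ID.
SELECT TOP (10) *
FROM [dbo].[CAT_Product] AS [t0]
CROSS APPLY [dbo].[CAT_fctSearchProductsForProduct](' FORMSOF (INFLECTIONAL, tube) ', [t0].[ProductID]) AS [t2]
WHERE ([t0].[ProductType] = 1) AND ([t0].[National] = 1)
ORDER BY [t0].[ProductType], [t0].[NPC]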
Oops. But hey, what's a couple of zeros among friends?
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 31, 2009 at 8:19 am
Grant Fritchey (3/31/2009)
Yeah, that's exactly what I was thinking. It would prevent the use of the index. That's a concern usually, not a goal. That's why I'm a bit confused.
I assume that query performance tuning programs try every combination of join to see which is the most efficient. I have never seen one of these programs work, but I know of some places that use them. Some seem to produce T-SQL-specific code, i.e. the SPs etc. are full of INNER MERGE JOIN and so on. Others seem to produce generic code, which will also work on Oracle. Here the SPs etc. are full of COALESCE(joinColumn, joinColumn). I have always assumed that COALESCE works by forcing the optimizer to ignore that join and spend the limited time available on other joins.
March 31, 2009 at 11:44 am
Ken McKelvey (3/31/2009)
I assume that query performance tuning programs try every combination of join to see which is the most efficient. I have never seen one of these programs work, but I know of some places that use them. Some seem to produce T-SQL-specific code, i.e. the SPs etc. are full of INNER MERGE JOIN and so on. Others seem to produce generic code, which will also work on Oracle. Here the SPs etc. are full of COALESCE(joinColumn, joinColumn). I have always assumed that COALESCE works by forcing the optimizer to ignore that join and spend the limited time available on other joins.
Interesting. That might be a good idea for a blog post or an article, a little testing along these lines. I'll have to do some research. Maybe this is a well known tip and I'm just ignorant (wouldn't be the first time).
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 31, 2009 at 11:50 am
That is really interesting. The plans themselves are almost identical, but the COALESCE is reducing the amount of data being accessed. Instead of 600 million rows, it's only 8 million. Instead of 33,000, it's 600. Do you have a lot of null values on that field? That might explain it.
The other plan is even more interesting. Without the ORDER BY, just getting any random top 10 rows, it's not dealing with the millions of rows at all.
I think Jack's idea is worth following up. Can you try gathering the data from the text search into a temporary table first?
BTW, if you just run the text search, all by itself, how many rows does it return? 600 million?
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 31, 2009 at 12:20 pm
Running the following
SELECT COUNT(*) FROM [dbo].[CAT_fctSearchProducts](' FORMSOF (INFLECTIONAL, tube) ')
returns a count of 12,443 rows.
None of the columns used in the query are nullable; the only way a null could be introduced is if the CAT_fctSearchProducts function fails to match any rows, in which case there would be no join.
I'm wondering if the query optimiser is seeing that there is an index on the ORDER BY columns and deciding to return all rows from CAT_fctSearchProducts (i.e. the entire full-text index without any filtering) first, and then applying the filtering once it has been joined to the index?
Does that sound feasible?
April 1, 2009 at 9:58 pm
Hi,
When you introduce the ORDER BY, the query optimizer (QO) picks a bad plan. This often happens with TVFs, queries from remote servers, XQuery expressions and so on because the QO does not have information about the number of rows or distribution of values (any statistics in other words) on the output of the function. The default guess is 10,000 rows.
It calculates that a good-enough plan involves reading the output of the TVF into a lazy spool (effectively a temporary table) and loop joining from the product table on the product ID. Clearly wrong, but you have to have some sympathy for the QO with no stats to go on.
From the actual plan (FullQuery.sqlplan) you can see that the plan is a poor solution because of the 676 rewinds on the lazy spool, feeding 8.4M rows into the loop join.
The optimal plan is clearly closer to that for the no ORDER BY case. We need some way to tell the optimizer to either:
1. Start by running the full text query, and use that to loop join to the product table; or
2. Read both inputs just once and join without a spool.
The first option involves forcing the order of the join and specifying loop. You can do this using INNER LOOP JOIN or by specifying OPTION (LOOP JOIN, FORCE ORDER). In either case you must reference the FT table first.
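A rough, untested sketch of the first option, with the FT function referenced first so that its output drives the loop join:
-- Option 1 sketch: the join hint in the FROM clause also forces the join order
SELECT TOP (10) *
FROM [dbo].[CAT_fctSearchProducts](' FORMSOF (INFLECTIONAL, tube) ') AS [t2]
INNER LOOP JOIN [dbo].[CAT_Product] AS [t0] ON [t0].[ProductID] = [t2].[ProductID]
WHERE ([t0].[ProductType] = 1) AND ([t0].[National] = 1)
ORDER BY [t0].[ProductType], [t0].[NPC]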
The second option involves specifying a HASH or MERGE join. HASH should work better here because the FT output is not known to be sorted, even if one access path on Product is. So try:
SELECT TOP (10) *
FROM [dbo].[CAT_Product] AS [t0]
JOIN [dbo].[CAT_fctSearchProducts](' FORMSOF (INFLECTIONAL, tube) ') AS [t2] ON [t0].[ProductID] = [t2].[ProductID]
WHERE ([t0].[ProductType] = 1) AND ([t0].[National] = 1)
ORDER BY [t0].[ProductType], [t0].[NPC]
OPTION (HASH JOIN)
There are many other ways to steer the optimizer in the right direction. Putting the results of the FT query in a temporary table (as Jack Corbett suggested) with an index on product id would also do the trick because the QO would then have good statistics.
This would also re-enable parallelism for larger queries of this sort (if you have multiple cores and MAXDOP > 1) - using a TVF means a parallel plan can never be generated. Whether that is a consideration, I don't know.
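A rough, untested sketch of that temporary table approach (the object names below are just placeholders):
-- Materialize the FT results first so the optimizer has real row counts to work with
SELECT [ProductID]   -- plus any other columns the query needs from the function
INTO   #FTResults
FROM   [dbo].[CAT_fctSearchProducts](' FORMSOF (INFLECTIONAL, tube) ');

CREATE CLUSTERED INDEX IX_FTResults_ProductID ON #FTResults ([ProductID]);

SELECT TOP (10) [t0].*
FROM [dbo].[CAT_Product] AS [t0]
JOIN #FTResults AS [t2] ON [t0].[ProductID] = [t2].[ProductID]
WHERE ([t0].[ProductType] = 1) AND ([t0].[National] = 1)
ORDER BY [t0].[ProductType], [t0].[NPC];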
Given that the FT query will only produce ~12K rows, and the output from the product table is only 677 rows once the seek on Product Type is done, the HASH query should run very smartly indeed.
Play about with the query based on the above, ensuring that the Table Spool goes away and the FT query is scanned once, and first.
If you have any more questions or problems with this, please provide an estimated and actual query plan - thanks for doing that before, it really helped!
Cheers,
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
April 2, 2009 at 11:55 am
Wow, what a fantastic post! That's a really clear explanation of what's happening and it will help me out a lot. I may have a bit of a problem in that I can't easily change the SQL, but I guess there's no way around it.
April 2, 2009 at 1:49 pm
Hi James,
Thanks! 🙂
If the SQL is difficult to change (maybe because it is generated by a third-party application or whatever), you may still be able to produce a better outcome through the use of a template plan guide - see sp_create_plan_guide in Books Online. One possible angle of attack would be to add an OPTION (HASH JOIN) hint to the parameterized form of the query - anyway, you'll get the idea if you check out that BOL article.
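As a very rough, untested sketch of the idea - this one uses a plain SQL plan guide rather than a template guide, the guide name is made up, and @stmt must match the statement text the application actually sends, character for character (the template route would use sp_get_query_template to obtain the parameterized text first):
EXEC sp_create_plan_guide
    @name = N'PG_CAT_Product_FT_HashJoin',   -- hypothetical guide name
    @stmt = N'SELECT TOP (10) *
FROM [dbo].[CAT_Product] AS [t0]
JOIN [dbo].[CAT_fctSearchProducts]('' FORMSOF (INFLECTIONAL, tube) '') AS [t2] ON [t0].[ProductID] = [t2].[ProductID]
WHERE ([t0].[ProductType] = 1) AND ([t0].[National] = 1)
ORDER BY [t0].[ProductType], [t0].[NPC]',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (HASH JOIN)';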
Good luck!
/Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi