May 25, 2012 at 2:27 pm
So it isn't necessarily the same algorithm used by the Query Optimizer, correct?
I told the optimizer algorithm to calculate the cost based on I/O and CPU cost.
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join(special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:27 pm
Gullimeel (5/25/2012)
In the attached screenshot you will see that the I/O cost estimate is 185.2 and the CPU cost estimate is around 4.8. The total subtree cost should then have been around 190.15, but it shows 0.38708, which is about 1/500th of the total estimated CPU and I/O cost. Why is this happening? It is happening for other operators as well.
The costs are modified by the row goal associated with the TOP operator in the plan.
http://sqlblog.com/blogs/paul_white/archive/2010/08/18/inside-the-optimiser-row-goals-in-depth.aspx
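A minimal sketch of the effect (table and column names are hypothetical):

```sql
-- Hypothetical example: TOP (50) sets a "row goal" of 50 rows.
-- The optimizer scales down the estimated I/O and CPU costs of the
-- operators beneath the TOP to reflect that it expects to stop early,
-- so the displayed subtree cost can be a small fraction of the
-- full-scan estimate (I/O + CPU) you would otherwise expect.
SELECT TOP (50) OrderID, OrderDate
FROM dbo.Orders              -- hypothetical table
ORDER BY OrderDate DESC;
```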
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 2:28 pm
Gullimeel (5/25/2012)
So it isn't necessarily the same algorithm used by the Query Optimizer, correct?
I told the optimizer algorithm to calculate the cost based on I/O and CPU cost.
How?
May 25, 2012 at 2:32 pm
Lynn Pettis (5/25/2012)
Gullimeel (5/25/2012)
So it isn't necessarily the same algorithm used by the Query Optimizer, correct?
I told the optimizer algorithm to calculate the cost based on I/O and CPU cost.
How?
Language issues, Lynn. Gullimeel meant to say, "I already described the algorithm to calculate cost" i.e. that Estimated Operator Cost in a query plan is sometimes the sum of Estimated I/O Cost and Estimated CPU Cost. It's not quite like that, but let's not get sidetracked.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 2:34 pm
It is simple arithmetic that I/O cost + CPU cost is the total operator cost. How the optimizer calculates the I/O cost and CPU cost is explained very well in Joe Chang's article.
Why did the optimizer not sum them?
The costs are modified by the row goal associated with the TOP operator in the plan.
http://sqlblog.com/blogs/paul_white/archive/2010/08/18/inside-the-optimiser-row-goals-in-depth.aspx
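To make the arithmetic concrete, using the numbers from the screenshot (the reduction factor is inferred from the displayed costs, not an official formula):

```sql
-- Plain sum of the operator's estimated costs:
--   185.2 (I/O) + 4.8 (CPU) = 190.0  (close to the 190.15 quoted earlier)
-- Displayed subtree cost after the TOP row goal is applied: 0.38708
SELECT 185.2 + 4.8               AS estimated_full_cost,      -- 190.0
       (185.2 + 4.8) / 0.38708  AS approx_reduction_factor;  -- about 491, i.e. roughly 1/500th
```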
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join(special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:35 pm
RBarryYoung (5/25/2012)
I *really* wish that you would write a book with all of this stuff in it. Every time you post one of these I am just dumbstruck with how much you can coherently read from a query plan, and how much more you can infer from it.:Wow:
Yes I have long meant to write a series of blog posts on advanced execution plan interpretation (which I could then collect into a PDF or something). Problem is, as shown in this thread, most people don't know the basics well enough to take a blind bit of notice of what I'm saying 😉
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 2:37 pm
isuckatsql (5/25/2012)
I removed the reference to "AllRecentWages" and got 850k records in 10 seconds, so that was certainly a big part of the issue! Now I need to figure out how to add the data back in, without causing the same issue!
If you're not going to fix the underlying problems (poor indexing, over-complex query), just add OPTION (MAXDOP 1) to the query and move on to something else. :doze:
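A hedged sketch of that workaround (the view name is from the thread; the columns and predicate are hypothetical):

```sql
-- Force a serial plan for this one query with a hint, rather than
-- changing the server-wide max degree of parallelism setting.
SELECT w.EmployeeID, w.WageAmount
FROM dbo.AllRecentWages AS w    -- view name taken from the thread
WHERE w.WageDate >= '20120101'  -- hypothetical predicate
OPTION (MAXDOP 1);              -- limit this query to a single scheduler
```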
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 2:44 pm
Gullimeel (5/25/2012)
It is simple arithmetic that I/O cost + CPU cost is the total operator cost. How the optimizer calculates the I/O cost and CPU cost is explained very well in Joe Chang's article. Why did the optimizer not sum them?
The costs are modified by the row goal associated with the TOP operator in the plan.
http://sqlblog.com/blogs/paul_white/archive/2010/08/18/inside-the-optimiser-row-goals-in-depth.aspx
Read the link, I explain why there. Rhetorical: This not getting sidetracked plan isn't really working out, is it?
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 2:53 pm
Gullimeel (5/25/2012)
It is simple arithmetic that I/O cost + CPU cost is the total operator cost. How the optimizer calculates the I/O cost and CPU cost is explained very well in Joe Chang's article. Why did the optimizer not sum them?
The costs are modified by the row goal associated with the TOP operator in the plan.
http://sqlblog.com/blogs/paul_white/archive/2010/08/18/inside-the-optimiser-row-goals-in-depth.aspx
Read the link, I explain why there. Rhetorical: This not getting sidetracked plan isn't really working out, is it?
I saw your reply later :-). Regarding the link, I was supposed to read that blog next week, but now I have to read it sooner than that.
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join(special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:55 pm
SQL Kiwi (5/25/2012)
Gullimeel (5/25/2012)
It is simple arithmetic that I/O cost + CPU cost is the total operator cost. How the optimizer calculates the I/O cost and CPU cost is explained very well in Joe Chang's article. Why did the optimizer not sum them?
The costs are modified by the row goal associated with the TOP operator in the plan.
http://sqlblog.com/blogs/paul_white/archive/2010/08/18/inside-the-optimiser-row-goals-in-depth.aspx
Read the link, I explain why there. Rhetorical: This not getting sidetracked plan isn't really working out, is it?
Answering the rhetorical question, when has a thread ever stayed strictly on topic?
May 25, 2012 at 3:39 pm
Lynn Pettis (5/25/2012)
Answering the rhetorical question, when has a thread ever stayed strictly on topic?
Not as often as it should? At least until the primary question has been answered.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 3:45 pm
SQL Kiwi (5/25/2012)
Lynn Pettis (5/25/2012)
Answering the rhetorical question, when has a thread ever stayed strictly on topic?
Not as often as it should? At least until the primary question has been answered.
Some replies spark additional questions, especially when they help deeper understanding. I wish I could work with you for a year or so; there are things I would love to pick your brain about.
Don't always have time to read your blog, unfortunately.
May 25, 2012 at 4:07 pm
Lynn Pettis (5/25/2012)
Some replies spark additional questions, especially when they help deeper understanding.
Yes, I appreciate that. My rhetorical aside was really just to myself; though it does get frustrating when others (not you) start posting tangential assertions based on incomplete knowledge that we have to stop and address.
I wish I could work with you for a year or so, there are things I would love to pick your brain about.
:blush: Well thanks (though I am just another narrow specialist i.e. there are huge areas of SQL Server I have no clue about).
Don't always have time to read your blog, unfortunately.
That's ok, I don't earn anything from it 🙂
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 4:14 pm
SQL Kiwi (5/25/2012)
isuckatsql (5/25/2012)
I removed the reference to "AllRecentWages" and got 850k records in 10 seconds, so that was certainly a big part of the issue! Now I need to figure out how to add the data back in, without causing the same issue!
If you're not going to fix the underlying problems (poor indexing, over-complex query), just add OPTION (MAXDOP 1) to the query and move on to something else. :doze:
I fully intend to fix the core problem. "Do it right or don't bother"!
I just need to do some research on what my best solution will be.
I "played" with Columnstore Indexing, but it really did not help in this situation 🙁
May 26, 2012 at 12:38 pm
RBarryYoung (5/25/2012)
SQL Kiwi (5/25/2012)
isuckatsql (5/25/2012)
Actual Execution plan does not suggest anything to improve the query 🙁
Several parallelism operators in the parallel plan have a warning symbol to show you that exchange buffers were spilled to tempdb, probably to resolve an intra-query parallelism deadlock. The plan also shows that your query used 260 worker threads at DOP 64, despite only processing a relatively small number of rows. There is also a warning for a conversion that will affect cardinality estimation (and therefore, plan choice). Estimated row counts are very different from actual row counts in a number of places. Example shown below:
You can improve this query by applying normal tuning techniques. Provide useful indexes (you have many scans) and ensure row count estimates are ballpark-correct compared with actual counts. You might find it easiest to split this complex query into smaller parts, using one or more temporary tables to hold small intermediate result sets. This will give the query optimizer better information and reduce the chance of incorrect estimates resulting in a poor plan choice. It will also make the query more maintainable. You should also consider significantly reducing the degree of parallelism for this query (if not for the server as a whole).
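The temp-table approach above can be sketched like this (a hedged illustration only; table and column names, apart from AllRecentWages from the thread, are hypothetical):

```sql
-- Step 1: materialise a small intermediate result set. The temporary
-- table gets its own statistics, so the optimizer has accurate
-- cardinality information for the next step.
SELECT e.EmployeeID, e.DepartmentID
INTO #RecentEmployees              -- hypothetical intermediate set
FROM dbo.Employees AS e            -- hypothetical table
WHERE e.HireDate >= '20100101';

-- Step 2: join the small materialised set to the larger source.
SELECT r.EmployeeID, w.WageAmount
FROM #RecentEmployees AS r
JOIN dbo.AllRecentWages AS w       -- view name taken from the thread
    ON w.EmployeeID = r.EmployeeID;
```

Splitting the query this way trades a little extra tempdb work for much better cardinality estimates at each step, which is usually a good exchange for a complex, misestimated plan.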
Dang Paul! :w00t:
I *really* wish that you would write a book with all of this stuff in it. Every time you post one of these I am just dumbstruck with how much you can coherently read from a query plan, and how much more you can infer from it.:Wow:
I'll second THAT! Go for it, Paul!
--Jeff Moden
Change is inevitable... Change for the better is not.