November 28, 2017 at 6:30 am
leehbi - Tuesday, November 28, 2017 6:00 AM: Plans attached. I will explore extended events. Good tip about SQL Sentry Plan Explorer.
Those don't actually help. At least the "slow query" version appears to be an "estimated plan" rather than an "actual plan". While there is usually little difference, there are enough tables in this bad boy where there could be a huge number of differences. We need to see the actual plans, please.
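For reference, a minimal way to capture an actual (post-execution) plan is to re-run the statement with SET STATISTICS XML enabled in the same session; SSMS then renders the returned XML as the actual plan. The SELECT below is just a runnable stand-in for the real query.

-- Return the actual (post-execution) plan as an extra XML result set.
SET STATISTICS XML ON;

-- Placeholder query: substitute the real problem query here.
SELECT TOP (10) name
FROM sys.objects
ORDER BY name;

SET STATISTICS XML OFF;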
--Jeff Moden
Change is inevitable... Change for the better is not.
November 28, 2017 at 7:56 am
The 2 plans came via sp_who. I've run the query to get the actual plan; see attached.
November 28, 2017 at 12:44 pm
Grant Fritchey - Tuesday, November 28, 2017 4:30 AM: Jason A. Long - Monday, November 27, 2017 11:47 AM: I'll 2nd that... Plan Explorer really is an awesome tool that provides a ton more information than SSMS and presents that information in a far more usable format (IMO).
I am a little torn on the "far more usable format" bit. It can be. However, since it doesn't, by default, display all properties, we're dependent on them picking the right ones to display. I think they largely nail it, but I'll bet there are misses in there. It's my one critique, since we are so dependent on those properties when you really start working with plans.
No arguments from me on anything you've said here. I suppose I should have said SSMS's execution plan view, rather than SSMS...
I never meant to imply that you couldn't use SSMS to roll your own solution using T-SQL, DMOs and Extended Events... That said, given the amount of information that PE provides, creating a "roll your own" solution that beats it wouldn't be a trivial undertaking.
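For anyone curious, here's a rough sketch of that kind of "roll your own" capture with Extended Events. The query_post_execution_showplan event returns actual plans but is expensive, so filter tightly and drop the session as soon as you're done; the database name below is just an assumption for illustration.

-- Capture actual plans for one database via Extended Events (test/troubleshooting use only).
CREATE EVENT SESSION CaptureActualPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    WHERE (sqlserver.database_name = N'MyWarehouseDB')   -- assumption: target database name
)
ADD TARGET package0.event_file (SET filename = N'CaptureActualPlans');

ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = START;
-- ... run the problem query, then stop and clean up:
ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = STOP;
DROP EVENT SESSION CaptureActualPlans ON SERVER;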
November 28, 2017 at 3:35 pm
leehbi - Tuesday, November 28, 2017 7:56 AM: The 2 plans came via sp_who. I've run the query to get the actual plan; see attached.
Was that a good fast run or a slow run?
--Jeff Moden
Change is inevitable... Change for the better is not.
November 29, 2017 at 2:50 am
slow query
November 30, 2017 at 2:15 am
Very lucky to be able to adjust the underlying data structure (Data warehouse) - by reducing the plan complexity this problem looks to have gone away. Definitely an odd one - I would not like to be a DBA with a fixed OLTP system.
November 30, 2017 at 12:43 pm
Are your Statistics up to date?
I see it estimated 1.5M rows but the actual was 4.2M rows.
Then the sort spilled to TempDB.
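A quick sketch of how you could check and refresh them; the fact table name below is a placeholder, not taken from the posted plans.

-- How fresh are the statistics, and how many modifications have accrued since the last update?
SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.FactSales');   -- assumption: table name

-- If they're stale, a full-scan update often fixes bad row estimates on large, skewed tables
-- (at the cost of a longer maintenance step).
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;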
December 1, 2017 at 3:51 am
I spotted that, but the cardinality issue was consistent in both the fast and slow query plans. I didn't expect it to cause the deviation in performance, especially as the data is very similar in size. The plans are now simpler and more consistent, which is a good result.
December 1, 2017 at 7:50 am
leehbi - Thursday, November 30, 2017 2:15 AM: Very lucky to be able to adjust the underlying data structure (data warehouse) - by reducing the plan complexity this problem looks to have gone away. Definitely an odd one - I would not like to be a DBA with a fixed OLTP system.
What adjustments did you actually make?
--Jeff Moden
Change is inevitable... Change for the better is not.
December 5, 2017 at 3:51 am
I denormalised some of the underlying tables - this resulted in a much simpler plan. Lesson I learnt here is that SQL is great at handling our data most of the time but when it comes to big tables we need to give it a hand.
December 5, 2017 at 5:35 pm
leehbi - Tuesday, December 5, 2017 3:51 AM: I denormalised some of the underlying tables - this resulted in a much simpler plan. Lesson I learnt here is that SQL is great at handling our data most of the time but when it comes to big tables we need to give it a hand.
Oh, be careful ... I've found that it has nothing to do with big tables... it usually has to do with bad code, missing indexes, non-SARGable code, returning too much data behind the scenes, accidental many-to-many joins covered up by a DISTINCT or GROUP BY, people trying to do it all in a single monster query because they incorrectly think "Set Based" means "all in one query", accidental RBAR in the form of rCTEs, non-iTVF functions, joining to calculated columns in views, etc., etc., but no problems with it handling big tables.
To be honest, you've probably created a future "world of hurt" by denormalizing tables to make your query work.
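To illustrate the non-SARGable point with a made-up table (not from the original query): wrapping the indexed column in a function forces a scan, while leaving the column bare lets an index on it seek.

-- Non-SARGable: the function on OrderDate prevents an index seek.
SELECT OrderID
FROM dbo.Orders               -- assumption: illustrative table with an index on OrderDate
WHERE YEAR(OrderDate) = 2017;

-- SARGable rewrite: the column is left bare, so an index on OrderDate can seek on the range.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20170101'
  AND OrderDate <  '20180101';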
--Jeff Moden
Change is inevitable... Change for the better is not.
December 5, 2017 at 6:42 pm
3 obvious problems from the plans:
1) Key lookup on a 240k-row recordset (Object3). Certainly the wrong choice of clustered index.
2) 4.3 million rows scanned from Object4.Index4 only to be filtered down to the 60k rows relevant to the Hash Join. Smells like an implicit conversion or a function applied to the joined column (see the sketch after this list).
3) 11GB is read from Object2.Index6, followed by some kind of data transformation preparing for a join to Object6, which leaves only 3.6MB after the matching. Seems like the same kind of error as in 2).
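A sketch of the usual fixes for 1) and 2), using made-up object and column names since the plans are anonymised:

-- 1) Key lookup on ~240k rows: make the nonclustered index covering so the lookup disappears.
CREATE NONCLUSTERED INDEX IX_BigTable_Covering
    ON dbo.BigTable (JoinKey)          -- placeholder names
    INCLUDE (Col1, Col2);

-- 2) Implicit conversion: a join between mismatched types (e.g. VARCHAR vs NVARCHAR)
--    converts the column on one side and kills the seek. Compare the declared types of the
--    join columns and make them match (or cast the literal/variable, never the column).
SELECT c.name, t.name AS type_name, c.max_length
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id IN (OBJECT_ID(N'dbo.BigTable'), OBJECT_ID(N'dbo.OtherTable'))
  AND c.name = N'JoinKey';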
BTW, fast and slow plans look identical.
I'd suspect that fast runs happen when the cached data from previous runs are still sitting in memory, so there is no need to re-read 12GB of data from disk.
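One way to test that theory on a non-production box is to flush the clean buffer pool pages and compare a cold run against a warm one; logical vs physical reads will tell the story.

-- Test systems only: force the next run to read from disk.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run the problem query here, cold, then again warm ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;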
_____________
Code for TallyGenerator
December 6, 2017 at 1:45 am
leehbi - Tuesday, December 5, 2017 3:51 AM: I denormalised some of the underlying tables - this resulted in a much simpler plan. Lesson I learnt here is that SQL is great at handling our data most of the time but when it comes to big tables we need to give it a hand.
What do you mean by "big"? This morning I'm working with a table which has over 12,000,000,000 rows. Performance is fine.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden