June 10, 2020 at 8:12 am
Hi Experts,
We have a stored procedure to search users. In order to improve performance, I rebuilt the index, but the performance of the query degraded. I have attached the actual execution plans from before and after the index rebuild.
Can you help me analyze why the performance degraded?
June 10, 2020 at 1:28 pm
First up, defragging indexes isn't a great way to go about enhancing performance. In fact, the argument can be made that defragging hurts performance. If the index is fragmenting due to lots of data changes, letting it simply take up a little more room can enhance performance because you're not constantly resplitting pages, etc.
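If you want to see what the fragmentation actually looks like before deciding to rebuild anything, a quick check like this works (a minimal sketch; I'm using the dbo.tc_stk table visible in your plans, so substitute whatever table you're looking at):

-- Fragmentation and size per index; 'LIMITED' keeps the scan itself cheap.
SELECT i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.tc_stk'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;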
However, the most likely culprit here is not that you defragmented the index. The most likely culprit is that, as part of defragging the index, the statistics were updated. Although, as labeled, your Before is running in 1156ms and your After is running in 157ms. Maybe they're mislabeled?
Anyway, changes to statistics lead to changes to plan shapes. If you look at all three plans, they're almost identical, except for one join and one scan in what's labeled as the Before plan. It's using a Merge Join, Node ID 21, in the Before plan. The scan of [PARISQA_04].[dbo].[tc_stk].[taxid_idx] [A] is ordered in the Before plan. If you look at the After plan (which, according to the plan, ran faster, not slower), you'll see a Hash Match and an unordered scan. That's the difference. In the After plan, the row estimate perfectly matches the actual rows. In the Before plan, it's slightly off. That's the only real difference I can spot that would lead to changes in behavior. Otherwise, everything is the same.
I looked at all the other time metrics on the two sets of plans. If you're accurate in the labeling, the elapsed times for the After plans are a lot faster in two of the plans and only very slightly slower in one. Are you sure you don't have them backwards?
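One way to confirm the statistics angle: a rebuild updates the index's statistics with the equivalent of a full scan, and you can see when each stats object on the table was last touched with something like this (a sketch against the tc_stk table from your plans):

-- Last update time and sample rate for each statistics object on the table.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.tc_stk');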
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 10, 2020 at 2:50 pm
Grant Fritchey wrote: However, the most likely culprit here is not that you defragmented the index. The most likely culprit is that, as part of defragging the index, the statistics were updated.
I have to seriously disagree with that. I rebuild statistics on my main databases every night and have been doing so for a very long time. Statistics updates do cause recompiles, but that's usually not a problem. Doing the wrong kind of index maintenance (also known as "Current Best Practices") was the actual problem and, if you check for page splits and blocking over time, you'll see that's true.
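For what it's worth, the nightly job is nothing exotic; the core of it is just this (a sketch with an example table name), and the page-split check I mentioned can lean on the leaf allocation counts:

-- The heart of the nightly job: refresh stats with a full scan.
-- Table name is only an example; the real job loops over the tables that need it.
UPDATE STATISTICS dbo.tc_stk WITH FULLSCAN;

-- Leaf page allocations are a reasonable proxy for page splits over time.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
ORDER BY ios.leaf_allocation_count DESC;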
The OP also asked a nearly identical question about having the same problem with 3000 tables. Please see my extended answer at the following link, as well as my other posts below it.
https://www.sqlservercentral.com/forums/topic/index-fill-factor-5#post-3759082
--Jeff Moden
Change is inevitable... Change for the better is not.
June 10, 2020 at 2:53 pm
@arvind.patil 98284,
Why are you asking the same basic question you asked on your other thread? The answer hasn't changed just because you changed the problem from 3000 tables to just one. Please refer to my previous response on your previous thread at the following link.
https://www.sqlservercentral.com/forums/topic/index-fill-factor-5#post-3759082
--Jeff Moden
Change is inevitable... Change for the better is not.
June 10, 2020 at 3:12 pm
Grant Fritchey wrote: However, the most likely culprit here is not that you defragmented the index. The most likely culprit is that, as part of defragging the index, the statistics were updated.
I have to seriously disagree with that. I rebuild statistics on my main databases every night and have been doing so for a very long time. Statistics updates do cause recompiles, but that's usually not a problem. Doing the wrong kind of index maintenance (also known as "Current Best Practices") was the actual problem and, if you check for page splits and blocking over time, you'll see that's true.
The OP also asked a nearly identical question about having the same problem with 3000 tables. Please see my extended answer at the following link, as well as my other posts below it.
https://www.sqlservercentral.com/forums/topic/index-fill-factor-5#post-3759082
You know I don't argue with you, bud. However, what he's shown here is a changed execution plan. I'm a little unclear which was before and which was after, but there's absolutely a change. That change also resulted in a change in performance. Now, if he labeled them correctly, it got faster, not slower (in fact, I think that's true because the estimates on the faster query are more accurate, not less, but we only have what @arvind.patil 98284 says about it to go on). That's statistics changes resulting in a plan difference. And you know, as I know, statistics updates are generally (not 100% of the time, but 99.x%) great things.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 10, 2020 at 3:41 pm
I would also question the MAXDOP and cost threshold for parallelism (CTFP) settings - you seem to have a server with 64 hyper-threaded cores (2 NUMA nodes), and you seem to have either a MAXDOP of 0 (zero) or 32 -- this is assuming I read it correctly.
I've very rarely seen a server where having that MAXDOP setting would be better than having a value following the "recommended" guidelines https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-max-degree-of-parallelism-server-configuration-option?view=sql-server-ver15 - do note that even though it says
Server with multiple NUMA nodes and less than or equal to 16 logical processors per NUMA node: keep MAXDOP at or below the number of logical processors per NUMA node
where many would use the same number as the logical processors per node (e.g. on a server with 2 NUMA nodes and 16 threads per node, MAXDOP 16), I have found that sticking with MAXDOP 8 was normally better than going with 16.
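For reference, checking and changing it is just sp_configure (a sketch; test whatever value you land on before trusting it):

-- 'max degree of parallelism' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the current value, then cap it at 8 as a starting point.
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;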
Obviously, if your company has tested this and proved that your current setting is optimal, please ignore this.
And a missing index on tc_system_code could help a bit - especially if it avoids the table spool.
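Something along these lines might do it, but note that the key and include columns below are purely hypothetical since we can't see the table definition - take the real ones from the seek predicates and output list in your plan:

-- Hypothetical index sketch: column names are made up for illustration only.
CREATE NONCLUSTERED INDEX IX_tc_system_code_lookup
    ON dbo.tc_system_code (code_type, code_value)  -- hypothetical key columns
    INCLUDE (code_description);                    -- hypothetical included column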
But as Grant said, if your Before/After names are correct, then the After was faster.
June 10, 2020 at 4:09 pm
Hi guys,
I have a very basic question about comparing two execution plans.
When comparing execution plans, I usually look at the estimated subtree cost of the leftmost operator in both plans; I never compare their overall timings.
But recently I found a case where the subtree cost of the leftmost operator was higher, yet the execution time was lower than the other plan's.
Note: I do check the inner details of the plans. But what I want to know is: which key metrics should I check at a high level to find out which plan is faster?
June 10, 2020 at 4:29 pm
Hi guys,
I have a very basic question about comparing two execution plans.
When comparing execution plans, I usually look at the estimated subtree cost of the leftmost operator in both plans; I never compare their overall timings.
But recently I found a case where the subtree cost of the leftmost operator was higher, yet the execution time was lower than the other plan's.
Note: I do check the inner details of the plans. But what I want to know is: which key metrics should I check at a high level to find out which plan is faster?
There is NOTHING in an Estimated or even an Actual Execution Plan that you can compare to another execution plan that will positively give you the right answer, because ALL execution plans, including the Actual Execution Plan, contain estimates. Granted, they're great for seeing what is happening and may lead you in the right direction, but the ONLY way to accurately determine which of (for example) two or three different code snippets is actually the best is to execute them and, with as little interference as possible, measure both their resource usage and their duration.
I've demonstrated many times (once for Grant Fritchey while he was writing his book on execution plans) that it's not difficult to write code where the plan says Code-A will take nearly 100% of the time and Code-B will take nearly 0%, only to have the exact opposite be true when you actually run the code.
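If you're wondering how to do that measuring, it doesn't need to be fancy (a sketch; run each snippet a few times and throw out the first, cold-cache run):

-- Measure real CPU, elapsed time, and reads instead of trusting plan costs.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- ...run Code-A here, then Code-B...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- Compare the CPU time, elapsed time, and logical reads reported
-- in the Messages tab for each snippet.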
--Jeff Moden
Change is inevitable... Change for the better is not.
June 10, 2020 at 4:58 pm
Hi guys,
I have a very basic question about comparing two execution plans.
When comparing execution plans, I usually look at the estimated subtree cost of the leftmost operator in both plans; I never compare their overall timings.
But recently I found a case where the subtree cost of the leftmost operator was higher, yet the execution time was lower than the other plan's.
Note: I do check the inner details of the plans. But what I want to know is: which key metrics should I check at a high level to find out which plan is faster?
What Jeff says. The estimated values are just that, calculated estimations of behavior. They're not the behavior itself.
The thing about execution plans is that they do describe behavior. They don't, however, measure behavior. Well, if you capture a plan plus its runtime metrics, you get the measure, but that's the runtime metrics, not the plan. Also, to get accurate measurements, it's better to capture the metrics independently of capturing the plan (and turn off plan capture while doing that).
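If it helps, a minimal Extended Events session does that cleanly (a sketch; the session and file names are just examples):

-- Capture batch timings without also capturing plans, since plan capture
-- itself skews the numbers. Names here are examples only.
CREATE EVENT SESSION QueryTimings ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'QueryTimings');
GO
ALTER EVENT SESSION QueryTimings ON SERVER STATE = START;
-- Run the code under test, then read duration, cpu_time, and logical_reads
-- from the event file.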
For lots and lots more detail on this topic, get a copy of my book. It's free to download. You have to pay for a paper copy.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning