August 5, 2016 at 12:26 pm
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
I forgot to mention that in these plans there will be a missing index recommendation on NewsletterHistory. I don't understand the recommendation because, in the only place I can find that table in the plan, there is already an index seek on a non-clustered index. I didn't want to create what would essentially have been a duplicate index. Look at the properties of that seek and see how many times it's executed.
247 times.
Missing Index Recommendation -
USE [MortgageReturns4]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[NewsletterHistory] ([NewsletterID],[ContactRecordID])
INCLUDE ([DateSent])
GO
Index it is using -
CREATE INDEX [IX_NewsletterHistory_NewsletterId_CustomerId_ContactRecordID_StatusID_INCL] ON [dbo].[NewsletterHistory]
( [NewsletterID], [CustomerID], [ContactRecordID], [StatusID] )
INCLUDE ( [DateSent]) WITH (FILLFACTOR=100);
The difference here is essentially the order of where ContactRecordId appears in the index.
Unless it's an absolutely huge table, I'd try the recommended index just to see if it were a reasonable recommendation for the query you're having problems with. If it's not, drop it and make a better human guess.
If it does help a lot, remember it but still drop it. Then find the query(ies) that the existing index supports and check them. Then, change the order of that index and see if the queries still perform at least as well. If they do, then see if the query you're talking about also benefitted. If it does, you're done. If it doesn't, then you may actually need the extra index.
The table has 109 million rows. 16 GB of index space and 9 GB of data space. I can try creating this index after hours then seeing if things get better.
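Before creating a new index on a table that size, it can help to see how the existing index is actually being used. A minimal sketch of that check, using the table and index from this thread against `sys.dm_db_index_usage_stats` (counters reset on instance restart, so treat the numbers accordingly):

```sql
-- Sketch: usage counters for the existing indexes on NewsletterHistory,
-- to weigh before dropping, reordering, or duplicating any of them.
SELECT  i.name       AS index_name,
        us.user_seeks,
        us.user_scans,
        us.user_lookups,
        us.user_updates,      -- write cost the index imposes
        us.last_user_seek
FROM    sys.indexes AS i
JOIN    sys.dm_db_index_usage_stats AS us
          ON  us.object_id = i.object_id
          AND us.index_id  = i.index_id
WHERE   us.database_id = DB_ID()
  AND   i.object_id    = OBJECT_ID(N'dbo.NewsletterHistory');
```

A high `user_updates` count relative to seeks and scans is a hint that an extra near-duplicate index would be mostly write overhead.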
August 5, 2016 at 1:53 pm
lmarkum (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
I forgot to mention that in these plans there will be a missing index recommendation on NewsletterHistory. I don't understand the recommendation because, in the only place I can find that table in the plan, there is already an index seek on a non-clustered index. I didn't want to create what would essentially have been a duplicate index. Look at the properties of that seek and see how many times it's executed.
247 times.
Missing Index Recommendation -
USE [MortgageReturns4]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[NewsletterHistory] ([NewsletterID],[ContactRecordID])
INCLUDE ([DateSent])
GO
Index it is using -
CREATE INDEX [IX_NewsletterHistory_NewsletterId_CustomerId_ContactRecordID_StatusID_INCL] ON [dbo].[NewsletterHistory]
( [NewsletterID], [CustomerID], [ContactRecordID], [StatusID] )
INCLUDE ( [DateSent]) WITH (FILLFACTOR=100);
The difference here is essentially the order of where ContactRecordId appears in the index.
So you can't seek on your index for the columns the missing index suggestion recommends: CustomerID is "in the way".
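To make Kevin's point concrete, here is a hypothetical query shape (not the actual problem query) that matches the missing index recommendation:

```sql
-- With the existing key (NewsletterID, CustomerID, ContactRecordID, StatusID),
-- this WHERE clause can seek only on NewsletterID; ContactRecordID must be
-- applied as a residual predicate because CustomerID sits between them in
-- the key. The recommended key (NewsletterID, ContactRecordID) would allow
-- a two-column seek. Query shape is illustrative only.
SELECT DateSent
FROM   dbo.NewsletterHistory
WHERE  NewsletterID    = @NewsletterID
  AND  ContactRecordID = @ContactRecordID;
```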
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 5, 2016 at 1:54 pm
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
I forgot to mention that in these plans there will be a missing index recommendation on NewsletterHistory. I don't understand the recommendation because, in the only place I can find that table in the plan, there is already an index seek on a non-clustered index. I didn't want to create what would essentially have been a duplicate index. Look at the properties of that seek and see how many times it's executed.
247 times.
Missing Index Recommendation -
USE [MortgageReturns4]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[NewsletterHistory] ([NewsletterID],[ContactRecordID])
INCLUDE ([DateSent])
GO
Index it is using -
CREATE INDEX [IX_NewsletterHistory_NewsletterId_CustomerId_ContactRecordID_StatusID_INCL] ON [dbo].[NewsletterHistory]
( [NewsletterID], [CustomerID], [ContactRecordID], [StatusID] )
INCLUDE ( [DateSent]) WITH (FILLFACTOR=100);
The difference here is essentially the order of where ContactRecordId appears in the index.
Unless it's an absolutely huge table, I'd try the recommended index just to see if it were a reasonable recommendation for the query you're having problems with. If it's not, drop it and make a better human guess.
If it does help a lot, remember it but still drop it. Then find the query(ies) that the existing index supports and check them. Then, change the order of that index and see if the queries still perform at least as well. If they do, then see if the query you're talking about also benefitted. If it does, you're done. If it doesn't, then you may actually need the extra index.
IIRC the index suggestion benefit wasn't actually that high, right?
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 5, 2016 at 1:55 pm
lmarkum (8/5/2016)
I am using Ola Hallengren to handle the index and stats maintenance.
That's GREAT, but it actually doesn't answer the question. 😎
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 5, 2016 at 4:35 pm
TheSQLGuru (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
I forgot to mention that in these plans there will be a missing index recommendation on NewsletterHistory. I don't understand the recommendation because, in the only place I can find that table in the plan, there is already an index seek on a non-clustered index. I didn't want to create what would essentially have been a duplicate index. Look at the properties of that seek and see how many times it's executed.
247 times.
Missing Index Recommendation -
USE [MortgageReturns4]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[NewsletterHistory] ([NewsletterID],[ContactRecordID])
INCLUDE ([DateSent])
GO
Index it is using -
CREATE INDEX [IX_NewsletterHistory_NewsletterId_CustomerId_ContactRecordID_StatusID_INCL] ON [dbo].[NewsletterHistory]
( [NewsletterID], [CustomerID], [ContactRecordID], [StatusID] )
INCLUDE ( [DateSent]) WITH (FILLFACTOR=100);
The difference here is essentially the order of where ContactRecordId appears in the index.
Unless it's an absolutely huge table, I'd try the recommended index just to see if it were a reasonable recommendation for the query you're having problems with. If it's not, drop it and make a better human guess.
If it does help a lot, remember it but still drop it. Then find the query(ies) that the existing index supports and check them. Then, change the order of that index and see if the queries still perform at least as well. If they do, then see if the query you're talking about also benefitted. If it does, you're done. If it doesn't, then you may actually need the extra index.
IIRC the index suggestion benefit wasn't actually that high, right?
Sometimes, it's as incorrect in that area as the recommendation itself.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 5, 2016 at 4:35 pm
TheSQLGuru (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
Jeff Moden (8/5/2016)
lmarkum (8/5/2016)
I forgot to mention that in these plans there will be a missing index recommendation on NewsletterHistory. I don't understand the recommendation because, in the only place I can find that table in the plan, there is already an index seek on a non-clustered index. I didn't want to create what would essentially have been a duplicate index. Look at the properties of that seek and see how many times it's executed.
247 times.
Missing Index Recommendation -
USE [MortgageReturns4]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[NewsletterHistory] ([NewsletterID],[ContactRecordID])
INCLUDE ([DateSent])
GO
Index it is using -
CREATE INDEX [IX_NewsletterHistory_NewsletterId_CustomerId_ContactRecordID_StatusID_INCL] ON [dbo].[NewsletterHistory]
( [NewsletterID], [CustomerID], [ContactRecordID], [StatusID] )
INCLUDE ( [DateSent]) WITH (FILLFACTOR=100);
The difference here is essentially the order of where ContactRecordId appears in the index.
Unless it's an absolutely huge table, I'd try the recommended index just to see if it were a reasonable recommendation for the query you're having problems with. If it's not, drop it and make a better human guess.
If it does help a lot, remember it but still drop it. Then find the query(ies) that the existing index supports and check them. Then, change the order of that index and see if the queries still perform at least as well. If they do, then see if the query you're talking about also benefitted. If it does, you're done. If it doesn't, then you may actually need the extra index.
IIRC the index suggestion benefit wasn't actually that high, right?
Estimated benefit: 86,736; uses: 2,719; impact: 31.9%; avg query cost: 1.0.
And sorry about the out of context post about Ola's script. I was trying to quote someone who was asking me about making sure stats were updating and I wasn't patient enough for the quote feature to work. I didn't realize that the browser was still trying to execute the Quote function and I just hit Post instead.
On that note, I just came across something in a script I was reviewing that I remember seeing before: STATISTICS_NORECOMPUTE is set to ON for some CREATE INDEX statements in our environment. If I understood the articles I read correctly, that means AUTO_UPDATE_STATISTICS is prevented from updating stats on those indexes. I will check whether that is the case for any of the indexes used by this query, and I will change the setting to OFF in all the index DDL database-wide.
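Finding the affected objects can be done from the catalog views. A sketch that lists every statistics object (index statistics included) flagged NO_RECOMPUTE:

```sql
-- Sketch: list statistics with the NO_RECOMPUTE flag set, i.e. statistics
-- that auto-update stats is suppressed for.
SELECT  OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
        OBJECT_NAME(s.object_id)        AS table_name,
        s.name                          AS stats_name,
        s.no_recompute
FROM    sys.stats AS s
WHERE   s.no_recompute = 1
  AND   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name;
```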
August 5, 2016 at 5:14 pm
It was me that talked about stats and Kevin is correct. Even though you're using Ola's fine code, you've not verified the condition of the stats on the table(s) in question. I'd make sure that they're up to date.
Kevin also suggested a couple of other things and even the execution plan is making a similar suggestion. The implicit conversions may be killing this.
I also notice that a good number of the index seeks are single row seeks executing 391 times. Once you get the stats up to speed and recompile this bad boy, it will be interesting to see what happens with those RBAR seeks. Keep in mind that the index recommendation is only 20% of the estimated load. All that small stuff makes up the other 80% or so.
While this doesn't qualify as the proverbial "monster join" table, it may well be worth your while to divvy it up using "Divide'n'Conquer" methods that store an interim result in a Temp Table. Contrary to popular belief, "Set Based" does not mean "All in one query". I have seen it where the simple "pre-isolation" of the core rows into a Temp Table, and then joining to that Temp Table to get the rest of the stuff, took a server-crippling 3-hour query down to just several seconds.
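The pattern Jeff describes can be sketched as follows; the table and column names below are illustrative placeholders, not the actual query from this thread:

```sql
-- Hedged sketch of the "Divide'n'Conquer" pattern: isolate the small core
-- row set first, then join the large tables to that interim result instead
-- of to each other. All names and predicates are placeholders.
SELECT  ch.ContactRecordID
INTO    #CoreRows
FROM    dbo.ContactHistory AS ch
WHERE   ch.ActivityDate >= @StartDate
  AND   ch.ActivityDate <  @EndDate;

-- Index the interim result so the follow-up join can seek.
CREATE CLUSTERED INDEX cx_CoreRows ON #CoreRows (ContactRecordID);

SELECT  c.ContactRecordID,
        nh.NewsletterID,
        nh.DateSent
FROM    #CoreRows AS c
JOIN    dbo.NewsletterHistory AS nh
          ON nh.ContactRecordID = c.ContactRecordID;
```

Because the temp table is tiny and indexed, the optimizer also gets accurate cardinality for the second query, which often stabilizes the plan.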
--Jeff Moden
Change is inevitable... Change for the better is not.
August 5, 2016 at 7:45 pm
Jeff Moden (8/5/2016)
While this doesn't qualify as the proverbial "monster join" table, it may well be worth your while to divvy it up using "Divide'n'Conquer" methods that store an interim result in a Temp Table. Contrary to popular belief, "Set Based" does not mean "All in one query". I have seen it where the simple "pre-isolation" of the core rows into a Temp Table, and then joining to that Temp Table to get the rest of the stuff, took a server-crippling 3-hour query down to just several seconds.
I've done that MANY times over the years too. And it can also avoid the old "runs like a bat out of hell sometimes and like a 3-legged dog others" syndrome too.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 5, 2016 at 8:29 pm
TheSQLGuru (8/5/2016)
Jeff Moden (8/5/2016)
While this doesn't qualify as the proverbial "monster join" table, it may well be worth your while to divvy it up using "Divide'n'Conquer" methods that store an interim result in a Temp Table. Contrary to popular belief, "Set Based" does not mean "All in one query". I have seen it where the simple "pre-isolation" of the core rows into a Temp Table, and then joining to that Temp Table to get the rest of the stuff, took a server-crippling 3-hour query down to just several seconds.
I've done that MANY times over the years too. And it can also avoid the old "runs like a bat out of hell sometimes and like a 3-legged dog others" syndrome too.
'Zactly!
--Jeff Moden
Change is inevitable... Change for the better is not.
August 5, 2016 at 11:47 pm
Kevin and Jeff: Thank you both for the time you have spent on this thread.
I did find 165 indexes that had the NO_RECOMPUTE option on the stats. I am in the process of correcting that setting by switching it to OFF and rebuilding the indexes. Some very large tables that are critical to the application had the setting configured to ON, so stats were not updating very often at all. I will spend the better part of this weekend getting this work done.
I will definitely look into all the suggestions about re-writing the offending query.
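One note on the mechanics: a full index rebuild isn't strictly required just to clear NORECOMPUTE. Per the documented behavior, running UPDATE STATISTICS without the NORECOMPUTE option, or using `sp_autostats`, re-enables automatic updates. A sketch using the index name from this thread:

```sql
-- Sketch: re-enable automatic statistics updates without rebuilding.
-- Option 1: flip the flag for all statistics on the table.
EXEC sys.sp_autostats @tblname = N'dbo.NewsletterHistory',
                      @flagc   = 'ON';

-- Option 2: update a single statistic WITHOUT the NORECOMPUTE option,
-- which clears the flag for that statistic as a side effect.
UPDATE STATISTICS dbo.NewsletterHistory
    IX_NewsletterHistory_NewsletterId_CustomerId_ContactRecordID_StatusID_INCL
    WITH FULLSCAN;
```

On 109 million rows, FULLSCAN is itself a heavy operation, so this still belongs in an after-hours window.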
August 6, 2016 at 10:01 am
lmarkum (8/5/2016)
Kevin and Jeff: Thank you both for the time you have spent on this thread. I did find 165 indexes that had the NO_RECOMPUTE option on the stats. I am in the process of correcting that setting by switching it to OFF and rebuilding the indexes. Some very large tables that are critical to the application had the setting configured to ON, so stats were not updating very often at all. I will spend the better part of this weekend getting this work done.
I will definitely look into all the suggestions about re-writing the offending query.
Changing that setting may not actually help on the larger tables at all. IIRC, something like 20% of the rows in the table need to suffer a change before stats are auto-magically rebuilt and, of course, that will happen when you can least afford it.
Save yourself some time, effort, and log space. Make a job that rebuilds the stats during a "quiet time" on the table.
As a bit of a sidebar, I've got some fairly large, nearly 100 GB tables in many of my systems. I took Brent Ozar's hypothetical suggestion and stopped all index maintenance on my "money maker" system way back on 18 Jan 2016 and ONLY update stats. I thought he was drinking bong water with 2 straws and expected massive timeouts, database growth, and a bunch of other problems but, much to my surprise, the "natural fill factor" due to the occasional page split appears to have stopped all the blocking problems that I had after index rebuilds, performance increased, and overall CPU usage dropped from 12-18% down to 3-8% over time. Equally surprisingly, the batch jobs either stayed the same for performance or also had performance increases.
Kendra Little also wrote about it, and she discovered that the problems she was solving by rebuilding indexes weren't due to the index rebuild itself. They were due to the fact that index rebuilds inherently cause a stats update.
It may not be for everyone, and it certainly flies in the face of all that we've had drilled into us but, again, I've not done a lick of index maintenance since 18 Jan 2016 and have actually seen a significant improvement in across-the-board performance. You DO have to rebuild stats that need it, though. And don't get caught in the trap of rebuilding stats based on the age of the last stats update... a table that has suffered no changes does not need its stats updated.
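Jeff's last point, update by change volume, not by age, can be checked directly. A sketch using `sys.dm_db_stats_properties` (available from SQL Server 2008 R2 SP2 / 2012 SP1 onward):

```sql
-- Sketch: rank statistics by how many row modifications they have
-- accumulated since their last update, so a stats job can target only
-- the ones that actually changed.
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        s.name                   AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND   sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```

A table with `modification_counter = 0` is exactly the "no changes, no update needed" case Jeff describes, and this query skips it.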
--Jeff Moden
Change is inevitable... Change for the better is not.
August 6, 2016 at 1:28 pm
lmarkum (8/5/2016)
Kevin and Jeff: Thank you both for the time you have spent on this thread. I did find 165 indexes that had the NO_RECOMPUTE option on the stats. I am in the process of correcting that setting by switching it to OFF and rebuilding the indexes. Some very large tables that are critical to the application had the setting configured to ON, so stats were not updating very often at all. I will spend the better part of this weekend getting this work done.
I will definitely look into all the suggestions about re-writing the offending query.
BE CAREFUL THERE!!! Stats updates on very large tables can definitely harm system performance. The real issue there is that they will do it whenever they cross that 20%+500 rows threshold. And if that is a bad time for your system to be hammering the IO and RAM then tough luck for you. Even if you are 24/7/365 there are often times on such systems where there is less activity than others and you can schedule your manual updates for those times, especially on the really big tables.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 6, 2016 at 6:59 pm
TheSQLGuru (8/6/2016)
lmarkum (8/5/2016)
Kevin and Jeff: Thank you both for the time you have spent on this thread. I did find 165 indexes that had the NO_RECOMPUTE option on the stats. I am in the process of correcting that setting by switching it to OFF and rebuilding the indexes. Some very large tables that are critical to the application had the setting configured to ON, so stats were not updating very often at all. I will spend the better part of this weekend getting this work done.
I will definitely look into all the suggestions about re-writing the offending query.
BE CAREFUL THERE!!! Stats updates on very large tables can definitely harm system performance. The real issue there is that they will do it whenever they cross that 20%+500 rows threshold. And if that is a bad time for your system to be hammering the IO and RAM then tough luck for you. Even if you are 24/7/365 there are often times on such systems where there is less activity than others and you can schedule your manual updates for those times, especially on the really big tables.
The 20% + 500 rows threshold is much lower with trace flag 2371 on, which I have enabled. However, I understand your point: if I change NORECOMPUTE to OFF on a large table, I could end up with stats updates in the middle of the day when the system is busiest. I did look at a recent article by Erik Darling of BrentOzar.com where he talked about just updating the stats using Ola's script, and I have created a job that will run after hours to implement that approach.
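For reference, a stats-only run of Ola Hallengren's IndexOptimize looks roughly like this. The parameter names are from his published documentation, but verify them against the installed version:

```sql
-- Sketch: statistics-only maintenance with Ola Hallengren's IndexOptimize.
-- Setting all three fragmentation actions to NULL skips index maintenance
-- entirely; only modified statistics are updated.
EXECUTE dbo.IndexOptimize
        @Databases              = 'USER_DATABASES',
        @FragmentationLow       = NULL,
        @FragmentationMedium    = NULL,
        @FragmentationHigh      = NULL,
        @UpdateStatistics       = 'ALL',
        @OnlyModifiedStatistics = 'Y';
```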
August 7, 2016 at 9:32 am
Correct - TF2371 makes it MUCH more likely you will get stats updates at times when your application(s) don't need the additional burden on the server. 🙂
Note: I'm NOT saying one should never enable TF2371. I'm primarily saying that (a) they need to be aware of what could happen when they do and, (b) more importantly, they REALLY should be DOING proper statistics updates. 😎
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 7, 2016 at 10:29 am
TheSQLGuru (8/7/2016)
...more importantly they REALLY should be DOING proper statistics updates.
+ 1,000,000!
--Jeff Moden
Change is inevitable... Change for the better is not.