November 21, 2024 at 3:32 pm
I have an application team that is insisting on daily (and for some, weekly) jobs for UPDATE STATISTICS WITH FULLSCAN on all their databases. These jobs were created years ago by a previous DBA team.
The jobs run very long, often into business hours (one database's job runs for 1.5 days), and they're butting heads with our corporate-implemented index rebuilds / sampled stats updates, and sometimes with the application's own processes.
To be clear, I had nothing to do with setting up any of the indexing / statistics jobs. They were set up before I joined this team. But now I am responsible for fixing everything and removing the blocking. When I asked the app team for permission to disable all the old jobs, there was a lot of panic. Apparently back in 2018, several databases crashed hard, and when they engaged Microsoft on the ticket, they got yelled at for only sampling stats instead of regularly running FULLSCAN. MS DBAs chewed everyone out for not following MS recommendations and white papers regarding this issue.
So... I can't seem to find these white papers via Google. When I search, I find Books Online entries and misc. forum threads that don't quite answer the question I'm about to ask.
Does MS recommend regularly updating stats with FULLSCAN? If so, can anyone point me to these white papers?
All I see in BOL is "run with sampling except for circumstances where a specific workload may need FULLSCAN" with no details on exactly what that statement might mean.
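For concreteness, the difference boils down to statements like these (dbo.Orders is just a placeholder table name):

```sql
-- What the app team's legacy jobs do per table: read every row.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- What the corporate jobs do: let SQL Server pick a sample rate...
UPDATE STATISTICS dbo.Orders;

-- ...or specify one explicitly.
UPDATE STATISTICS dbo.Orders WITH SAMPLE 25 PERCENT;
```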
November 21, 2024 at 4:39 pm
Hello Brandie,
MS Updating statistics
MS When to update statistics
Normally, the default auto-update will do except in very specific cases (you can find non-MS articles about this).
Even then, you would narrow it down to the culprits instead of full-scanning all tables.
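A minimal sketch of how you might find those culprits (assumes sys.dm_db_stats_properties, available from SQL Server 2008 R2 SP2 / 2012 SP1 onward):

```sql
-- List the stats that have actually changed since their last update,
-- ordered by modification count, so only those get a targeted refresh.
SELECT  OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
        OBJECT_NAME(s.object_id)        AS table_name,
        s.name                          AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.rows_sampled,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND   sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```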
November 21, 2024 at 5:15 pm
Stop rebuilding indexes.
Microsoft's documentation states:
Index maintenance decisions should be made after considering multiple factors in the specific context of each workload, including the resource cost of maintenance. They should not be based on fixed fragmentation or page density thresholds alone.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
November 21, 2024 at 8:22 pm
Stop rebuilding indexes.
Microsoft's documentation states:
Index maintenance decisions should be made after considering multiple factors in the specific context of each workload, including the resource cost of maintenance. They should not be based on fixed fragmentation or page density thresholds alone.
Heh... I stopped listening to MS about index maintenance on Monday, the 18th of January 2016. I think the link above is where the old 5/30 recommendation was. They still haven't removed the 5/30 code from the sys.dm_db_index_physical_stats documentation. They also have some sort of a "flexible" bit of code that they direct us to on GitHub. I don't follow that code either.
As for stats, the only place you start to see real time savings is below a 20% sample rate. Things really move along at 10%, and that seems good enough unless you have some seriously skewed data. Nitpicking the stats sample rate above 25% doesn't make what I'd consider to be a substantial difference, so if you need to handle a skewed stat, you might as well go the full Monty with a FULLSCAN.
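As a sketch of what that looks like (dbo.BigTable and SkewedStat are placeholder names; PERSIST_SAMPLE_PERCENT requires SQL Server 2016 SP1 CU4 or later):

```sql
-- A 10 percent sample, persisted so later automatic updates reuse the rate:
UPDATE STATISTICS dbo.BigTable
    WITH SAMPLE 10 PERCENT, PERSIST_SAMPLE_PERCENT = ON;

-- For the one genuinely skewed stat, go the full Monty:
UPDATE STATISTICS dbo.BigTable SkewedStat WITH FULLSCAN;
```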
--Jeff Moden
Change is inevitable... Change for the better is not.
November 22, 2024 at 3:57 pm
Thank you, everyone, for your input.
Now would a high-volume OLTP (lots of daily inserts or deletes or both) make a difference in your opinion?
November 22, 2024 at 4:07 pm
Stop rebuilding indexes.
Microsoft's documentation states:
Index maintenance decisions should be made after considering multiple factors in the specific context of each workload, including the resource cost of maintenance. They should not be based on fixed fragmentation or page density thresholds alone.
I disagree with "stop rebuilding indexes." The quote you provided doesn't seem to support that statement if I take it as is, no caveats.
Regardless, we already have regular checkups on our index rebuilding process to keep it fit for purpose, so I'm not going to bother with that in this discussion. My focus is on the update-stats part, where we're running updates for stats not affected by the rebuilds, plus the multiple colliding jobs from the DBAs of Christmas Past.
November 22, 2024 at 4:13 pm
Jo, I have already read through both links you provided in this thread. I am looking for white papers beyond this information.
EDIT: The application team member was very specific about his white papers reference, though he has no knowledge of these links and doesn't remember, back in 2018, what the reference was about. That's why I'm trying to find extra information on the subject.
November 24, 2024 at 8:51 pm
I disagree with "stop rebuilding indexes." The quote you provided doesn't seem to support that statement if I take it as is, no caveats.
Regardless, we already have regular checkups on our index rebuilding process to keep it fit for purpose, so I'm not going to bother with that in this discussion. My focus is on the update-stats part, where we're running updates for stats not affected by the rebuilds, plus the multiple colliding jobs from the DBAs of Christmas Past.
Like a younger me, you don't understand, Brandie. The index maintenance itself is frequently responsible for more page splits, especially on the morning after, and you haven't actually proven that the index maintenance is doing anything good for you other than the rebuilds updating stats. And if you look at all the BBFAATT (Books, Blogs, Forums, Articles, AI, 'Tubes, and Talks), almost all of them talk about how index fragmentation CAN cause performance problems and how index maintenance CAN fix those problems, but only 3 of them do anything to actually try to prove it, and all 3 suggested the opposite: that the minuscule benefits aren't worth it.
Try this... do your baseline the day after your index maintenance and then go a month without doing any index maintenance; just rebuild stats for a week or two. Then do another baseline. I believe (as it worked for me for nearly 4 years, and for Michael, Ed Wagner, and several other converts for that length of time and, like me now, for much longer) that you won't see any serious degradation, and you might, in fact, see an improvement because you've eliminated a big reason for the page splits.
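A minimal sketch of such a baseline (the snapshot table name is hypothetical; LIMITED mode keeps the scan cheap but only reports fragmentation and page counts):

```sql
-- Snapshot fragmentation today; repeat after the no-maintenance month and diff.
SELECT  GETDATE()                   AS captured_at,
        OBJECT_NAME(ips.object_id)  AS table_name,
        i.name                      AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
INTO    dbo.IndexBaseline_Snapshot          -- hypothetical snapshot table;
                                            -- use a new name (or INSERT) for run 2
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
WHERE   ips.page_count > 1000;              -- ignore trivially small indexes
```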
The only reason why I do any index maintenance is for space recovery, which affects backups, restores, and memory. Like some folks are saying (and I don't have the links anymore or I'd use their names), "If I had enough memory, I'd never do index maintenance ever again."
BTW... the reason why Michael is suggesting it is because I suggested it to him... he tried it and it worked great. Ed Wagner and a few others have had the same good results.
Like I tell people in my introductory presentation on the subject, "It's better to do no index maintenance than it is to do it wrong... and more than 95% of the people doing index maintenance are doing it wrong." 😀 (Especially if your maintenance generically uses REORGANIZE, which doesn't work like most people think it does.)
And, yes, I do have code that proves the index maintenance vs the stats maintenance thing and I'm slowly putting an introductory article together on that.
And, yeah... I have a presentation that proves REORGANIZE can perpetuate, and be a primary cause of, fragmentation. I use random GUIDs to explain that one and, in the end, we can go for MONTHS of inserting 100,000 rows per day into a random-GUID index without going over 1% logical fragmentation. Here's the link for that and remember... contrary to the title, it's NOT just about random GUIDs.
Also, if you're listening with a headset, be aware that they added 3 advertisements at the 15:00, 30:15, and 45:25 marks. The advertisements are important to help support this event, but I wanted to let you know that they're sudden and they're loud! They WILL lift your headset!
https://www.youtube.com/watch?v=rvZwMNJxqVo
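If you want to poke at the random-GUID behavior yourself before watching, here's a tiny repro sketch (not the presentation's actual demo code): insert into a NEWID()-keyed clustered index and measure the fragmentation; the fill factor is the lever the presentation explores.

```sql
-- Throwaway repro: a clustered index keyed on random GUIDs.
CREATE TABLE dbo.GuidTest
(
    ID     UNIQUEIDENTIFIER NOT NULL CONSTRAINT DF_GuidTest_ID DEFAULT NEWID(),
    Filler CHAR(100)        NOT NULL CONSTRAINT DF_GuidTest_Filler DEFAULT 'x',
    CONSTRAINT PK_GuidTest PRIMARY KEY CLUSTERED (ID)
);
GO
-- Simulate a "day" of inserts (GO n is an SSMS/sqlcmd batch-repeat shortcut).
INSERT INTO dbo.GuidTest DEFAULT VALUES;
GO 100000
-- Measure logical fragmentation at the leaf level of the clustered index.
SELECT  avg_fragmentation_in_percent, page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.GuidTest'),
                                       1, NULL, 'DETAILED')
WHERE   index_level = 0;
```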
--Jeff Moden
Change is inevitable... Change for the better is not.
November 25, 2024 at 3:02 pm
Jeff, thank you for that information. I will research.
So we are using the Hallengren solution for index maintenance and, yes, it involves reorgs. I removed the reorgs from the servers I'm having issues with because they aren't really buying us anything for stats. And we're testing a solution for only updating stats that have changed.
Also, I'm working on two servers with high OLTP volume and too much workload, which frequently fragment a handful of indexes to 90%.
I'm not sure I can go cold turkey on the indexes, definitely not in production, because this is not something I fully control. It is worth a look, however, and I am going to pass along your comments to one of the managers in charge of this effort.
November 25, 2024 at 3:03 pm
Oh, and can I get a favor? Can everyone update their posts with the proper spelling of my name? Brandie.
November 25, 2024 at 3:38 pm
Jeff, thank you for that information. I will research.
So we are using the Hallengren solution for index maintenance and, yes, it involves reorgs. I removed the reorgs from the servers I'm having issues with because they aren't really buying us anything for stats. And we're testing a solution for only updating stats that have changed.
Also, I'm working on two servers with high OLTP volume and too much workload, which frequently fragment a handful of indexes to 90%.
I'm not sure I can go cold turkey on the indexes, definitely not in production, because this is not something I fully control. It is worth a look, however, and I am going to pass along your comments to one of the managers in charge of this effort.
Going cold turkey was exactly what I did. The operation was 24/7. The maintenance would take 12-14 hours each night. The users in Asia complained daily. In the morning, the users in North America complained until about 2 PM. We averaged 25 problem tickets a WEEK related to database issues. These were all OLTP databases. The largest was 1.5 TB; the rest were in the 500 GB range.
I stopped reindexing. I had nothing to lose. I created a script that updated only changed stats, and updated individual stats instead of the whole table. In the three-year period after making that change, the number of problem tickets related to database issues went to ZERO.
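A minimal sketch of that idea (not the production script): generate one targeted UPDATE STATISTICS command per statistic that has changed since its last update.

```sql
-- Sketch only: one targeted update per changed individual statistic.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'UPDATE STATISTICS '
             + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + N'.'
             + QUOTENAME(OBJECT_NAME(s.object_id)) + N' '
             + QUOTENAME(s.name) + N';' + CHAR(13)
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND   sp.modification_counter > 0;

EXEC sys.sp_executesql @sql;
```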
The generic concept is that scanning additional data pages is far less overhead than the issues caused by page splits and the overhead of reindexing. The fragmentation, even at 99%, is actually a natural fill factor.
Look up "Jeff Moden's Black Arts of Index Maintenance". Take 2 hours to watch that. Capture the page splits that occur after the rebuild.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
November 25, 2024 at 3:51 pm
So I am doing some major learning today. And here I thought I could take it easy on the week of Turkeys! @=)
Thank you both. I have already forwarded the email with Jeff's post to my coworkers and as soon as I finish resolving a few other things, I will be jumping on the webs to do the learnings.
Becuz I wantz to knowz thangs and I am armed with my DBCC TIMEWARP() function!
November 25, 2024 at 8:07 pm
Wow. Just ... WOW. Watched 1 and 2. Now going to watch 3.
Thanks, Jeff.
November 25, 2024 at 9:19 pm
Now watching Black Arts #3 and I just have to ask...
Jeff, where do we get your f'n Tally function? @=) (j/k)
But on a serious note, do you have that IndexDNA tool packaged for sale anywhere?
November 26, 2024 at 11:53 pm
It's free. Like it says in the presentation, send me an email to the email address I provided and I'll send you the zip file that I give folks when they attend a presentation. Once I'm sure you have it, I'll drop your email address. You'll have an fn good time with it. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.