April 12, 2016 at 5:39 pm
I'm currently looking to implement an index maintenance strategy for the first time. As I've never done this before, I'd just like to share some questions and hopefully understand whether this is the right way or if there is a better process.
I've been using MSDN, TechNet and also some online guides to point me in the right direction. The percentage thresholds being passed around as a guide are below:
1. If frag > 5% and <= 30%, then ALTER INDEX REORGANIZE
2. If frag > 30%, then ALTER INDEX REBUILD WITH (ONLINE = ON)
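For reference, those two commands look like this (the table and index names here are just placeholders):
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE -- always an online operation
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (ONLINE = ON) -- ONLINE = ON needs an edition that supports online index operations (e.g. Enterprise)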
Based on the piece of SQL code below, say it returns 1,000 indexes that meet the criteria:
SELECT
    OBJECT_NAME(ind.OBJECT_ID) AS TableName,
    ind.name AS IndexName,
    indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent
FROM
    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
    INNER JOIN sys.indexes ind
        ON ind.object_id = indexstats.object_id
        AND ind.index_id = indexstats.index_id
WHERE
    indexstats.avg_fragmentation_in_percent > 10 -- using 10 as an example
ORDER BY
    indexstats.avg_fragmentation_in_percent DESC
1. If all are above 10%, does this mean I should "Reorganize" or potentially "Rebuild" (if I decided to) all 1,000, or is that overkill?
2. Should I only do a small subset of, say, 100 indexes a night and let my job run once a day for 10 days in case it causes performance issues? Should this process be avoided during office hours?
3. Should I only target the indexes that are most frequently used? If so, how do I know that these are definitely the ones? What is it that I'm looking for as a reason: reads, writes, fill factor?
Sorry for all the questions; I just want to target this correctly the first time around and understand the process going forward so I know best practice.
April 12, 2016 at 9:16 pm
If it's less than 30%, try a reorganize; if it's greater than 30%, just rebuild.
This query should be more appropriate for you, and it will give you the command as well:
SELECT
    OBJECT_NAME(ind.OBJECT_ID) AS TableName,
    SCHEMA_NAME(obj.schema_id) AS SchemaName,
    ind.name AS IndexName,
    indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent,
    CASE
        WHEN indexstats.avg_fragmentation_in_percent BETWEEN 10 AND 30
            THEN 'Reorganize the Index'
        WHEN indexstats.avg_fragmentation_in_percent > 30
            THEN 'Rebuild The Index'
        ELSE 'Ignore'
    END AS [Comments],
    CASE
        WHEN indexstats.avg_fragmentation_in_percent BETWEEN 10 AND 30
            THEN CONCAT('ALTER INDEX ', ind.name, ' ON ', SCHEMA_NAME(obj.schema_id), '.', OBJECT_NAME(ind.OBJECT_ID), ' REORGANIZE')
        WHEN indexstats.avg_fragmentation_in_percent > 30
            THEN CONCAT('ALTER INDEX ', ind.name, ' ON ', SCHEMA_NAME(obj.schema_id), '.', OBJECT_NAME(ind.OBJECT_ID), ' REBUILD WITH (ONLINE = ON)')
        ELSE NULL
    END AS [Command]
FROM
    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
    INNER JOIN sys.indexes ind
        ON ind.object_id = indexstats.object_id
        AND ind.index_id = indexstats.index_id
    INNER JOIN sys.objects obj
        ON obj.object_id = ind.object_id
WHERE indexstats.avg_fragmentation_in_percent > 10
ORDER BY
    indexstats.avg_fragmentation_in_percent DESC
I would suggest narrowing down to the tables that are most commonly used for selects when fixing fragmentation issues; why worry about fragmentation if the table is mostly an insert-only table?
April 12, 2016 at 11:25 pm
Hi,
you can take a look at the scripts from Ola Hallengren:
https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
The index rebuild / reorg decision also depends on the page count.
Kind regards,
Andreas
April 13, 2016 at 12:40 am
Also have a look at the following maintenance plan, which you can use in full or just the pieces for the index optimising that you are interested in: https://ola.hallengren.com/ It is a free tool.
Regarding your questions:
1. If all are above 10%, does this mean I should "Reorganize" or potentially "Rebuild" (if I decided to) all 1,000, or is that overkill? As already stated in previous replies: 30% and below, reorganize if you can; above 30%, rebuild.
2. Should I only do a small subset of, say, 100 indexes a night and let my job run once a day for 10 days in case it causes performance issues? Should this process be avoided during office hours? Depending on the size of the indexes it could take a while to complete, so no, it is not advisable to run this during office hours as it could impact performance.
3. Should I only target the indexes that are most frequently used? If so, how do I know that these are definitely the ones? What is it that I'm looking for as a reason: reads, writes, fill factor? You can use sys.dm_db_index_usage_stats to find the most used indexes. The ones with high user seeks and scans are usually the most used indexes (see the sketch below).
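For example, something along these lines (a rough sketch; how you weight seeks vs. scans vs. updates is a judgment call, and these counters reset when the instance restarts):
SELECT
    OBJECT_NAME(us.object_id) AS TableName,
    i.name AS IndexName,
    us.user_seeks,
    us.user_scans,
    us.user_lookups,
    us.user_updates -- writes that touched the index
FROM
    sys.dm_db_index_usage_stats us
    INNER JOIN sys.indexes i
        ON i.object_id = us.object_id
        AND i.index_id = us.index_id
WHERE us.database_id = DB_ID() -- current database only
ORDER BY (us.user_seeks + us.user_scans + us.user_lookups) DESC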
April 13, 2016 at 1:44 am
reinhardcilliers (4/13/2016)
2. Should I only do a small subset of, say, 100 indexes a night and let my job run once a day for 10 days in case it causes performance issues? Should this process be avoided during office hours? Depending on the size of the indexes it could take a while to complete, so no, it is not advisable to run this during office hours as it could impact performance.
In addition, since this is the first time and you may have some catching up to do, I would first use a few days to catch up (during every maintenance window, reorganize the fragmented indexes with either the highest fragmentation or the highest usage, until time runs out). After that, set up a scheduled job to ensure that index maintenance will be done as and when needed going forward.
April 13, 2016 at 1:46 am
Maybe add a filter on page count too?
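Something like this in the WHERE clause of the query above would do it (1,000 pages is only a commonly quoted starting point, not a hard rule):
WHERE indexstats.avg_fragmentation_in_percent > 10
    AND indexstats.page_count > 1000 -- skip small indexes; fragmentation there rarely matters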
April 13, 2016 at 2:57 am
Why write your own index maintenance script when there are so many good ones out there?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 13, 2016 at 2:47 pm
And here's another one that's absolutely worth checking out:
I wrote a review of it here.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 13, 2016 at 7:32 pm
Thanks for all the replies; at least I know I'm on the right track with this... I'll have a look at the links provided as well.
Thanks again
April 13, 2016 at 7:54 pm
It's a funny thing... I no longer reindex or reorganize on my main production box. Log files were getting blown out by the maintenance actions, the database was expanding when I least needed it to, and, every Monday (the day after the index maintenance jobs ran) for the first half of the day, we were getting some pretty serious blocking because of page splits.
I thought the man was drinking bong water when I first saw his presentation on it but, to make a really long story shorter, I tried Brent Ozar's idea of just not doing any index maintenance and I haven't done any for about 4 months now. I still rebuild statistics (can't get away without that on most of our tables) and I was horribly skeptical but it appears to be working on all of the databases just fine.
Of course, changing the FILL FACTOR on a lot of the indexes (and most of the 3rd party index routines don't do that auto-magically) would help with the blocking that occurs due to index page splits right after the rebuilds but I've got some fairly large tables and some fairly large indexes to go along with them. It appears that the page splits have made sort of a "natural" FILL FACTOR and only in the spots where they're needed instead of wasting all the space of a 90, 80, or 70% FILL FACTOR.
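For anyone who does want to go that route, a fill factor is set as part of the rebuild; the 90 and the index/table names below are only placeholders:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (FILLFACTOR = 90, ONLINE = ON) -- leaves roughly 10% free space per leaf page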
NO. I'm not recommending it as a panacea. As always, IT DEPENDS. For example, if you shrink a file or a database, you're going to need to rebuild indexes because that's a totally different type of fragmentation (totally random instead of somewhat "ordered" by normal activity).
But, without me doing anything else, reads, writes, and CPU have been dropping steadily since I stopped all normal index maintenance. CPU usage across the 16 cores has dropped from 18-25% over the course of the day to 8-15%. Log files have stopped exploding when one of the clustered indexes on a large table has edged into a REORGANIZE category (instead of exploding to more than 30GB, 8GB has been enough, which is important for restores).
Again, not a panacea and there's no guarantee from me or him that it will work in your situation but, despite my initial nearly rabid skepticism, it's been working for me.
Here are a couple of the links that I came across for Brent's suggestion to stop worrying about fragmentation.
https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
https://www.brentozar.com/archive/2013/09/why-index-fragmentation-doesnt-matter-video/
Yep. I thought he was drinking bong water when I first saw these but gave it a try just to see what would happen and I was amazed at the good results I've had so far (~4 months, going on 5) and we have millions of transactions on some of the databases each day. It doesn't seem to have hurt the batch runs, either.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 13, 2016 at 10:22 pm
Interesting Jeff, we were in the same boat as you regarding our indexes. I think we are doing this now as a reactive exercise because it fixed our problem (and has continued to fix it by reorganising/rebuilding on an ad hoc basis). We previously never really reorganised or rebuilt indexes either.
That said, we've been getting performance issues, and upon investigating, the SPs were very basic... 3 to 4 tables joined together with no complex code other than returning values.
The indexes being used in this case were all fragmented at over 90% (actually closer to 99%, I think)... After rebuilding all the indexes that were used, our issue was fixed. The SP now returns instantly.
Our table row counts, however, are in the very high millions and even billions.
Hence we were looking at a strategy to somewhat cater for this in the future.
April 14, 2016 at 2:17 am
Tava (4/13/2016)
After rebuilding all the indexes that were used, our issue was fixed. The SP now returns instantly.
If you did REBUILDs instead of just REORGANIZEs, then it's very likely that the REBUILDs didn't actually do anything for you except take a lot of time. REBUILDs inherently rebuild statistics and THAT's the real key to the success I've had in NOT doing any index maintenance... I make sure that the stats are up to date twice a week.
I'm setting up to be a bit more aggressive in that area.
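For anyone wanting to do something similar, here is a minimal sketch of that kind of stats refresh (sp_updatestats is just one option; the schedule and sampling are up to you, and the table name below is a placeholder):
EXEC sys.sp_updatestats -- refreshes stats in the current database where rows have been modified
UPDATE STATISTICS dbo.Orders WITH FULLSCAN -- or target a specific table with a full scan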
Most of the indexes on the heavily modified tables are fragmented at much more than 90% and the queries have not been affected. This agrees with another "movie" that Kendra Little put together on the subject. I don't have the link just now, and I'm headed for bed at this very late hour.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 14, 2016 at 7:38 am
I've seen this argument before, and I actually lean towards what Jeff & Brent are advocating... if your queries are primarily seek. If you've tuned the system so you're largely avoiding scans, then defragmenting is lots of work for very little return. On the other hand, scans are going to get worse and worse and worse the more those indexes fragment. So, funny enough, if you have a well managed system, you can stop managing it, but if you have a poorly managed system, you need to manage more.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 14, 2016 at 8:12 am
Jeff, the next time you and I are together FTF (SQL Saturday or PASS Summit?) we need to have a long, detailed talk about the things you state below. I wish I had time to dive into this right now but sadly I don't. But you mention a number of things that are either conflicting or just don't seem right at first (or even second) blush. Clearly you are ignoring the long-term effects of ~half-full pages (not mentioned by Brent), and also the near-term effects on RAM usage (mentioned by Brent via 50%FF point). I also wonder if your CPU use has dropped because you are doing more IO due to less data per active page (I have seen that VERY often, with many different causes).
As you say, it depends. And as Grant says seek-dominant stuff can be OK with this mantra (still get hit with the RAM/long-term size issue tho). I truly look forward to our discussion, and hope I can learn something that I can apply to appropriate clients!! Perhaps it will lead to my first-ever blog post. 😎
Jeff Moden (4/13/2016)
It's a funny thing... I no longer reindex or reorganize on my main production box. Log files were getting blown out by the maintenance actions, the database was expanding when I least needed it to, and, every Monday (the day after the index maintenance jobs ran) for the first half of the day, we were getting some pretty serious blocking because of page splits. I thought the man was drinking bong water when I first saw his presentation on it but, to make a really long story shorter, I tried Brent Ozar's idea of just not doing any index maintenance and I haven't done any for about 4 months now. I still rebuild statistics (can't get away without that on most of our tables) and I was horribly skeptical but it appears to be working on all of the databases just fine.
Of course, changing the FILL FACTOR on a lot of the indexes (and most of the 3rd party index routines don't do that auto-magically) would help with the blocking that occurs due to index page splits right after the rebuilds but I've got some fairly large tables and some fairly large indexes to go along with them. It appears that the page splits have made sort of a "natural" FILL FACTOR and only in the spots where they're needed instead of wasting all the space of a 90, 80, or 70% FILL FACTOR.
NO. I'm not recommending it as a panacea. As always, IT DEPENDS. For example, if you shrink a file or a database, you're going to need to rebuild indexes because that's a totally different type of fragmentation (totally random instead of somewhat "ordered" by normal activity).
But, without me doing anything else, read, writes, and CPU have been dropping steadily since I've stopped all normal index maintenance. CPU usage across the 16 core has dropped from 18-25% over the course of the day to 8 to 15%. Log files have stopped exploding when one of the clustered indexes on a large table has edged into a REORGANIZE category (instead of exploding to more than 30GB, 8GB has been enough, which is important for restores).
Again, not a panacea and there's no guarantee from me or him that it will work in your situation but, despite my initial nearly rabid skepticism, it's been working for me.
Here are a couple of the links that I came across for Brent's suggestion to stop worrying about fragmentation.
https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
https://www.brentozar.com/archive/2013/09/why-index-fragmentation-doesnt-matter-video/
Yep. I thought he was drinking bong water when I first saw these but gave it a try just to see what would happen and I was amazed at the good results I've had so far (~4 months, going on 5) and we have millions of transactions on some of the databases each day. It doesn't seem to have hurt the batch runs, either.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 14, 2016 at 9:30 am
TheSQLGuru (4/14/2016)
Jeff, the next time you and I are together FTF (SQL Saturday or PASS Summit?) we need to have a long, detailed talk about the things you state below. I wish I had time to dive into this right now but sadly I don't. But you mention a number of things that are either conflicting or just don't seem right at first (or even second) blush. Clearly you are ignoring the long-term effects of ~half-full pages (not mentioned by Brent), and also the near-term effects on RAM usage (mentioned by Brent via 50%FF point). I also wonder if your CPU use has dropped because you are doing more IO due to less data per active page (I have seen that VERY often, with many different causes). As you say, it depends. And as Grant says seek-dominant stuff can be OK with this mantra (still get hit with the RAM/long-term size issue tho). I truly look forward to our discussion, and hope I can learn something that I can apply to appropriate clients!! Perhaps it will lead to my first-ever blog post. 😎
Jeff Moden (4/13/2016)
It's a funny thing... I no longer reindex or reorganize on my main production box. Log files were getting blown out by the maintenance actions, the database was expanding when I least needed it to, and, every Monday (the day after the index maintenance jobs ran) for the first half of the day, we were getting some pretty serious blocking because of page splits. I thought the man was drinking bong water when I first saw his presentation on it but, to make a really long story shorter, I tried Brent Ozar's idea of just not doing any index maintenance and I haven't done any for about 4 months now. I still rebuild statistics (can't get away without that on most of our tables) and I was horribly skeptical but it appears to be working on all of the databases just fine.
Of course, changing the FILL FACTOR on a lot of the indexes (and most of the 3rd party index routines don't do that auto-magically) would help with the blocking that occurs due to index page splits right after the rebuilds but I've got some fairly large tables and some fairly large indexes to go along with them. It appears that the page splits have made sort of a "natural" FILL FACTOR and only in the spots where they're needed instead of wasting all the space of a 90, 80, or 70% FILL FACTOR.
NO. I'm not recommending it as a panacea. As always, IT DEPENDS. For example, if you shrink a file or a database, you're going to need to rebuild indexes because that's a totally different type of fragmentation (totally random instead of somewhat "ordered" by normal activity).
But, without me doing anything else, read, writes, and CPU have been dropping steadily since I've stopped all normal index maintenance. CPU usage across the 16 core has dropped from 18-25% over the course of the day to 8 to 15%. Log files have stopped exploding when one of the clustered indexes on a large table has edged into a REORGANIZE category (instead of exploding to more than 30GB, 8GB has been enough, which is important for restores).
Again, not a panacea and there's no guarantee from me or him that it will work in your situation but, despite my initial nearly rabid skepticism, it's been working for me.
Here are a couple of the links that I came across for Brent's suggestion to stop worrying about fragmentation.
https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
https://www.brentozar.com/archive/2013/09/why-index-fragmentation-doesnt-matter-video/
Yep. I thought he was drinking bong water when I first saw these but gave it a try just to see what would happen and I was amazed at the good results I've had so far (~4 months, going on 5) and we have millions of transactions on some of the databases each day. It doesn't seem to have hurt the batch runs, either.
Heh... actually, I'm counting on the "half full" pages in the non-clustered indexes. I do agree that those are terrible for clustered indexes but that's where following the rules for clustered index key candidates has really paid off for me.
And yeah, I agree. It DOES sound quite contrary to logic, common sense and, certainly, what have been touted as "best practices" for literally decades.
--Jeff Moden
Change is inevitable... Change for the better is not.