December 12, 2017 at 9:57 pm
Hi,
I got this from the ola.hallengren.com website, and when I run it, it doesn't appear to do anything, but if I run a reorganise from the GUI it works fine: the index goes from 99% to 0.8% fragmentation.
EXECUTE dbo.IndexOptimize
@databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REORGANIZE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'
Can you let me know if there is something not right?
Kris
December 12, 2017 at 11:57 pm
How big is the index? IIRC, by default Ola's script ignores small indexes that don't need to be rebuilt/reorganised.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 13, 2017 at 1:04 am
Hi,
So if you try to execute the script, nothing happens.
And when you execute via the GUI, is that through a Maintenance Plan or by running "Execute" on the stored procedure?
We use the scripts from Ola, and everything works fine.
Kind regards,
Andreas
December 13, 2017 at 1:08 am
I think the benchmark is 1000 pages; if your index has fewer pages than this, the script will ignore the index, as previously mentioned. If you are running the reorganise command from the GUI, I presume you aren't taking this into account.
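If you want to see what the script sees, something along these lines will show the page count and fragmentation for a table's indexes (just a sketch; YourDatabase and dbo.YourTable are placeholder names):
USE YourDatabase; -- placeholder
SELECT i.name AS index_name,
       ps.page_count,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id;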
December 13, 2017 at 1:20 am
OK, that makes sense. Is there something I need to add so it doesn't skip indexes with fewer than 1000 pages? In fact, so it reorganises anything that's fragmented at all, regardless of page count?
Kris
December 13, 2017 at 1:38 am
The number of pages is important. If you have an index that is smaller than 8 pages, its pages will be allocated from mixed extents, and all the reorganizes/rebuilds in the world are not going to stop that index being fragmented. Therefore, the effort to work on these vastly outweighs the performance benefit (if any!).
To answer your question, PageCountLevel is the setting you want. This lets you set the page-count threshold within the script.
The default is 1000 pages, as this is "based on Microsoft's recommendations", per his web page.
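So, roughly, your original call with that threshold lowered would look like the below (an untested sketch; check Ola's documentation for the exact behaviour of the parameter):
EXECUTE dbo.IndexOptimize
@databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REORGANIZE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@PageCountLevel = 0, -- default is 1000; lower it (or set 0) to include small indexes
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'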
One other school of thought: don't fight fragmentation just because you've heard it's bad.
Get all your data in memory and ensure you have up-to-date stats, and you'll do just as well. **
EDIT:
** combats external fragmentation!
As per link: https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
December 13, 2017 at 2:11 am
DimPerson - Wednesday, December 13, 2017 1:38 AM
... the default is 1000 pages as this "based on Microsoft's recommendations" ...
Or, more accurately, a random number that Paul Randal picked one morning when someone forced him to answer the question. Yes, that's true, I got it from Paul himself.
That said, defragging anything smaller than 100 pages really is a waste of time. 1000 is as good a starting point as any.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 13, 2017 at 2:31 am
Grant Fritchey - Wednesday, December 13, 2017 2:11 AM
DimPerson - Wednesday, December 13, 2017 1:38 AM ... the default is 1000 pages as this "based on Microsoft's recommendations" ...
Or, more accurately, a random number that Paul Randal picked one morning when someone forced him to answer the question. Yes, that's true, I got it from Paul himself.
That said, defragging anything smaller than 100 pages really is a waste of time. 1000 is as good a starting point as any.
That is a great fact! 🙂
December 13, 2017 at 3:59 am
The root reason is that fragmentation impacts large range scans from disk. Emphasis *large* and *disk*. If the table is small, then fragmentation will have minimal effect and can probably be ignored (stats updates, though, can't be ignored). If the table is in memory most of the time (which small, frequently used tables will be), fragmentation will have minimal effect.
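If you want to check how much of a given table is actually sitting in the buffer pool, a query along these lines works (a sketch only; 'YourTable' is a placeholder, and it only counts in-row and row-overflow pages):
SELECT o.name AS table_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p ON p.hobt_id = au.container_id AND au.type IN (1, 3) -- IN_ROW_DATA, ROW_OVERFLOW_DATA
JOIN sys.objects AS o ON o.object_id = p.object_id
WHERE bd.database_id = DB_ID()
  AND o.name = 'YourTable' -- placeholder
GROUP BY o.name;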
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 13, 2017 at 6:41 am
Kris-155042 - Tuesday, December 12, 2017 9:57 PM
Hi, I got this from the ola.hallengren.com website, and when I run it, it doesn't appear to do anything, but if I run a reorganise from the GUI it works fine: the index goes from 99% to 0.8% fragmentation.
EXECUTE dbo.IndexOptimize
@databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REORGANIZE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'
Can you let me know if there is something not right?
I could be wrong but it would appear that you defragged the index using a 0 or 100 Fill Factor. Since the index was logically fragmented, that means that information is added to the index in an order other than the order implied by the keys. That also means that the index will become 99% fragmented almost instantly even with small numbers of additional rows/modifications and suffer a whole lot of "bad" page splits.
Other than possible space consolidation for indexes that suffer deletes or secondary expansive (which are expensive) updates (a variable size column is updated to contain wider data), the big purpose behind defragging indexes is to prevent page splits, which really slow things down and can take a major toll on the log file.
To wit, if you're not going to take the time to figure out what a decent FILL FACTOR for the index is, for the love of performance, stop defragging indexes using a 0 or 100 FILL FACTOR at all. Seriously... I've been doing a huge amount of testing lately and defragging indexes that aren't based on "ever increasing" keys using a 0 or 100 FILL FACTOR does cause some major performance issues and can actually cause major blocking and even timeouts for the GUI.
I'll also state that I'm finding that percent of fragmentation is usually the wrong thing to use as an indicator of whether or not to defrag an index, especially on larger indexes, because by the time it gets to even 5% fragmentation, the largest number of page splits has already occurred. Instead, look at the percent of page fullness. For example, if you've taken the time to assign a FILL FACTOR of, say, 70 or 80, then you want to monitor page fullness. If it drops quite a bit below the FILL FACTOR, then you want to defrag to recover disk space and memory space. If it gets above 90 or 95 percent full (depending on row size, which is a whole 'nuther story and I'm working on that, as well), then you're about to start suffering from the "bad" kind of page splits and it's time to defrag back down to the FILL FACTOR of 70 or 80.
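If you want to watch page fullness, the same DMV exposes it, but only in SAMPLED or DETAILED mode (a rough sketch; dbo.YourTable is a placeholder). If avg_page_space_used_in_percent drops well below your FILL FACTOR, or creeps up past the 90-95 percent mark, that's your cue to rebuild back down to the FILL FACTOR you chose:
SELECT i.name AS index_name,
       i.fill_factor,
       ps.page_count,
       ps.avg_fragmentation_in_percent,
       ps.avg_page_space_used_in_percent -- NULL in LIMITED mode; use SAMPLED or DETAILED
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id;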
--Jeff Moden
Change is inevitable... Change for the better is not.