July 7, 2012 at 9:07 am
Hi All
A Question regarding Index Fragmentation
If I have a table with an Index that is heavily fragmented, I know that the recommended course of action is to rebuild the Index.
If, using the following DMVs:
sys.dm_db_index_operational_stats & sys.dm_db_index_usage_stats
I see no record of user_scans or range_scan_count,
is it still necessary to rebuild the Index?
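The kind of DMV check being described can be sketched like this (the table name dbo.MyTable is a placeholder; the query assumes the current database):

```sql
-- Hypothetical check: usage stats and range-scan counts for one table's indexes.
-- sys.dm_db_index_usage_stats resets on instance restart, so "no record"
-- may just mean the index hasn't been touched since the last restart.
SELECT us.index_id,
       us.user_seeks, us.user_scans, us.user_lookups,
       os.range_scan_count, os.singleton_lookup_count
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL) AS os
  ON os.object_id = us.object_id AND os.index_id = us.index_id
WHERE us.database_id = DB_ID()
  AND us.object_id = OBJECT_ID('dbo.MyTable');
```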
Thanks
July 7, 2012 at 9:58 am
Maybe.
It's not just about the effect on scans, it's also about the page density. If that's lower than desired, the index takes more space than it otherwise needs and is hence less efficient.
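Page density can be checked directly, for example with something like this (dbo.MyTable is a placeholder; note that avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode, not LIMITED):

```sql
-- Hypothetical page-density check for one table's indexes.
SELECT index_id,
       avg_fragmentation_in_percent,   -- logical fragmentation
       avg_page_space_used_in_percent, -- page density
       page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'SAMPLED');
```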
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 7, 2012 at 10:28 am
GilaMonster (7/7/2012)
Maybe. It's not just about the effect on scans, it's also about the page density. If that's lower than desired, the index takes more space than it otherwise needs and is hence less efficient.
Is this Internal Fragmentation?
July 7, 2012 at 11:20 am
It's sometimes called that; I personally hate the term because it's ambiguous.
It's low average page density.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 7, 2012 at 11:40 am
GilaMonster (7/7/2012)
It's sometimes called that; I personally hate the term because it's ambiguous. It's low average page density.
Thanks
July 7, 2012 at 11:47 am
Personally, I think people consider just the avg fragmentation counter to decide whether to rebuild indexes or not, and using that counter alone is a bad idea. It mainly matters when you read the data from disk. More important, in my personal experience, is avg page space used in percent (Gail mentioned it as page density). This impacts all the resources: disk reads, logical reads, CPU, as well as memory. So you should look at this parameter as well; ignoring it is not a good idea.
E.g. you have a table with 10,000 pages and the avg page space used is 70%. The same data could then be stored in just 7,000 pages. This will save you logical and disk reads. Also, most importantly, it will use less space in your cache: just 7,000 pages instead of 10,000. That is a saving of 3,000 pages which could be used for other purposes.
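That arithmetic can be read straight off the physical-stats DMV; a sketch (dbo.MyTable is a placeholder, and the estimate assumes pages could be packed to ~100% full):

```sql
-- Rough estimate of pages a rebuild could reclaim per index:
-- pages currently used minus pages needed at full density.
SELECT index_id,
       page_count,
       avg_page_space_used_in_percent,
       page_count - CEILING(page_count * avg_page_space_used_in_percent / 100.0)
           AS reclaimable_pages
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'SAMPLED');
```

With 10,000 pages at 70% density this gives 10,000 - 7,000 = 3,000 reclaimable pages, matching the example above.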
For more details please read the following blog.
http://gullimeelsqlsybase.wordpress.com/2012/07/05/index-fragmentation-in-sql-server-2008/
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
July 7, 2012 at 11:58 am
GilaMonster (7/7/2012)
It's sometimes called that; I personally hate the term because it's ambiguous. It's low average page density.
If I find that my Index does have a Low Average Page Density,
Can I avoid this by using a Higher Fill Factor when/if I rebuild?
Thanks
July 7, 2012 at 12:08 pm
If I find that my Index does have a Low Average Page Density,
Can I avoid this by using a Higher Fill Factor when/if I rebuild?
Are you using a fill factor for your indexes, and if yes, what is the current fill factor value? A high fill factor value may cause more page splitting and thus low page density.
Low page density is caused mainly by inserts, deletes and updates. If you have frequent DMLs you will have low page density (though it depends on the frequency and type of the DMLs), either through page splits or through deletion of data on existing pages. Updates can also cause the row size to increase or decrease.
To fix this, just reorganize the indexes. That will take care of page space, making each page as full as possible. A rebuild will of course also do this, and at the same time it will remove the logical fragmentation.
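The two options look like this (index and table names are placeholders):

```sql
-- REORGANIZE: online, compacts leaf pages up to the index's fill factor,
-- fixing low page density with minimal locking.
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- REBUILD: recreates the index, fixing both page density and
-- logical fragmentation in one operation.
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
```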
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
July 7, 2012 at 12:11 pm
SQLSACT (7/7/2012)
If I find that my Index does have a Low Average Page Density,Can I avoid this by using a Higher Fill Factor when/if I rebuild?
Probably not (unless your fill factor is set to something stupidly low, like 50%).
Low page density comes from page splits, the same thing that causes logical fragmentation, or from lots of deletes. You rebuild to fix it and may consider lowering the fill factor slightly (80 or 90 percent instead of 100/0 which is the default) if it is problematic and you can't rebuild often enough.
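For example, a rebuild that leaves roughly 10% free space per leaf page to absorb inserts between maintenance windows (index and table names are placeholders):

```sql
-- Hypothetical rebuild with a slightly lowered fill factor.
-- FILLFACTOR = 90 leaves ~10% free per leaf page; 0 or 100 means pack pages full.
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (FILLFACTOR = 90);
```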
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability