Index 66% fragmented - can't reduce fragmentation by rebuild...?

  • I have an index that is 66.67% fragmented. I've tried rebuilding it, but the fragmentation level remains the same:

    TABLE level scan performed.

    - Pages Scanned................................: 3

    - Extents Scanned..............................: 3

    - Extent Switches..............................: 2

    - Avg. Pages per Extent........................: 1.0

    - Scan Density [Best Count:Actual Count].......: 33.33% [1:3]

    - Logical Scan Fragmentation ..................: 66.67%

    - Extent Scan Fragmentation ...................: 66.67%

    - Avg. Bytes Free per Page.....................: 363.0

    - Avg. Page Density (full).....................: 95.52%

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    I'm assuming it's because the index is so small? There are only ~360 rows in that table, so perhaps that's just how SQL Server had to create the index?

    Anyway, I'm new to the world of index performance improvements, and performance tuning in general, so I'm just looking for some good practices / common sense, all that fun stuff.

    Thanks!

  • That table only has 3 pages in it. Don't worry about the fragmentation % for tables with fewer than 100 or so pages in them.
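    One way to put this advice into practice is to filter fragmentation checks by page count. The sketch below uses sys.dm_db_index_physical_stats (SQL 2005 and later); the 100-page and 30% thresholds are common rules of thumb, not hard limits.

    ```sql
    -- Sketch: list indexes worth defragmenting, skipping small ones.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.page_count,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
    WHERE ips.page_count >= 100                      -- ignore tiny indexes
      AND ips.avg_fragmentation_in_percent > 30;     -- rebuild candidates
    ```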

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Would you mind explaining why it is fragmented?

    Does it have to do with how the index is built and because it is such a small index it just appears fragmented due to the way the calculation is done?

    Is it REALLY fragmented, but again, because it has so few pages, the system just doesn't bother with doing anything about it?

    Just wondering why is all.

    Thanks!

  • You asked...

    The first 8 pages that are allocated to a table come from mixed extents (an extent = 8 consecutive pages). Once 8 pages have been allocated, SQL will start allocating full extents to the table when it needs more space.

    Since that table only has 3 pages, they are taken from any extents that have a free page in them, and hence are unlikely to be consecutive. With tables that small it's a space optimisation, and the fact that the pages are not in order makes little difference at that scale.

    Having the pages of an index consecutive matters for large scans, where lots of the pages of the table are needed at once. It makes those scans faster because the disk head will be in the right position for the reads and won't have to jump all over the drive to find the pages.

    The index is fragmented; it's not that the calculation is wrong. The calculation for fragmentation is the % of pages that are out of order. An out-of-order page is one that's higher in index key order but has a lower page ID.

    So, if an index had (in index key order) the pages 15, 18, 36, 22, 24, then page 22 would be considered out of order (and is the only page so considered), and the logical fragmentation of that index would be 1/5 = 20%.
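    That rule can be checked mechanically. This hypothetical sketch (multi-row VALUES needs SQL 2008 or later) counts pages whose page ID is lower than the previous page's in key order:

    ```sql
    -- Pages of a hypothetical index, listed in index key order.
    DECLARE @pages TABLE (key_order int, page_id int);
    INSERT INTO @pages VALUES (1, 15), (2, 18), (3, 36), (4, 22), (5, 24);

    -- Out-of-order pages (page ID lower than the previous page's)
    -- as a percentage of all pages.
    SELECT 100.0
         * SUM(CASE WHEN p.page_id < prev.page_id THEN 1 ELSE 0 END)
         / (SELECT COUNT(*) FROM @pages) AS logical_fragmentation_pct
    FROM @pages AS p
    JOIN @pages AS prev
      ON prev.key_order = p.key_order - 1;
    -- Only page 22 (which follows 36) qualifies: 1 of 5 pages = 20.0
    ```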

    Make any sense?

    Gail Shaw
  • That does make sense.

    Small indexes don't warrant the extra work of allocating space just for them, so if an index is fewer than 8 pages SQL Server just looks around, says "Ah, here's some space", and stuffs the pages wherever.

    I don't know what Extents are though so reading up on that....

    So Extents are just 8 pages that are ordered together / contiguous on the drive?

    It says that in order to work with pages more efficiently it stores them in groups of 8 called Extents.

    SO... I'm assuming that means data stored across MANY pages would be broken down into (pages ÷ 8) extents, ideally stored in sequence on the drive to optimize read/write operations?

    In this case, it didn't need all 8 pages of an Extent so this index didn't get its own Extent?

    SQL Server just grabbed some partially used Extents (maybe it had 1 page, maybe it had 7) and tossed the 3 pages into one or more Extents as it wanted?

    Do I understand that correctly?

  • Pretty much.

    An extent is just that: 8 consecutive pages in the data file. After a table grows to a certain size (I think it's 24 pages in SQL 2005), all allocations to that table are full extents.

    Your specific index currently has 3 pages, each in a different extent, as evidenced by these lines:

    - Extents Scanned..............................: 3

    - Extent Switches..............................: 2

    - Avg. Pages per Extent........................: 1.0
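    If you want to see this for yourself, the undocumented (but widely used) DBCC IND command lists the pages belonging to an index. The database and table names below are placeholders; run it on a test system.

    ```sql
    -- Sketch using the undocumented DBCC IND command.
    -- Arguments: database, table, index id (1 = clustered index,
    -- or take the id from sys.indexes).
    DBCC IND ('MyDatabase', 'MyTable', 1);

    -- In the output, PagePID is the page number within the data file;
    -- PagePID / 8 (integer division) identifies the extent the page
    -- belongs to. Three pages in three different extents will show
    -- three different values.
    ```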

    Gail Shaw
  • Thank you for your help, this has been most informative!

    I'm never going to know it all, but it never hurts to know a bit more here and there!
