October 30, 2008 at 4:06 am
Hi,
I checked one of my tables for an index rebuild.
I ran DBCC SHOWCONTIG(TableName)
Before
- Logical Scan Fragmentation ..................: 75.00%
- Extent Scan Fragmentation ...................: 75.00%
Then I ran DBCC DBREINDEX(TableName); I didn't pass a fill factor because the fill factor is 0.
I got the following result -
TABLE level scan performed.
- Pages Scanned................................: 6
- Extents Scanned..............................: 4
- Extent Switches..............................: 3
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 25.00% [1:4]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 75.00%
- Avg. Bytes Free per Page.....................: 727.5
- Avg. Page Density (full).....................: 91.01%
** I am happy with the 'Logical Scan' value, but why is 'Extent Scan' still at 75%? What is Extent Scan fragmentation?
Thanks.
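For reference, the sequence described above can be run like this (TableName stands in for the actual table; the empty index name tells DBREINDEX to rebuild all indexes on the table, and 0 keeps each index's existing fill factor):

```sql
-- Check fragmentation before the rebuild (SQL Server 2000 syntax)
DBCC SHOWCONTIG ('TableName');

-- Rebuild all indexes on the table; '' = all indexes, 0 = keep original fill factor
DBCC DBREINDEX ('TableName', '', 0);

-- Check fragmentation again
DBCC SHOWCONTIG ('TableName');
```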
October 30, 2008 at 8:14 am
Books Online
http://msdn.microsoft.com/en-us/library/aa258803.aspx
Understanding Logical Scan Fragmentation and Extent Scan Fragmentation values.
Logical Scan Fragmentation and, to a lesser extent, Extent Scan Fragmentation values give the best indication of a table's fragmentation level. Both these values should be as close to zero as possible (although a value from 0% through 10% may be acceptable). It should be noted that the Extent Scan Fragmentation value will be high if the index spans multiple files. Both methods of reducing fragmentation can be used to reduce these values.
From our own SSC
http://www.sqlservercentral.com/articles/Administering/fragmentation101/1346/
External fragmentation
When tables grow as new data is inserted into them, the ideal situation is that pages get allocated to an extent, then when that extent is filled up a new extent is allocated, then filled up with pages, and so on. This way all the pages will always be located adjacent to each other, and reading eight pages will in a best-case scenario only require reading one extent from disk.
Reality, however, is seldom ideal. New rows, when inserted, are bound to cause page splits sooner or later, as will updates that cause variable-sized rows to grow too large to fit on the original page. When a new page is created from a page split it will be allocated in the same extent as the original page if there is room for it. If the extent is already full, which will normally be the case in a production system, a new extent will be allocated to the index and the new page will be placed there. The new page will not be contiguous to the original page, so when following the page chain using the previous and next pointers in the page headers SQL Server will need to perform an extra extent switch. In a worst-case scenario, reading eight pages in order might require seven extent switches, and eight extents must be read from disk.
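One common way to delay those page splits is to rebuild with a fill factor below 100, which leaves free space on each page for future inserts and row growth. A sketch (the value 90 is purely illustrative; the right number depends on the table's update pattern):

```sql
-- Leave ~10% free space on each leaf page to absorb inserts/updates
-- before a page split becomes necessary. '' = rebuild all indexes.
DBCC DBREINDEX ('TableName', '', 90);
```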
Note: The value for Extent Scan Fragmentation is not really important. This is because the algorithm that calculates it does not work for indexes that span over multiple files. According to some Microsoft sources, in the next version of SQL Server, DBCC SHOWCONTIG will not even include this metric.
so don't worry too much about it
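For what it's worth, the replacement in SQL Server 2005 and later is the sys.dm_db_index_physical_stats DMV, which reports avg_fragmentation_in_percent (logical fragmentation) and has no extent-scan metric at all. Something like this (TableName again stands in for the actual table):

```sql
-- DETAILED mode scans all pages; avg_page_space_used_in_percent
-- corresponds roughly to SHOWCONTIG's Avg. Page Density
SELECT index_id, index_type_desc,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('TableName'), NULL, NULL, 'DETAILED');
```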
October 30, 2008 at 4:26 pm
Thanks Jerry,
That explains it.